WorldWideScience

Sample records for reliable velocity model

  1. The Reliability of Individualized Load-Velocity Profiles.

    Science.gov (United States)

    Banyard, Harry G; Nosaka, K; Vernon, Alex D; Haff, G Gregory

    2017-11-15

    This study examined the reliability of peak velocity (PV), mean propulsive velocity (MPV), and mean velocity (MV) in the development of load-velocity profiles (LVP) in the full-depth free-weight back squat performed with maximal concentric effort. Eighteen resistance-trained men performed a baseline one-repetition maximum (1RM) back squat trial and three subsequent 1RM trials used for reliability analyses, with a 48-hour interval between trials. 1RM trials comprised lifts from six relative loads: 20, 40, 60, 80, 90, and 100% 1RM. Individualized LVPs for PV, MPV, or MV were derived from loads that were highly reliable based on the following criteria: intra-class correlation coefficient (ICC) >0.70, coefficient of variation (CV) ≤10%, and an acceptably small Cohen's d effect size (ES). No significant differences (p > 0.05) were found between trials, movement velocities, or between linear regression and second-order polynomial fits. PV (20-100% 1RM), MPV (20-90% 1RM), and MV (20-90% 1RM) were reliable and can be utilized to develop LVPs using linear regression. Conceptually, LVPs can be used to monitor changes in movement velocity and employed as a method for adjusting sessional training loads according to daily readiness.
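
    The profile-fitting procedure described above can be sketched with ordinary least-squares linear regression. The loads and velocities below are hypothetical illustration values, not the study's data:

```python
# Illustrative sketch (not the authors' code) of an individualized
# load-velocity profile (LVP) fitted by first-order linear regression.
import numpy as np

loads_pct_1rm = np.array([20, 40, 60, 80, 90, 100])             # % of 1RM
mean_velocity = np.array([1.30, 1.05, 0.82, 0.55, 0.40, 0.28])  # m/s, hypothetical

# Linear fit: velocity = slope * load + intercept (np.polyfit returns
# coefficients highest degree first).
slope, intercept = np.polyfit(loads_pct_1rm, mean_velocity, 1)

def predicted_velocity(load_pct):
    """Velocity expected at a given %1RM from the individual profile."""
    return slope * load_pct + intercept

def load_for_velocity(v):
    """Invert the profile: the %1RM expected to produce velocity v."""
    return (v - intercept) / slope
```

    Inverting the fitted line, as in `load_for_velocity`, is what allows sessional loads to be adjusted from a velocity target.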

  2. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    Science.gov (United States)

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model could be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.

  3. Reliability of force-velocity relationships during deadlift high pull.

    Science.gov (United States)

    Lu, Wei; Boyas, Sébastien; Jubeau, Marc; Rahmani, Abderrahmane

    2017-11-13

    This study aimed to evaluate the within- and between-session reliability of force, velocity and power performances and to assess the force-velocity relationship during the deadlift high pull (DHP). Nine participants performed two identical sessions of DHP with loads ranging from 30 to 70% of body mass. The force was measured by a force plate under the participants' feet. The velocity of the 'body + lifted mass' system was calculated by integrating the acceleration, and the power was calculated as the product of force and velocity. The force-velocity relationships were obtained from linear regressions of both mean and peak values of force and velocity. The within- and between-session reliability was evaluated by using coefficients of variation (CV) and intraclass correlation coefficients (ICC). Results showed that DHP force-velocity relationships were significantly linear (R² > 0.90). Reliability was high (ICC > 0.94), and mean and peak velocities showed good agreement (acceptably low CV). The DHP force-velocity relationship is therefore reliable and can be utilised as a tool to characterise individuals' muscular profiles.

  4. Mean Velocity vs. Mean Propulsive Velocity vs. Peak Velocity: Which Variable Determines Bench Press Relative Load With Higher Reliability?

    Science.gov (United States)

    García-Ramos, Amador; Pestaña-Melero, Francisco L; Pérez-Castilla, Alejandro; Rojas, Francisco J; Gregory Haff, G

    2018-05-01

    García-Ramos, A, Pestaña-Melero, FL, Pérez-Castilla, A, Rojas, FJ, and Haff, GG. Mean velocity vs. mean propulsive velocity vs. peak velocity: which variable determines bench press relative load with higher reliability? J Strength Cond Res 32(5): 1273-1279, 2018-This study aimed to compare between 3 velocity variables (mean velocity [MV], mean propulsive velocity [MPV], and peak velocity [PV]): (a) the linearity of the load-velocity relationship, (b) the accuracy of general regression equations to predict relative load (%1RM), and (c) the between-session reliability of the velocity attained at each percentage of the 1-repetition maximum (%1RM). The full load-velocity relationship of 30 men was evaluated by means of linear regression models in the concentric-only and eccentric-concentric bench press throw (BPT) variants performed with a Smith machine. The 2 sessions of each BPT variant were performed within the same week separated by 48-72 hours. The main findings were as follows: (a) the MV showed the strongest linearity of the load-velocity relationship (median r = 0.989 for concentric-only BPT and 0.993 for eccentric-concentric BPT), followed by MPV (median r = 0.983 for concentric-only BPT and 0.980 for eccentric-concentric BPT), and finally PV (median r = 0.974 for concentric-only BPT and 0.969 for eccentric-concentric BPT); (b) the accuracy of the general regression equations to predict relative load (%1RM) from movement velocity was higher for MV (SEE = 3.80-4.76%1RM) than for MPV (SEE = 4.91-5.56%1RM) and PV (SEE = 5.36-5.77%1RM); and (c) the PV showed the lowest within-subjects coefficient of variation (3.50%-3.87%), followed by MV (4.05%-4.93%), and finally MPV (5.11%-6.03%). Taken together, these results suggest that the MV could be the most appropriate variable for monitoring the relative load (%1RM) in the BPT exercise performed in a Smith machine.

  5. Test-retest reliability of barbell velocity during the free-weight bench-press exercise.

    Science.gov (United States)

    Stock, Matt S; Beck, Travis W; DeFreitas, Jason M; Dillon, Michael A

    2011-01-01

    The purpose of this study was to calculate test-retest reliability statistics for peak barbell velocity during the free-weight bench-press exercise for loads corresponding to 10-90% of the 1-repetition maximum (1RM). Twenty-one healthy, resistance-trained men (mean ± SD age = 23.5 ± 2.7 years; body mass = 90.5 ± 14.6 kg; 1RM bench press = 125.4 ± 18.4 kg) volunteered for this study. A minimum of 48 hours after a maximal strength testing and familiarization session, the subjects performed single repetitions of the free-weight bench-press exercise at each tenth percentile (10-90%) of the 1RM on 2 separate occasions. For each repetition, the subjects were instructed to press the barbell as rapidly as possible, and peak barbell velocity was measured with a Tendo Weightlifting Analyzer. The test-retest intraclass correlation coefficients (model 2,1) and corresponding standard errors of measurement (expressed as percentages of the mean barbell velocity values) were 0.717 (4.2%), 0.572 (5.0%), 0.805 (3.1%), 0.669 (4.7%), 0.790 (4.6%), 0.785 (4.8%), 0.811 (5.8%), 0.714 (10.3%), and 0.594 (12.6%) for the weights corresponding to 10-90% 1RM. There were no mean differences between the barbell velocity values from trials 1 and 2. These results indicated moderate to high test-retest reliability for barbell velocity from 10 to 70% 1RM but decreased consistency at 80 and 90% 1RM. When examining barbell velocity during the free-weight bench-press exercise, greater measurement error must be overcome at 80 and 90% 1RM to be confident that an observed change is meaningful.
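
    The test-retest statistics reported above (ICC model 2,1 and the standard error of measurement expressed as a percentage of the mean) can be computed as follows. This is a generic sketch of the standard Shrout-Fleiss formulas, not the authors' analysis code:

```python
# Generic ICC(2,1) and SEM% computation from a subjects-by-trials table.
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    data: (n_subjects, k_trials) array-like."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between trials
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_pct(data):
    """SEM as a percentage of the grand mean, SEM = SD * sqrt(1 - ICC)."""
    data = np.asarray(data, dtype=float)
    sd = data.std(ddof=1)
    return 100 * sd * np.sqrt(1 - icc_2_1(data)) / data.mean()
```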

  6. Reliability of power and velocity variables collected during the traditional and ballistic bench press exercise.

    Science.gov (United States)

    García-Ramos, Amador; Haff, G Gregory; Padial, Paulino; Feriche, Belén

    2018-03-01

    This study aimed to examine the reliability of different power and velocity variables during the Smith machine bench press (BP) and bench press throw (BPT) exercises. Twenty-two healthy men conducted four testing sessions after a preliminary BP one-repetition maximum (1RM) test. In a counterbalanced order, participants performed two sessions of BP in one week and two sessions of BPT in another week. Mean propulsive power, peak power, mean propulsive velocity, and peak velocity at each tenth percentile (20-70% of 1RM) were recorded by a linear transducer. The within-participants coefficient of variation (CV) was higher for the load-power relationship compared to the load-velocity relationship in both the BP (5.3% vs. 4.1%; CV ratio = 1.29) and BPT (4.7% vs. 3.4%; CV ratio = 1.38). Mean propulsive variables showed lower reliability than peak variables in both the BP (5.4% vs. 4.0%, CV ratio = 1.35) and BPT (4.8% vs. 3.3%, CV ratio = 1.45). All variables were deemed reliable, with the peak velocity demonstrating the lowest within-participants CV. Based upon these findings, the peak velocity should be chosen for the accurate assessment of BP and BPT performance.

  7. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions.

    Science.gov (United States)

    Randell, Aaron D; Cronin, John B; Keogh, Justin Wl; Gill, Nicholas D; Pedersen, Murray C

    2011-12-01

    Randell, AD, Cronin, JB, Keogh, JWL, Gill, ND, and Pedersen, MC. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions. J Strength Cond Res 25(12): 3514-3518, 2011-Advancements in the monitoring of kinematic and kinetic variables during resistance training have resulted in the ability to continuously monitor performance and provide feedback during training. If equipment and software can provide reliable instantaneous feedback related to the variable of interest during training, it is thought that this may result in goal-oriented movement tasks that increase the likelihood of transference to on-field performance or at the very least improve the mechanical variable of interest. The purpose of this study was to determine the reliability of performance velocity for jump squats under feedback and nonfeedback conditions over 3 consecutive training sessions. Twenty subjects were randomly allocated to a feedback or nonfeedback group, and each group performed a total of 3 "jump squat" training sessions with the velocity of each repetition measured using a linear position transducer. There was less change in mean velocities between sessions 1-2 and sessions 2-3 (0.07 and 0.02 vs. 0.13 and -0.04 m·s⁻¹), less random variation (TE = 0.06 and 0.06 vs. 0.10 and 0.07 m·s⁻¹) and greater consistency (intraclass correlation coefficient = 0.83 and 0.87 vs. 0.53 and 0.74) between sessions for the feedback condition as compared to the nonfeedback condition. It was concluded that there is approximately a 50-50 probability that the provision of feedback was beneficial to the performance in the squat jump over multiple sessions. It is suggested that this has the potential for increasing transference to on-field performance or at the very least improving the mechanical variable of interest.

  8. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    Science.gov (United States)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. 
These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications.

  9. An investigation of FLUENT's fan model including the effect of swirl velocity

    International Nuclear Information System (INIS)

    El Saheli, A.; Barron, R.M.

    2002-01-01

    The purpose of this paper is to investigate and discuss the reliability of simplified models for the computational fluid dynamics (CFD) simulation of air flow through automotive engine cooling fans. One of the most widely used simplified fan models in industry is a variant of the actuator disk model which is available in most commercial CFD software, such as FLUENT. In this model, the fan is replaced by an infinitely thin surface on which the pressure rise across the fan is specified as a polynomial function of normal velocity or flow rate. The advantages of this model are that it is simple, it accurately predicts the pressure rise through the fan and the axial velocity, and it is robust.

  10. Validity and Reliability of the PUSH Wearable Device to Measure Movement Velocity During the Back Squat Exercise.

    Science.gov (United States)

    Balsalobre-Fernández, Carlos; Kuzdub, Matt; Poveda-Ortiz, Pedro; Campo-Vecino, Juan Del

    2016-07-01

    Balsalobre-Fernández, C, Kuzdub, M, Poveda-Ortiz, P, and Campo-Vecino, Jd. Validity and reliability of the PUSH wearable device to measure movement velocity during the back squat exercise. J Strength Cond Res 30(7): 1968-1974, 2016-The purpose of this study was to analyze the validity and reliability of a wearable device to measure movement velocity during the back squat exercise. To do this, 10 recreationally active healthy men (age = 23.4 ± 5.2 years; back squat 1 repetition maximum [1RM] = 83 ± 8.2 kg) performed 3 repetitions of the back squat exercise with 5 different loads ranging from 25 to 85% 1RM on a Smith machine. Movement velocity for each of the total 150 repetitions was simultaneously recorded using the T-Force linear transducer (LT) and the PUSH wearable band. Results showed a high correlation between the LT and the wearable device for mean (r = 0.85; standard error of estimate [SEE] = 0.08 m·s⁻¹) and peak velocity (r = 0.91; SEE = 0.1 m·s⁻¹). Moreover, there was a very high agreement between these 2 devices for the measurement of mean (intraclass correlation coefficient [ICC] = 0.907) and peak velocity (ICC = 0.944), although a systematic bias between devices was observed (PUSH peak velocity being -0.07 ± 0.1 m·s⁻¹ lower, p ≤ 0.05). When measuring the 3 repetitions with each load, both devices displayed almost equal reliability (Test-retest reliability: LT [r = 0.98], PUSH [r = 0.956]; ICC: LT [ICC = 0.989], PUSH [ICC = 0.981]; coefficient of variation [CV]: LT [CV = 4.2%], PUSH [CV = 5.0%]). Finally, individual load-velocity relationships measured with both the LT (R = 0.96) and the PUSH wearable device (R = 0.94) showed similar, very high coefficients of determination. In conclusion, these results support the use of an affordable wearable device to track velocity during back squat training. Wearable devices, such as the one in this study, could have valuable practical applications for strength and conditioning coaches.

  11. Reliability and Validity of the Load-Velocity Relationship to Predict the 1RM Back Squat.

    Science.gov (United States)

    Banyard, Harry G; Nosaka, Kazunori; Haff, G Gregory

    2017-07-01

    Banyard, HG, Nosaka, K, and Haff, GG. Reliability and validity of the load-velocity relationship to predict the 1RM back squat. J Strength Cond Res 31(7): 1897-1904, 2017-This study investigated the reliability and validity of the load-velocity relationship to predict the free-weight back squat one repetition maximum (1RM). Seventeen strength-trained males performed three 1RM assessments on 3 separate days. All repetitions were performed to full depth with maximal concentric effort. Predicted 1RMs were calculated by entering the mean concentric velocity of the 1RM (V1RM) into an individualized linear regression equation, which was derived from the load-velocity relationship of 3 (20, 40, 60% of 1RM), 4 (20, 40, 60, 80% of 1RM), or 5 (20, 40, 60, 80, 90% of 1RM) incremental warm-up sets. The actual 1RM (140.3 ± 27.2 kg) was very stable between 3 trials (ICC = 0.99; SEM = 2.9 kg; CV = 2.1%; ES = 0.11). Predicted 1RM from 5 warm-up sets up to and including 90% of 1RM was the most reliable (ICC = 0.92; SEM = 8.6 kg; CV = 5.7%; ES = -0.02) and valid (r = 0.93; SEE = 10.6 kg; CV = 7.4%; ES = 0.71) of the predicted 1RM methods. However, all predicted 1RMs were significantly different (p ≤ 0.05; ES = 0.71-1.04) from the actual 1RM. Individual variation for the actual 1RM was small between trials ranging from -5.6 to 4.8% compared with the most accurate predictive method up to 90% of 1RM, which was more variable (-5.5 to 27.8%). Importantly, the V1RM (0.24 ± 0.06 m·s⁻¹) was unreliable between trials (ICC = 0.42; SEM = 0.05 m·s⁻¹; CV = 22.5%; ES = 0.14). The load-velocity relationship for the full depth free-weight back squat showed moderate reliability and validity but could not accurately predict 1RM, which was stable between trials. Thus, the load-velocity relationship 1RM prediction method used in this study cannot accurately modify sessional training loads because of large V1RM variability.
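
    The prediction scheme described above (entering V1RM into an individualized load-velocity regression built from warm-up sets) can be sketched as follows. All numeric values are hypothetical illustrations, with V1RM set to the study's reported group mean of 0.24 m·s⁻¹:

```python
# Hedged sketch of 1RM prediction from an individualized load-velocity
# regression: fit load (kg) against mean concentric velocity from warm-up
# sets, then substitute the velocity recorded at 1RM (V1RM).
import numpy as np

warmup_loads_kg = np.array([28.0, 56.0, 84.0, 112.0, 126.0])  # 20-90% of an assumed 140 kg 1RM
warmup_velocity = np.array([1.25, 1.00, 0.75, 0.50, 0.37])    # m/s, hypothetical

# Regress load on velocity so the fit can be evaluated directly at V1RM.
slope, intercept = np.polyfit(warmup_velocity, warmup_loads_kg, 1)

v1rm = 0.24                                # m/s, group mean reported above
predicted_1rm = slope * v1rm + intercept   # predicted 1RM load in kg
```

    The study's caveat applies directly to this sketch: because V1RM itself was unreliable between trials, the extrapolated intercept inherits that variability.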

  12. Evaluation of force-velocity and power-velocity relationship of arm muscles.

    Science.gov (United States)

    Sreckovic, Sreten; Cuk, Ivan; Djuric, Sasa; Nedeljkovic, Aleksandar; Mirkov, Dragan; Jaric, Slobodan

    2015-08-01

    A number of recent studies have revealed an approximately linear force-velocity (F-V) and, consequently, a parabolic power-velocity (P-V) relationship of multi-joint tasks. However, the measurement characteristics of their parameters have been neglected, particularly those regarding arm muscles, which could be a problem for using the linear F-V model in both research and routine testing. Therefore, the aims of the present study were to evaluate the strength, shape, reliability, and concurrent validity of the F-V relationship of arm muscles. Twelve healthy participants performed maximum bench press throws against loads ranging from 20 to 70 % of their maximum strength, and a linear regression model was applied to the obtained range of F and V data. One-repetition maximum bench press and medicine ball throw tests were also conducted. The observed individual F-V relationships were exceptionally strong (r = 0.96-0.99; all significant), and the relationships averaged across participants were even stronger. The reliability of parameters obtained from the linear F-V regressions proved to be mainly high (ICC > 0.80), while their concurrent validity regarding directly measured F, P, and V ranged from high (for maximum F) to medium-to-low (for maximum P and V). The findings add to the evidence that the linear F-V and, consequently, parabolic P-V models could be used to study the mechanical properties of muscular systems, as well as to design a relatively simple, reliable, and ecologically valid routine test of the muscle ability of force, power, and velocity production.
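
    A minimal numeric illustration of the linear F-V and parabolic P-V models discussed above, using made-up force and velocity values: with F(V) = F0 - a·V, the power P(V) = F(V)·V is parabolic with apex Pmax = F0·V0/4 reached at V0/2:

```python
# Hypothetical-data sketch of linear F-V regression and the derived
# parameters F0 (force intercept), V0 (velocity intercept), and Pmax.
import numpy as np

velocity = np.array([0.8, 1.2, 1.6, 2.0])       # m/s, example data
force = np.array([520.0, 420.0, 320.0, 220.0])  # N,   example data

neg_a, f0 = np.polyfit(velocity, force, 1)      # fitted slope is -a
a = -neg_a
v0 = f0 / a               # velocity at which F(V) = 0
p_max = f0 * v0 / 4.0     # apex of the parabolic P-V curve, at V = v0 / 2
```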

  13. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF mathematical properties, so the two functions together makes up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process, a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless it is foreseen model applications to inspection and maintenance of physical systems. The paper includes a complete numerical example of the model application to a software reliability analysis

  14. Uncertainty assessment of 3D instantaneous velocity model from stack velocities

    Science.gov (United States)

    Emanuele Maesano, Francesco; D'Ambrogi, Chiara

    2015-04-01

    3D modelling is a powerful tool that is experiencing increasing applications in data analysis and dissemination. At the same time the need of quantitative uncertainty evaluation is strongly requested in many aspects of the geological sciences and by the stakeholders. In many cases the starting point for 3D model building is the interpretation of seismic profiles that provide indirect information about the geology of the subsurface in the domain of time. The most problematic step in the 3D modelling construction is the conversion of the horizons and faults interpreted in time domain to the depth domain. In this step the dominant variable that could lead to significantly different results is the velocity. The knowledge of the subsurface velocities is related mainly to punctual data (sonic logs) that are often sparsely distributed in the areas covered by the seismic interpretation. The extrapolation of velocity information to wide extended horizons is thus a critical step to obtain a 3D model in depth that can be used for predictive purpose. In the EU-funded GeoMol Project, the availability of a dense network of seismic lines (confidentially provided by ENI S.p.A.) in the Central Po Plain, is paired with the presence of 136 well logs, but few of them have sonic logs and in some portion of the area the wells are very widely spaced. The depth conversion of the 3D model in time domain has been performed testing different strategies for the use and the interpolation of velocity data. The final model has been obtained using a 4 layer cake 3D instantaneous velocity model that considers both the initial velocity (v0) in every reference horizon and the gradient of velocity variation with depth (k). Using this method it is possible to consider the geological constraint given by the geometries of the horizons and the geo-statistical approach to the interpolation of velocities and gradient. 
Here we present an experiment based on the use of a set of pseudo-wells obtained from the …
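
    The instantaneous velocity law referred to above, v(z) = v0 + k·z, yields a closed-form one-way time-to-depth conversion within a layer. The following is a generic single-layer sketch, not the GeoMol workflow:

```python
# Generic depth conversion under the instantaneous velocity law
# v(z) = v0 + k*z. Integrating dz/dt = v0 + k*z over one-way time t gives
#   z(t) = (v0 / k) * (exp(k * t) - 1)
import math

def depth_from_one_way_time(t, v0, k):
    """Depth (m) reached after one-way travel time t (s), given the
    reference-horizon velocity v0 (m/s) and vertical gradient k (1/s)."""
    if abs(k) < 1e-12:                   # constant-velocity limit v(z) = v0
        return v0 * t
    return (v0 / k) * (math.exp(k * t) - 1.0)
```

    A layer-cake model, as described above, applies this per layer, resetting v0 at every reference horizon.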

  15. Assessment of isometric muscle strength and rate of torque development with hand-held dynamometry: Test-retest reliability and relationship with gait velocity after stroke.

    Science.gov (United States)

    Mentiplay, Benjamin F; Tan, Dawn; Williams, Gavin; Adair, Brooke; Pua, Yong-Hao; Bower, Kelly J; Clark, Ross A

    2018-04-27

    Isometric rate of torque development examines how quickly force can be exerted and may resemble everyday task demands more closely than isometric strength. Rate of torque development may provide further insight into the relationship between muscle function and gait following stroke. Aims of this study were to examine the test-retest reliability of hand-held dynamometry to measure isometric rate of torque development following stroke, to examine associations between strength and rate of torque development, and to compare the relationships of strength and rate of torque development to gait velocity. Sixty-three post-stroke adults participated (60 years, 34 male). Gait velocity was assessed using the fast-paced 10 m walk test. Isometric strength and rate of torque development of seven lower-limb muscle groups were assessed with hand-held dynamometry. Intraclass correlation coefficients were calculated for reliability and Spearman's rho correlations were calculated for associations. Regression analyses using partial F-tests were used to compare strength and rate of torque development in their relationship with gait velocity. Good to excellent reliability was shown for strength and rate of torque development (0.82-0.97). Strong associations were found between strength and rate of torque development (0.71-0.94). Despite high correlations between strength and rate of torque development, rate of torque development failed to provide significant value to regression models that already contained strength. Assessment of isometric rate of torque development with hand-held dynamometry is reliable following stroke, however isometric strength demonstrated greater relationships with gait velocity. Further research should examine the relationship between dynamic measures of muscle strength/torque and gait after stroke. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Modelling low velocity impact induced damage in composite laminates

    Science.gov (United States)

    Shi, Yu; Soutis, Constantinos

    2017-12-01

    The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential and recent effort by the composites research community is reviewed in this work. The FE predicted damage initiation and propagation can be validated by Non Destructive Techniques (NDT) that gives confidence to the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve performance of components and structures used in aircraft construction.

  17. Validity and Reliability of a Wearable Inertial Sensor to Measure Velocity and Power in the Back Squat and Bench Press.

    Science.gov (United States)

    Orange, Samuel T; Metcalfe, James W; Liefeith, Andreas; Marshall, Phil; Madden, Leigh A; Fewster, Connor R; Vince, Rebecca V

    2018-05-08

    Orange, ST, Metcalfe, JW, Liefeith, A, Marshall, P, Madden, LA, Fewster, CR, and Vince, RV. Validity and reliability of a wearable inertial sensor to measure velocity and power in the back squat and bench press. J Strength Cond Res XX(X): 000-000, 2018-This study examined the validity and reliability of a wearable inertial sensor to measure velocity and power in the free-weight back squat and bench press. Twenty-nine youth rugby league players (18 ± 1 years) completed 2 test-retest sessions for the back squat followed by 2 test-retest sessions for the bench press. Repetitions were performed at 20, 40, 60, 80, and 90% of 1 repetition maximum (1RM) with mean velocity, peak velocity, mean power (MP), and peak power (PP) simultaneously measured using an inertial sensor (PUSH) and a linear position transducer (GymAware PowerTool). The PUSH demonstrated good validity (Pearson's product-moment correlation coefficient [r]) and reliability (intraclass correlation coefficient [ICC]) only for measurements of MP (r = 0.91; ICC = 0.83) and PP (r = 0.90; ICC = 0.80) at 20% of 1RM in the back squat. However, it may be more appropriate for athletes to jump off the ground with this load to optimize power output. Further research should therefore evaluate the usability of inertial sensors in the jump squat exercise. In the bench press, good validity and reliability were evident only for the measurement of MP at 40% of 1RM (r = 0.89; ICC = 0.83). The PUSH was unable to provide a valid and reliable estimate of any other criterion variable in either exercise. Practitioners must be cognizant of the measurement error when using inertial sensor technology to quantify velocity and power during resistance training, particularly with loads other than 20% of 1RM in the back squat and 40% of 1RM in the bench press.

  18. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model (OVDM) for a car-following theory based on the full velocity difference model (FVDM). The linear stability condition of the new model is obtained by using the linear stability theory. The unrealistically high deceleration of the FVDM does not appear in the OVDM. Numerical simulation of traffic dynamics shows that the new model can avoid the disadvantage of the negative velocity that occurs at a small sensitivity coefficient λ in the full velocity difference model by adjusting the coefficient of the optimal velocity difference, which shows that collisions can disappear in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow have been explored. → The starting and braking processes were carried out through simulation. → Considering the optimal velocity difference can avoid the disadvantage of negative velocity.
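
    The model family discussed above can be illustrated with a single acceleration update of the full-velocity-difference type, on which the OVDM builds. The tanh-shaped optimal velocity function and all parameter values below are generic textbook choices, not the Letter's calibration:

```python
# Illustrative full-velocity-difference (FVD) style update:
#   dv_n/dt = a * (V(headway_n) - v_n) + lam * dv_rel_n
# where V(.) is the optimal velocity function and dv_rel is the velocity
# difference to the leading car. All parameters are hypothetical.
import math

def optimal_velocity(headway, v_max=2.0, h_c=4.0):
    """Bando-type optimal velocity function (tanh-shaped, 0 at zero headway
    for this choice of safety distance h_c)."""
    return (v_max / 2.0) * (math.tanh(headway - h_c) + math.tanh(h_c))

def acceleration(headway, v, dv_rel, a=1.0, lam=0.5):
    """FVD-type acceleration: relax toward V(headway), corrected by the
    relative velocity to the leader."""
    return a * (optimal_velocity(headway) - v) + lam * dv_rel
```

    The OVDM of the Letter replaces the relative-velocity term with an optimal velocity difference, which is the modification credited with removing unrealistically high decelerations.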

  19. Cardiac magnetic resonance: is phonocardiogram gating reliable in velocity-encoded phase contrast imaging?

    International Nuclear Information System (INIS)

    Nassenstein, Kai; Schlosser, Thomas; Orzada, Stephan; Ladd, Mark E.; Maderwald, Stefan; Haering, Lars; Czylwik, Andreas; Jensen, Christoph; Bruder, Oliver

    2012-01-01

    To assess the diagnostic accuracy of phonocardiogram (PCG) gated velocity-encoded phase contrast magnetic resonance imaging (MRI). Flow quantification above the aortic valve was performed in 68 patients by acquiring a retrospectively PCG- and a retrospectively ECG-gated velocity-encoded GE-sequence at 1.5 T. Peak velocity (PV), average velocity (AV), forward volume (FV), reverse volume (RV), net forward volume (NFV), as well as the regurgitant fraction (RF) were assessed for both datasets, as well as for the PCG-gated datasets after compensation for the PCG trigger delay. PCG-gated image acquisition was feasible in 64 patients, ECG-gated in all patients. PCG-gated flow quantification overestimated PV (Δ 3.8 ± 14.1 cm/s; P = 0.037) and underestimated FV (Δ -4.9 ± 15.7 ml; P = 0.015) and NFV (Δ -4.5 ± 16.5 ml; P = 0.033) compared with ECG-gated imaging. After compensation for the PCG trigger delay, differences were only observed for PV (Δ 3.8 ± 14.1 cm/s; P = 0.037). Wide limits of agreement between PCG- and ECG-gated flow quantification were observed for all variables (PV: -23.9 to 31.4 cm/s; AV: -4.5 to 3.9 cm/s; FV: -35.6 to 25.9 ml; RV: -8.0 to 7.2 ml; NFV: -36.8 to 27.8 ml; RF: -10.4 to 10.2 %). The present study demonstrates that PCG gating in its current form is not reliable enough for flow quantification based on velocity-encoded phase contrast gradient echo (GE) sequences. (orig.)

  20. A new car-following model considering velocity anticipation

    International Nuclear Information System (INIS)

    Jun-Fang, Tian; Bin, Jia; Xin-Gang, Li; Zi-You, Gao

    2010-01-01

    The full velocity difference model proposed by Jiang et al. [2001 Phys. Rev. E 64 017101] has been improved by introducing velocity anticipation, i.e. the follower estimates the future velocity of the leader. The stability condition of the new model is obtained by using the linear stability theory. Theoretical results show that the stability region grows as the anticipation time interval increases. The mKdV equation is derived to describe the kink–antikink soliton wave and to obtain the coexisting stability line. The delay time of car motion and the kinematic wave speed at jam density are obtained in this model. Numerical simulations show that, with a sufficiently large anticipation time interval, the new model avoids accidents under urgent braking, and that considering the anticipation velocity suppresses traffic jams. All results demonstrate that this model is an improvement on the full velocity difference model. (general)

  1. Three dimensional reflection velocity analysis based on velocity model scan; Model scan ni yoru sanjigen hanshaha sokudo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Minegishi, M; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1996-05-01

    Introduced herein is a reflection-wave velocity analysis method using model scanning as a means of estimating velocity across a section, useful in constructing a velocity structure model in seismic exploration. In this method, a stripping-type analysis is carried out, wherein optimum structure parameters are determined for reflection waves one after another, beginning with those from shallower parts. During this process, the velocity structures previously determined for the shallower parts are fixed, and only the lowest of the layers undergoing analysis at the time is subjected to model scanning. To account for the bending of ray paths at each velocity boundary in the shallower parts, ray-path tracing is used to calculate the reflection traveltime curve for the reflection surface being analyzed. Out of the reflection traveltime curves calculated using various velocity structure models, the one that best fits the actual reflection traveltime is selected. The degree of matching between the calculated and actual results is measured by the data semblance in a time window centered on the calculated reflection traveltime. The structure parameters are estimated from the condition of maximum semblance. 1 ref., 4 figs.
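
    The semblance criterion used in the model scan can be sketched as follows. This is a generic hyperbolic-moveout velocity scan, not the authors' code; all names, window sizes, and parameters are illustrative:

```python
import numpy as np

def semblance_scan(gather, dt, offsets, t0, velocities, win=5):
    """For each trial velocity, predict hyperbolic reflection traveltimes,
    window each trace around the prediction, and compute the semblance
    (stacked energy over total energy). The best velocity maximizes it."""
    n_traces, nt = gather.shape
    scores = []
    for v in velocities:
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)      # predicted arrival times
        idx = np.round(t / dt).astype(int)
        segs = np.array([gather[i, idx[i]:idx[i] + win]
                         for i in range(n_traces) if idx[i] < nt - win])
        num = np.sum(np.sum(segs, axis=0) ** 2)        # energy of the stack
        den = segs.shape[0] * np.sum(segs ** 2)        # total trace energy
        scores.append(num / den if den > 0 else 0.0)
    return np.array(scores)
```

    On a synthetic gather with spikes placed along the true moveout curve, the scan peaks (semblance 1.0) at the true velocity and falls off for mismatched trial velocities.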

  2. Validity and reliability of simple measurement device to assess the velocity of the barbell during squats.

    Science.gov (United States)

    Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian

    2017-12-06

    The velocity of a barbell can provide important insights into the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices compared to 3D motion capture measurements during squatting. Nine participants were assessed when performing 2 × 5 traditional squats with a weight of 70% of the 1-repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data were recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer-based system (Myotest) and a 3D motion capture system (Vicon) as the gold standard. Correlations between the simple measurement devices and 3D motion capture of the mean and the maximal velocity of the barbell, as well as the time to maximal velocity, were calculated. The correlations during traditional squats were significant and very high (r = 0.932, 0.990, p squats and was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.

  3. The Limit Deposit Velocity model, a new approach

    Directory of Open Access Journals (Sweden)

    Miedema Sape A.

    2015-12-01

    Full Text Available In slurry transport of settling slurries in Newtonian fluids, it is often stated that one should apply a line speed above a critical velocity, because below this critical velocity there is the danger of plugging the line. There are many definitions and names for this critical velocity. It is referred to as the velocity at which a bed starts sliding, or the velocity above which there is no stationary or sliding bed. Others use the velocity at which the hydraulic gradient is at a minimum, because of the minimum energy consumption. Most models from the literature are one-term, one-equation models, based on the idea that the critical velocity can be explained that way.

  4. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements demand understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided, as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends on the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  5. Auditory velocity discrimination in the horizontal plane at very high velocities.

    Science.gov (United States)

    Frissen, Ilja; Féron, François-Xavier; Guastavino, Catherine

    2014-10-01

    We determined velocity discrimination thresholds and Weber fractions for sounds revolving around the listener at very high velocities. Sounds used were a broadband white noise and two harmonic sounds with fundamental frequencies of 330 Hz and 1760 Hz. Experiment 1 used velocities ranging between 288°/s and 720°/s in an acoustically treated room and Experiment 2 used velocities between 288°/s and 576°/s in a highly reverberant hall. A third experiment addressed potential confounds in the first two experiments. The results show that people can reliably discriminate velocity at very high velocities and that both thresholds and Weber fractions decrease as velocity increases. These results violate Weber's law but are consistent with the empirical trend observed in the literature. While thresholds for the noise and 330 Hz harmonic stimulus were similar, those for the 1760 Hz harmonic stimulus were substantially higher. There were no reliable differences in velocity discrimination between the two acoustical environments, suggesting that auditory motion perception at high velocities is robust against the effects of reverberation. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Test-retest reliability of knee extensor rate of velocity and power development in older adults using the isotonic mode on a Biodex System 3 dynamometer.

    Science.gov (United States)

    Van Driessche, Stijn; Van Roie, Evelien; Vanwanseele, Benedicte; Delecluse, Christophe

    2018-01-01

    Isotonic testing and measures of rapid power production are emerging as functionally relevant test methods for the detection of muscle aging. Our objective was to assess the reliability of rapid velocity and power measures in older adults using the isotonic mode of an isokinetic dynamometer. Sixty-three participants (aged 65 to 82 years) underwent a test-retest protocol with a one-week interval. Isotonic knee extension tests were performed at four different loads: 0%, 25%, 50% and 75% of maximal isometric strength. Peak velocity (pV) and power (pP) were determined as the highest values of the velocity and power curves. Rate of velocity development (RVD) and rate of power development (RPD) were calculated as the linear slopes of the velocity- and power-time curves. Relative and absolute measures of test-retest reliability were analyzed using intraclass correlation coefficients (ICC), standard error of measurement (SEM) and Bland-Altman analyses. Overall, reliability was high for pV, pP, RVD and RPD at 0%, 25% and 50% load (ICC: .85 - .98, SEM: 3% - 10%). A trend for increased reliability at lower loads seemed apparent. The tests at 75% load led to range-of-motion failure and should be avoided. In addition, results demonstrated that caution is advised when interpreting early-phase results (first 50 ms). To conclude, our results support the use of the isotonic mode of an isokinetic dynamometer for testing rapid power and velocity characteristics in older adults, which is of high clinical relevance given that these muscle characteristics are emerging as primary outcomes for preventive and rehabilitative interventions in aging research.
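
    The test-retest statistics used here can be computed as in the following sketch, assuming a two-way random-effects, absolute-agreement ICC(2,1) and the usual SEM definition; the formulas are standard, but the function names and the synthetic data in the test are illustrative:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement ICC(2,1).
    data: (n_subjects, k_sessions) array of test-retest scores."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)  # sessions
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def sem_abs(data, icc):
    """Standard error of measurement: pooled SD scaled by sqrt(1 - ICC)."""
    return data.std(ddof=1) * np.sqrt(1.0 - icc)
```

    Data in which between-subject differences dominate the session-to-session noise yield an ICC close to 1 and a small SEM, matching the "high reliability" pattern reported above.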

  7. Estimating the Wet-Rock P-Wave Velocity from the Dry-Rock P-Wave Velocity for Pyroclastic Rocks

    Science.gov (United States)

    Kahraman, Sair; Fener, Mustafa; Kilic, Cumhur Ozcan

    2017-07-01

    Seismic methods are widely used for geotechnical investigations in volcanic areas and for determining the engineering properties of pyroclastic rocks in the laboratory. Therefore, a relation between the wet- and dry-rock P-wave velocities will be helpful for engineers when evaluating the formation characteristics of pyroclastic rocks. To investigate the predictability of the wet-rock P-wave velocity from the dry-rock P-wave velocity for pyroclastic rocks, P-wave velocity measurements were conducted on 27 different pyroclastic rocks. In addition, dry-rock S-wave velocity measurements were conducted. The test results were modeled using Gassmann's and Wood's theories, and it was seen that the saturated P-wave velocities estimated from the theories fit the measured data well. For samples with values less than and greater than 20%, practical equations were derived for reliably estimating the wet-rock P-wave velocity as a function of the dry-rock P-wave velocity.
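
    Gassmann's relation, used above to model the wet-rock velocities, can be sketched as follows; the moduli, porosity, and densities in the usage test are illustrative numbers, not the paper's data:

```python
import math

def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Gassmann fluid substitution: bulk modulus of the saturated rock from the
    dry-rock modulus k_dry, mineral modulus k_min, fluid modulus k_fl, porosity phi."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def vp(k, mu, rho):
    """P-wave velocity from bulk modulus k and shear modulus mu (Pa), density rho (kg/m^3)."""
    return math.sqrt((k + 4.0 * mu / 3.0) / rho)
```

    Because saturation stiffens the bulk modulus (while, in Gassmann's theory, leaving the shear modulus unchanged), the predicted wet-rock P-wave velocity exceeds the dry-rock one for these inputs.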

  8. Development of vortex model with realistic axial velocity distribution

    International Nuclear Information System (INIS)

    Ito, Kei; Ezure, Toshiki; Ohshima, Hiroyuki

    2014-01-01

    A vortex is considered one of the significant phenomena which may cause gas entrainment (GE) and/or vortex cavitation in sodium-cooled fast reactors. In our past studies, the vortex was approximated by the well-known Burgers vortex model. However, the Burgers vortex model makes the simple but unrealistic assumption that the axial velocity component is horizontally constant, while in reality the free-surface vortex has an axial velocity distribution with a large radial gradient near the vortex center. In this study, a new vortex model with a realistic axial velocity distribution is proposed. This model is derived from the steady axisymmetric Navier-Stokes equation, as is the Burgers vortex model, but a realistic radial distribution of the axial velocity is considered, defined to be zero at the vortex center and to approach zero asymptotically at infinity. As verification, the new vortex model is applied to the evaluation of a simple vortex experiment and shows good agreement with the experimental data in terms of the circumferential velocity distribution and the free-surface shape. In addition, it is confirmed that the Burgers vortex model fails to calculate an accurate velocity distribution under the assumption of uniform axial velocity. However, the calculation accuracy of the Burgers vortex model can be brought close to that of the new vortex model by considering an effective axial velocity, calculated as the average value only in the vicinity of the vortex center. (author)
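
    The classical Burgers vortex that the abstract criticizes can be written down in a few lines; note how the axial component depends only on z, which is exactly the "horizontally constant" assumption the new model replaces. Circulation, strain rate, and viscosity values are illustrative:

```python
import math

GAMMA = 1.0   # circulation (m^2/s), illustrative
A = 0.5       # axial strain rate (1/s), illustrative
NU = 1e-2     # kinematic viscosity (m^2/s), illustrative

def burgers_velocity(r, z):
    """Velocity components (u_r, u_theta, u_z) of the classical Burgers vortex.
    u_z depends only on z -- the radially uniform axial flow that the
    proposed model replaces with a radially varying profile."""
    u_r = -A * r
    u_z = 2.0 * A * z
    if r == 0.0:
        u_t = 0.0
    else:
        u_t = GAMMA / (2.0 * math.pi * r) * (1.0 - math.exp(-A * r ** 2 / (2.0 * NU)))
    return u_r, u_t, u_z
```

    Far from the core the circumferential velocity approaches the potential-vortex limit Γ/(2πr), while the axial velocity is identical at every radius.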

  9. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  11. Shallow velocity model in the area of Pozzo Pitarrone, Mt. Etna, from single station, array methods and borehole data

    Directory of Open Access Journals (Sweden)

    Luciano Zuccarello

    2016-09-01

    Full Text Available Seismic noise recorded by a temporary array installed around Pozzo Pitarrone, on the NE flank of Mt. Etna, has been analysed with several techniques. The single-station HVSR method and the SPAC array method have been applied to stationary seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear-wave velocity model of the area, reliable down to a depth of about 130 m. A comparison of this model with the stratigraphic information available for the investigated area shows good qualitative agreement. Taking advantage of a borehole station installed at 130 m depth, we could also estimate the P-wave velocity by comparing the borehole recordings of local earthquakes with the same events recorded at the surface. Further insight on the P-wave velocity in the upper 130 m layer comes from the surface-reflected wave observable in some cases at the borehole station. From this analysis we obtained an average P-wave velocity of about 1.2 km/s, compatible with the shear-wave velocity found from the analysis of seismic noise.

  12. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    Science.gov (United States)

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  13. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  14. Uncertainty estimation of the velocity model for stations of the TrigNet GPS network

    Science.gov (United States)

    Hackl, M.; Malservisi, R.; Hugentobler, U.

    2010-12-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that error models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is computationally expensive and is usually not carried out for every site, but the uncertainties are scaled by latitude dependent factors. Analyses of the South Africa continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. So we applied a method similar to the Allan Variance that is commonly used in the estimation of clock uncertainties and is able to account for time dependent probability density functions (colored noise) to the TrigNet time series. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal in agreement with previous studies, which allows for a reliable estimation of the velocity error. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Small differences may originate from non-normal distribution of the noise.
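
    An Allan-variance-style noise characterization of the kind alluded to above can be sketched as follows (classical, non-overlapping form; function name and parameters are illustrative, and this is not the authors' modified estimator):

```python
import numpy as np

def allan_variance(y, dt, taus):
    """Classical (non-overlapping) Allan variance of an evenly sampled series.
    For white noise it falls off as 1/tau; flicker noise flattens it and
    random walk makes it grow, which is how the noise type is diagnosed."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))                 # samples per averaging bin
        n_bins = len(y) // m
        means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)
```

    For a purely white-noise series the estimate decreases roughly as 1/tau, so a flattening at long tau signals colored noise and hence larger velocity uncertainties than a white-noise model would give.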

  15. A classical model explaining the OPERA velocity paradox

    CERN Document Server

    Broda, Boguslaw

    2011-01-01

    In the context of the paradoxical results of the OPERA Collaboration, we have proposed a classical mechanics model yielding the statistically measured velocity of a beam higher than the velocity of the particles constituting the beam. Ingredients of our model necessary to obtain this curious result are a non-constant fraction function and the method of the maximum-likelihood estimation.

  16. Reliability of Phase Velocity Measurements of Flexural Acoustic Waves in the Human Tibia In-Vivo.

    Science.gov (United States)

    Vogl, Florian; Schnüriger, Karin; Gerber, Hans; Taylor, William R

    2016-01-01

    Axial-transmission acoustics has been shown to be a promising technique to measure individual bone properties and detect bone pathologies. With the ultimate goal being the in-vivo application of such systems, quantification of the key aspects governing the reliability is crucial to bring this method towards clinical use. This work presents a systematic reliability study quantifying the sources of variability and their magnitudes for in-vivo measurements using axial-transmission acoustics. 42 healthy subjects were measured by an experienced operator twice per week over a four-month period, resulting in over 150000 wave measurements. In a complementary study to assess the influence of different operators performing the measurements, 10 novice operators were trained, and each measured 5 subjects on a single occasion, using the same measurement protocol as in the first part of the study. The estimated standard error for the measurement protocol used to collect the study data was ∼ 17 m/s (∼ 4% of the grand mean) and the index of dependability, as a measure of reliability, was Φ = 0.81. It was shown that the method is suitable for multi-operator use and that the reliability can be improved efficiently by additional measurements with device repositioning, while additional measurements without repositioning cannot improve the reliability substantially. Phase velocity values were found to be significantly higher in males than in females (p < 10⁻⁵) and an intra-class correlation coefficient of r = 0.70 was found between the legs of each subject. The high reliability of this non-invasive approach and its intrinsic sensitivity to mechanical properties opens perspectives for the rapid and inexpensive clinical assessment of bone pathologies, as well as for monitoring programmes without any radiation exposure for the patient.

  17. Improving 1D Site Specific Velocity Profiles for the Kik-Net Network

    Science.gov (United States)

    Holt, James; Edwards, Benjamin; Pilz, Marco; Fäh, Donat; Rietbrock, Andreas

    2017-04-01

    Ground motion prediction equations (GMPEs) form the cornerstone of modern seismic hazard assessments. When produced to a high standard they provide reliable estimates of ground motion/spectral acceleration for a given site and earthquake scenario. This information is crucial for engineers to optimise design and for regulators who enforce legal minimum safe design capacities. Classically, GMPEs were built upon the assumption that variability around the median model could be treated as aleatory. As understanding improved, it was noted that the propagation could be segregated into the response of the average path from the source and the response of the site, because the heterogeneity of the near-surface lithology is significantly different from that of the bulk path. It was then suggested that a semi-ergodic approach could be taken if the site response could be determined, moving uncertainty from aleatory to epistemic. The determination of reliable site-specific response models is therefore becoming increasingly critical for ground motion models used in engineering practice. Today it is common practice to include proxies for site response within the scope of a GMPE, such as Vs30 or site classification, in an effort to reduce the overall uncertainty of the prediction at a given site. However, these proxies are not always reliable enough to give confident ground motion estimates, due to the complexity of the near-surface. Other approaches to quantifying the response of the site include detailed numerical simulations (1/2/3D - linear, EQL, non-linear etc.). However, in order to be reliable, these require highly detailed and accurate velocity models and, for non-linear analyses, material property models. It is possible to obtain this information through invasive methods, but this is expensive and not feasible for most projects. Here we propose an alternative method to derive reliable velocity profiles (and their uncertainty), calibrated using almost 20 years of

  18. A phenomenological retention tank model using settling velocity distributions.

    Science.gov (United States)

    Maruejouls, T; Vanrolleghem, P A; Pelletier, G; Lessard, P

    2012-12-15

    Many authors have observed the influence of the settling velocity distribution on the sedimentation process in retention tanks. However, the pollutants' behaviour in such tanks is not well characterized, especially with respect to their settling velocity distribution. This paper presents a phenomenological modelling study dealing with the way by which the settling velocity distribution of particles in combined sewage changes between entering and leaving an off-line retention tank. The work starts from a previously published model (Lessard and Beck, 1991) which is first implemented in a wastewater management modelling software, to be then tested with full-scale field data for the first time. Next, its performance is improved by integrating the particle settling velocity distribution and adding a description of the resuspension due to pumping for emptying the tank. Finally, the potential of the improved model is demonstrated by comparing the results for one more rain event. Copyright © 2011 Elsevier Ltd. All rights reserved.
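
    The role of the settling velocity distribution can be illustrated with the classical ideal-settling (surface overflow rate) argument that such phenomenological tank models build on. This sketch is not the authors' model; the function name and the test values are illustrative:

```python
import numpy as np

def removal_efficiency(settling_velocities, overflow_rate):
    """Ideal horizontal-flow settling: a particle is fully captured if its
    settling velocity exceeds the surface overflow rate Q/A, and captured
    with probability v_s / (Q/A) otherwise. Returns the mean capture fraction
    over the sampled settling velocity distribution."""
    v_s = np.asarray(settling_velocities, dtype=float)
    return float(np.minimum(v_s / overflow_rate, 1.0).mean())
```

    A shift of the inflow distribution toward slower-settling particles therefore directly lowers the tank's removal efficiency, which is why the improved model above propagates the distribution from inlet to outlet rather than a single representative velocity.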

  19. Shallow velocity model in the area of Pozzo Pitarrone, Mt. Etna, from single station, array methods and borehole data.

    OpenAIRE

    Zuccarello, L.; Paratore, M.; Ferrari, F.; Messina, A.; Branca, S.; Contrafatto, D.; Galluzzo, D.; Rapisarda, S.; La Rocca, M.

    2016-01-01

    Seismic noise recorded by a temporary array installed around Pozzo Pitarrone, NE flank of Mt. Etna, have been analysed with several techniques. Single station HVSR method and SPAC array method have been applied to stationary seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear wave velocity model of the area reliable down to depth of about 130 m. A comparison of such model with the stratigraphic information available for the investigate...

  20. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize risk-based preventive maintenance of passive structures such as pipes and supports. In particular, the reliability performance of components needs to be determined; this is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  1. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems of a variety of uncertainty variables that coexist in the engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  2. UCVM: An Open Source Framework for 3D Velocity Model Research

    Science.gov (United States)

    Gill, D.; Maechling, P. J.; Jordan, T. H.; Plesch, A.; Taborda, R.; Callaghan, S.; Small, P.

    2013-12-01

    Three-dimensional (3D) seismic velocity models provide fundamental input data to ground motion simulations, in the form of structured or unstructured meshes or grids. Numerous models are available for California, as well as for other parts of the United States and Europe, but the models do not share a common interface. Being able to interact with these models in a standardized way is critical in order to configure and run 3D ground motion simulations. The Unified Community Velocity Model (UCVM) software, developed by researchers at the Southern California Earthquake Center (SCEC), is an open-source framework designed to provide a cohesive way to interact with seismic velocity models. We describe the several ways in which we have improved the UCVM software over the last year. We have simplified the UCVM installation process by automating the installation of various community codebases, improving the ease of use. We discuss how the UCVM software was used to build velocity meshes for high-frequency (4 Hz) deterministic 3D wave propagation simulations, and how the UCVM framework interacts with other open-source resources, such as NetCDF file formats for visualization. The UCVM software uses a layered software architecture that transparently converts geographic coordinates to the coordinate systems used by the underlying velocity models and supports inclusion of a configurable near-surface geotechnical layer, while interacting with the velocity model codes through their existing software interfaces. No changes to the velocity model codes are required. Our recent installation improvements bundle UCVM with a setup script, written in Python, which guides users through the process of installing the UCVM software along with all the user-selectable velocity models. Each velocity model is converted into a standardized (configure, make, make install) format that is easily downloaded and installed via the script. UCVM is often run in specialized high performance computing (HPC

  3. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problems of the variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using the modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  4. A nonlinear inversion for the velocity background and perturbation models

    KAUST Repository

    Wu, Zedong

    2015-08-19

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of standard full waveform inversion (FWI) by inverting for the single-scattered wavefield obtained using an image. However, current RWI methods usually neglect diving waves, which are an important source of information for extracting the long-wavelength components of the velocity model. Thus, we propose a new optimization problem by breaking the velocity model into background and perturbation components directly in the wave equation. In this case, the perturbation model is no longer the single-scattering model, but includes all scattering. We optimize both components simultaneously, and thus the objective function is nonlinear with respect to both the background and the perturbation. The newly introduced variable can naturally absorb non-smooth updates of the background. Application to the Marmousi model with frequencies that start at 5 Hz shows that this method can converge to an accurate velocity model starting from a linearly increasing initial velocity. Application to the SEG2014 model demonstrates the versatility of the approach.

  5. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

    The reliability of ultrasonic inspections has been studied in, e.g., the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependency of flaw detection probability on various factors and to evaluate the performance of inspection equipment, including sizing accuracy. The information from experiments is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results of intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied both to flaw detection frequency data of all inspection teams and to flaw sizing data of one participating team. (author)
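
    A common way to express such flaw-detection models is the log-odds probability-of-detection (POD) curve. The sketch below illustrates that generic model family, not the PISC results; the parameter values b0 and b1 are hypothetical:

```python
import math

def pod_log_odds(a, b0, b1):
    """Log-odds POD model: POD(a) = 1 / (1 + exp(-(b0 + b1*ln a)))."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

# Hypothetical parameters: detection probability grows with flaw size a (mm).
b0, b1 = -1.0, 2.0
for a in (0.5, 1.0, 2.0, 5.0):
    print(f"a = {a:4.1f} mm  POD = {pod_log_odds(a, b0, b1):.3f}")
```

    Fitting b0 and b1 to detection-frequency data (e.g. by maximum likelihood) yields the kind of statistical detection model that can then be applied to exercise data such as PISC III.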

  6. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  7. Evaluation of a Model for Predicting the Tidal Velocity in Fjord Entrances

    Energy Technology Data Exchange (ETDEWEB)

    Lalander, Emilia [The Swedish Centre for Renewable Electric Energy Conversion, Division of Electricity, Uppsala Univ. (Sweden); Thomassen, Paul [Team Ashes, Trondheim (Norway); Leijon, Mats [The Swedish Centre for Renewable Electric Energy Conversion, Division of Electricity, Uppsala Univ. (Sweden)

    2013-04-15

    Sufficiently accurate and low-cost estimation of tidal velocities is of importance when evaluating a potential site for a tidal energy farm. Here we suggest and evaluate a model to calculate the tidal velocity in fjord entrances. The model is compared with tidal velocities from Acoustic Doppler Current Profiler (ADCP) measurements in the tidal channel Skarpsundet in Norway. The calculated velocity value from the model corresponded well with the measured cross-sectional average velocity, but was shown to underestimate the velocity in the centre of the channel. The effect of this was quantified by calculating the kinetic energy of the flow for a 14-day period. A numerical simulation using TELEMAC-2D was performed and validated with ADCP measurements. Velocity data from the simulation was used as input for calculating the kinetic energy at various locations in the channel. It was concluded that the model presented here is not accurate enough for assessing the tidal energy resource. However, the simplicity of the model was considered promising in the use of finding sites where further analyses can be made.
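
    Since kinetic power scales with the cube of velocity, the centre-channel underestimation noted above translates into a much larger underestimation of the energy resource. A minimal sketch of the standard kinetic power density relation (a generic formula, not the paper's fjord-entrance model; the velocities are illustrative):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, typical value

def kinetic_power_density(v):
    """Kinetic power per unit cross-sectional area, P/A = 0.5*rho*v^3 (W/m^2)."""
    return 0.5 * RHO_SEAWATER * v ** 3

# Illustrative velocities: cross-sectional average vs. channel centre (m/s).
v_avg, v_centre = 1.0, 1.3
print(kinetic_power_density(v_avg))     # 512.5 W/m^2
print(kinetic_power_density(v_centre))  # ~1126 W/m^2: +30% velocity, +120% power
```

    This cubic sensitivity is why a model that reproduces the cross-sectional average but misses the centre-channel velocity can still misestimate the resource.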

  8. Results of verification and investigation of wind velocity field forecast. Verification of wind velocity field forecast model

    International Nuclear Information System (INIS)

    Ogawa, Takeshi; Kayano, Mitsunaga; Kikuchi, Hideo; Abe, Takeo; Saga, Kyoji

    1995-01-01

    At the Environmental Radioactivity Research Institute, verification and investigation of the wind velocity field forecast model 'EXPRESS-1' have been carried out since 1991. In fiscal year 1994, as the general analysis, the validity of weather observation data, the local features of the wind field, and the validity of the positions of monitoring stations were investigated. The EXPRESS model, which had so far adopted a 500 m mesh, was improved to a 250 m mesh, the resulting improvement in forecast accuracy was examined, and a comparison with another wind velocity field forecast model, 'SPEEDI', was carried out. As a result, it was found that the correlation with other measurement points is high at some locations and low at others, and that the forecast accuracy of the wind velocity field is improved by excluding the data of points with low correlation or by installing simplified observation stations and taking their data in. The outline of the investigation, the general analysis of the weather observation data, and the improvements of the wind velocity field forecast model and forecast accuracy are reported. (K.I.)

  9. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20–25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and result in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when ... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...

  10. Multijam Solutions in Traffic Models with Velocity-Dependent Driver Strategies

    DEFF Research Database (Denmark)

    Carter, Paul; Christiansen, Peter Leth; Gaididei, Yuri B.

    2014-01-01

    The optimal-velocity follow-the-leader model is augmented with an equation that allows each driver to adjust their target headway according to the velocity difference between the driver and the car in front. In this more detailed model, which is investigated on a ring, stable and unstable multipu...
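
    The underlying optimal-velocity follow-the-leader dynamics (without the velocity-dependent headway adjustment studied here) can be sketched with a Bando-type OV function and explicit Euler integration on a ring; the OV function and all parameter values below are illustrative assumptions:

```python
import math

def optimal_velocity(h):
    """Bando-type optimal-velocity function V(h) = tanh(h - 2) + tanh(2)."""
    return math.tanh(h - 2.0) + math.tanh(2.0)

def simulate_ring(n_cars=10, length=25.0, a=2.0, dt=0.01, steps=5000):
    """Explicit-Euler integration of dv_i/dt = a*(V(h_i) - v_i) on a ring road."""
    x = [i * length / n_cars for i in range(n_cars)]   # uniform initial spacing
    v = [0.0] * n_cars
    for _ in range(steps):
        h = [(x[(i + 1) % n_cars] - x[i]) % length for i in range(n_cars)]
        v_new = [v[i] + a * (optimal_velocity(h[i]) - v[i]) * dt
                 for i in range(n_cars)]
        x = [(x[i] + v[i] * dt) % length for i in range(n_cars)]
        v = v_new
    h = [(x[(i + 1) % n_cars] - x[i]) % length for i in range(n_cars)]
    return h, v

headways, velocities = simulate_ring()
# Starting from rest with uniform spacing, every car relaxes toward the
# homogeneous flow v_i = V(length / n_cars).
```

    Perturbing the uniform initial spacing instead lets one probe the stability of the homogeneous solution, which is where jam patterns such as the paper's multi-jam solutions originate.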

  11. A generic model for the shallow velocity structure of volcanoes

    Science.gov (United States)

    Lesage, Philippe; Heap, Michael J.; Kushnir, Alexandra

    2018-05-01

    Knowledge of the structure of volcanoes and of the physical properties of volcanic rocks is of paramount importance to the understanding of volcanic processes and the interpretation of monitoring observations. However, the determination of these structures by geophysical methods suffers from limitations, including a lack of resolution and poor precision. Laboratory experiments provide complementary information on the physical properties of volcanic materials and their behavior as a function of several parameters, including pressure and temperature. Nevertheless, combined studies and comparisons of field-based geophysical and laboratory-based physical approaches remain scant in the literature. Here, we present a meta-analysis which compares 44 seismic velocity models of the shallow structure of eleven volcanoes, laboratory velocity measurements on about one hundred rock samples from five volcanoes, and seismic well-logs from deep boreholes at two volcanoes. The comparison of these measurements confirms the strong variability of P- and S-wave velocities, which reflects the diversity of volcanic materials. The values obtained from laboratory experiments are systematically larger than those provided by seismic models. This discrepancy mainly results from scaling problems due to the difference between the sampled volumes. The averages of the seismic models are characterized by very low velocities at the surface and a strong velocity increase at shallow depth. By adjusting analytical functions to these averages, we define a generic model that can describe the variations in P- and S-wave velocities in the first 500 m of andesitic and basaltic volcanoes. This model can be used for volcanoes where no structural information is available. The model can also account for site time correction in hypocenter determination as well as for site and path effects that are commonly observed in volcanic structures.

  12. Validity and reliability of a novel iPhone app for the measurement of barbell velocity and 1RM on the bench-press exercise

    OpenAIRE

    Balsalobre Fernández, Carlos; Marchante Domingo, David; Muñoz López, Mario; Jiménez Sáiz, Sergio Lorenzo

    2018-01-01

    The purpose of this study was to analyse the validity and reliability of a novel iPhone app (named: PowerLift) for the measurement of mean velocity on the bench-press exercise. Additionally, the accuracy of the estimation of the 1-Repetition maximum (1RM) using the load-velocity relationship was tested. To do this, 10 powerlifters (Mean (SD): age = 26.5 ± 6.5 years; bench press 1RM·kg⁻¹ = 1.34 ± 0.25) completed an incremental test on the bench-press exercise with 5 different loads (75-100% ...
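
    The load-velocity approach to 1RM estimation mentioned above can be sketched in a few lines: fit a straight line to (velocity, load) pairs from the incremental test and evaluate it at a minimal velocity threshold. The data points and the 0.17 m/s threshold below are illustrative assumptions, not values from the study:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical incremental-test data: load (kg) vs. mean velocity (m/s).
loads = [40.0, 55.0, 70.0, 85.0]
velocities = [0.95, 0.70, 0.45, 0.20]
m, b = linear_fit(velocities, loads)  # predict load from velocity

V_1RM = 0.17  # assumed minimal velocity threshold at 1RM (m/s)
est_1rm = m * V_1RM + b
print(f"estimated 1RM = {est_1rm:.1f} kg")
```

    With real data the fit is not exact, so the regression standard error should accompany any 1RM estimate derived this way.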

  13. Estimation of spatial uncertainties of tomographic velocity models

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, M.; Du, Z.; Querendez, E. [SINTEF Petroleum Research, Trondheim (Norway)

    2012-12-15

    This research project aims to evaluate the possibility of assessing the spatial uncertainties in tomographic velocity model building in a quantitative way. The project is intended to serve as a test of whether accurate and specific uncertainty estimates (e.g., in meters) can be obtained. The project is based on Monte Carlo-type perturbations of the velocity model as obtained from the tomographic inversion guided by diagonal and off-diagonal elements of the resolution and the covariance matrices. The implementation and testing of this method was based on the SINTEF in-house stereotomography code, using small synthetic 2D data sets. To test the method the calculation and output of the covariance and resolution matrices was implemented, and software to perform the error estimation was created. The work included the creation of 2D synthetic data sets, the implementation and testing of the software to conduct the tests (output of the covariance and resolution matrices which are not implicitly provided by stereotomography), application to synthetic data sets, analysis of the test results, and creating the final report. The results show that this method can be used to estimate the spatial errors in tomographic images quantitatively. The results agree with the known errors for our synthetic models. However, the method can only be applied to structures in the model where the change of seismic velocity is larger than the predicted error of the velocity parameter amplitudes. In addition, the analysis is dependent on the tomographic method, e.g., regularization and parameterization. The conducted tests were very successful and we believe that this method could be developed further to be applied to third party tomographic images.

  14. An Extended Optimal Velocity Model with Consideration of Honk Effect

    International Nuclear Information System (INIS)

    Tang Tieqiao; Li Chuanyao; Huang Haijun; Shang Huayan

    2010-01-01

    Based on the OV (optimal velocity) model, we present in this paper an extended OV model that takes the honk effect into consideration. The analytical and numerical results illustrate that the honk effect can improve the velocity and flow of uniform flow, but that the increments depend on the density. (interdisciplinary physics and related areas of science and technology)

  15. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

    This paper presents a methodology for the reliability evaluation of Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both of these options are described along with the methodology for calculating the reliability indices.

  16. Reliability Model of Power Transformer with ONAN Cooling

    OpenAIRE

    M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh

    2010-01-01

    Reliability of a power system is considerably influenced by its equipments. Power transformers are one of the most critical and expensive equipments of a power system and their proper functions are vital for the substations and utilities. Therefore, reliability model of power transformer is very important in the risk assessment of the engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...

  17. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single model ensembles (SMEs), is investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some ensembles. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)
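
    The rank histogram check itself is simple to sketch: for each observation, count how many ensemble members fall below it, then histogram those ranks. A flat histogram is consistent with a reliable ensemble, a U-shape indicates under-dispersion, and a dome indicates over-dispersion. The synthetic data below are illustrative only:

```python
import random

def observation_rank(obs, members):
    """Rank of the observation within the ensemble (0 .. len(members))."""
    return sum(1 for m in members if m < obs)

def rank_histogram(observations, ensembles):
    """Count observation ranks over many cases; flat counts suggest reliability."""
    n_bins = len(ensembles[0]) + 1
    counts = [0] * n_bins
    for obs, ens in zip(observations, ensembles):
        counts[observation_rank(obs, ens)] += 1
    return counts

# Synthetic check: observations drawn from the same distribution as a
# 9-member ensemble should give a roughly flat 10-bin histogram.
random.seed(0)
obs = [random.gauss(0, 1) for _ in range(5000)]
ens = [[random.gauss(0, 1) for _ in range(9)] for _ in range(5000)]
counts = rank_histogram(obs, ens)
```

    Replacing the synthetic draws with observed fields and ensemble members, variable by variable, reproduces the kind of diagnostic used in the study.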

  18. Angular velocity determination of spinning solar sails using only a sun sensor

    Directory of Open Access Journals (Sweden)

    Kun Zhai

    2017-02-01

    The direction of the sun is the easiest and most reliable observation vector for a solar sail running in deep space exploration. This paper presents a new method using only raw measurements of the sun direction vector to estimate angular velocity for a spinning solar sail. In cases with a constant spin angular velocity, the estimation equation is formed based on the kinematic model for the apparent motion of the sun direction vector; the least-squares solution is then easily calculated. A performance criterion is defined and used to analyze estimation accuracy. In cases with a variable spin angular velocity, the estimation equation is developed based on the kinematic model for the apparent motion of the sun direction vector and the attitude dynamics equation. Simulation results show that the proposed method can quickly yield high-precision angular velocity estimates that are insensitive to certain measurement noises and modeling errors.

  19. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  20. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  1. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
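
    For contrast, the traditional series-system model that the paper argues is inadequate for gear sets reduces to a simple product of component reliabilities under an independence assumption; the per-tooth value below is illustrative:

```python
from math import prod

def series_reliability(component_reliabilities):
    """Traditional series-system model: the system survives only if every
    component survives, assuming statistically independent failures."""
    return prod(component_reliabilities)

# Illustrative: a 20-tooth gear treated as a traditional series system of
# identical, independent teeth with per-tooth reliability 0.999.
r_tooth = 0.999
print(series_reliability([r_tooth] * 20))  # ~0.980
```

    The paper's point is precisely that tooth failures in a gear set are statistically dependent and the system configuration is time-dependent, so this product form can badly misestimate gear-set reliability.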

  2. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of solely independent nodes (that can move randomly around the area of deployment), making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, in this paper, we propose evaluating the network reliability (2TR_m, ATR_m and AoTR_m) of a MANET through Monte Carlo simulation, using a propagation-based link reliability model rather than a binary model, with nodes following a known failure distribution. The method is illustrated with an application and some imperative results are also presented.
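
    A minimal sketch of the evaluation idea: place nodes in a plane, let each link succeed with a probability that decays smoothly with distance (the Gaussian-decay form below is a hypothetical stand-in for a calibrated propagation model), and estimate two-terminal reliability by Monte Carlo sampling of link states:

```python
import math
import random

def link_reliability(d, r):
    """Hypothetical propagation-based link model: success probability decays
    smoothly with distance instead of a binary in-range/out-of-range rule."""
    return math.exp(-(d / r) ** 2)

def two_terminal_reliability(nodes, r, s, t, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that nodes s and t are connected."""
    rng = random.Random(seed)
    n = len(nodes)
    pairs = [(i, j, link_reliability(math.dist(nodes[i], nodes[j]), r))
             for i in range(n) for j in range(i + 1, n)]
    hits = 0
    for _ in range(trials):
        adj = {i: [] for i in range(n)}
        for i, j, p in pairs:          # sample the state of every link
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
        seen, stack = {s}, [s]          # depth-first search from s
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / trials

random.seed(0)
nodes = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(15)]
print(two_terminal_reliability(nodes, r=6.0, s=0, t=14))
```

    All-terminal variants follow the same pattern, replacing the s-t connectivity check with a check that the whole sampled graph is connected.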

  3. A new settling velocity model to describe secondary sedimentation.

    Science.gov (United States)

    Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-12-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions.
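
    The hindered-settling component that such models extend is classically described by the Vesilind exponential law; a sketch with illustrative parameter values follows (the paper's model additionally includes transient and compression settling regimes):

```python
import math

def vesilind_settling_velocity(X, v0=8.0, rh=0.45):
    """Classical Vesilind hindered-settling law v_s = v0 * exp(-rh * X),
    with X the sludge concentration (kg/m^3); v0 (m/h) and rh (m^3/kg)
    are illustrative values, normally calibrated from settling tests."""
    return v0 * math.exp(-rh * X)

for X in (1.0, 3.0, 6.0):
    print(f"X = {X:.1f} kg/m3  v_s = {vesilind_settling_velocity(X):.3f} m/h")
```

    In a CFD model the settling velocity enters the solids transport equation cell by cell, which is why the choice of settling law can change the predicted sludge distribution.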

  4. Car Deceleration Considering Its Own Velocity in Cellular Automata Model

    International Nuclear Information System (INIS)

    Li Keping

    2006-01-01

    In this paper, we propose a new cellular automaton model based on the NaSch traffic model. In our method, when a car has a larger velocity and the gap between it and its leading car is not large enough, its velocity will decrease. The aim is that the following car has buffer space in which to decrease its velocity at the next time step, and thus avoids decelerating too abruptly. The simulation results show that using our model, the car deceleration is realistic, and is closer to field measurements than that of the NaSch model.
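
    The baseline NaSch update rules that this model modifies can be sketched as follows; rule 2 (braking to the gap) is where a velocity-dependent deceleration rule of the kind proposed here would plug in. Parameters are illustrative:

```python
import random

def nasch_step(positions, velocities, length, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton
    on a ring of `length` cells (positions are distinct cell indices)."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_v = velocities[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                       # car ahead on the ring
        gap = (positions[j] - positions[i] - 1) % length
        v = min(velocities[i] + 1, v_max)            # 1. accelerate
        v = min(v, gap)                              # 2. brake to avoid collision
        if v > 0 and rng.random() < p_slow:          # 3. random slowdown
            v -= 1
        new_v[i] = v
    new_x = [(positions[i] + new_v[i]) % length for i in range(n)]
    return new_x, new_v

rng = random.Random(42)
x = list(range(0, 100, 10))   # 10 cars on a 100-cell ring
v = [0] * 10
for _ in range(200):
    x, v = nasch_step(x, v, 100, rng=rng)
```

    Because each car moves at most `gap` cells, the parallel update is collision-free by construction, a property any modified braking rule must preserve.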

  5. An extended continuum model considering optimal velocity change with memory and numerical tests

    Science.gov (United States)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed that takes into consideration optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation, considering optimal velocity changes with memory, are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow efficiently. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the disadvantage of historical information, increasing the stability of traffic flow on the road, and thus improving traffic flow stability and minimizing cars' energy consumption.

  6. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes from CANDU nuclear reactors, like any other component manufactured from Zr alloys, are prone to delayed hydride cracking. That is why it is important to be able to predict the cracking velocity during the component lifetime from parameters that are easy to measure, such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature to calculate the DHC velocity were chosen and combined, and using the appropriate variables allowed a comparison with experimental results from samples of Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  7. Shallow and deep crustal velocity models of Northeast Tibet

    Science.gov (United States)

    Karplus, M.; Klemperer, S. L.; Mechie, J.; Shi, D.; Zhao, W.; Brown, L. D.; Wu, Z.

    2009-12-01

    The INDEPTH IV seismic profile in Northeast Tibet is the highest resolution wide-angle refraction experiment imaging the Qaidam Basin, North Kunlun Thrusts (NKT), Kunlun Mountains, North and South Kunlun Faults (NKF, SKF), and Songpan-Ganzi terrane (SG). First-arrival refraction modeling using ray tracing and least-squares inversion has yielded a crustal P-wave velocity model, best resolved for the top 20 km. Ray tracing of deeper reflections shows considerable differences between the Qaidam Basin and the SG, in agreement with previous studies of those areas. The Moho ranges from about 52 km beneath the Qaidam Basin to 63 km with a slight northward dip beneath the SG. The 11-km change must occur between the SKF and the southern edge of the Qaidam Basin, just north of the NKT, allowing the possibility of a Moho step across the NKT. The Qaidam Basin velocity-versus-depth profile is more similar to the global average than the SG profile, which bears resemblance to previously determined “Tibet-type” velocity profiles with mid- to lower-crustal velocities of 6.5 to 7.0 km/s appearing at greater depths. The highest resolution portion of the profile (100-m instrument spacing) features two distinct, apparently south-dipping low-velocity zones reaching about 2-3 km depth that we infer to be the locations of the NKF and SKF. A strong reflector at 35 km, located entirely south of the SKF and truncated just south of it, may be cut by a steeply south-dipping SKF. Elevated velocities at depth beneath the surface location of the NKF may indicate the south-dipping NKF meets the SKF between depths of 5 and 10 km. Undulating regions of high and low velocity extending about 1-2 km in depth near the southern border of the Qaidam Basin likely represent north-verging thrust sheets of the NKT.

  8. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c......

  9. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives are supported by high-technology systems. Computer systems are typical examples of such systems, and we enjoy our modern lives by using many of them. Much more importantly, we have to maintain such systems without failure, but cannot predict when they will fail or how to fix them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance to the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability systems by using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on the theme above from different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  10. Modeling continuous seismic velocity changes due to ground shaking in Chile

    Science.gov (United States)

    Gassenmeier, Martina; Richter, Tom; Sens-Schönfelder, Christoph; Korn, Michael; Tilmann, Frederik

    2015-04-01

    In order to investigate temporal seismic velocity changes due to earthquake related processes and environmental forcing, we analyze 8 years of ambient seismic noise recorded by the Integrated Plate Boundary Observatory Chile (IPOC) network in northern Chile between 18° and 25° S. The Mw 7.7 Tocopilla earthquake in 2007 and the Mw 8.1 Iquique earthquake in 2014 as well as numerous smaller events occurred in this area. By autocorrelation of the ambient seismic noise field, approximations of the Green's functions are retrieved. The recovered function represents backscattered or multiply scattered energy from the immediate neighborhood of the station. To detect relative changes of the seismic velocities we apply the stretching method, which compares individual autocorrelation functions to stretched or compressed versions of a long term averaged reference autocorrelation function. We use time windows in the coda of the autocorrelations, that contain scattered waves which are highly sensitive to minute changes in the velocity. At station PATCX we observe seasonal changes in seismic velocity as well as temporary velocity reductions in the frequency range of 4-6 Hz. The seasonal changes can be attributed to thermal stress changes in the subsurface related to variations of the atmospheric temperature. This effect can be modeled well by a sine curve and is subtracted for further analysis of short term variations. Temporary velocity reductions occur at the time of ground shaking usually caused by earthquakes and are followed by a recovery. We present an empirical model that describes the seismic velocity variations based on continuous observations of the local ground acceleration. Our hypothesis is that not only the shaking of earthquakes provokes velocity drops, but any small vibrations continuously induce minor velocity variations that are immediately compensated by healing in the steady state. 
We show that the shaking effect is accumulated over time and best described by
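As a rough illustration of the stretching method described in this record, the sketch below (illustrative only; the function name and parameters are assumptions, not the authors' code) grid-searches for the stretch factor that best matches a current autocorrelation trace to the reference:

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_range=0.05, n_eps=101):
    """Estimate the relative velocity change dv/v by the stretching method:
    compare the current trace to stretched/compressed versions of the
    reference and pick the stretch factor with maximal correlation."""
    best_cc, best_eps = -1.0, 0.0
    for eps in np.linspace(-eps_range, eps_range, n_eps):
        # a velocity increase dv/v = eps compresses travel times: t -> t*(1-eps)
        stretched = np.interp(t * (1 - eps), t, ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return best_eps, best_cc
```

In practice the search is applied to a coda window of each daily autocorrelation against a long-term reference, exactly as the record describes.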

  11. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced. It allows one to calculate the possibilistic reliability of a system in terms of the possibilistic reliabilities of its components.

  12. Discharge estimation combining flow routing and occasional measurements of velocity

    Directory of Open Access Journals (Sweden)

    G. Corato

    2011-09-01

Full Text Available A new procedure is proposed for estimating river discharge hydrographs during flood events, using only water level data at a single gauged site, as well as 1-D shallow water modelling and occasional maximum surface flow velocity measurements. A one-dimensional diffusive hydraulic model is used for routing the recorded stage hydrograph in the channel reach, considering a zero-diffusion downstream boundary condition. Based on synthetic tests concerning a broad prismatic channel, the "suitable" reach length is chosen in order to minimize the effect of the approximated downstream boundary condition on the estimation of the upstream discharge hydrograph. The Manning's roughness coefficient is calibrated by using occasional instantaneous surface velocity measurements during the rising limb of the flood, which are used to estimate instantaneous discharges by adopting, in the flow area, a two-dimensional velocity distribution model. Several historical events recorded at three gauged sites along the upper Tiber River, wherein reliable rating curves are available, have been used for the validation. The outcomes of the analysis can be summarized as follows: (1) the criterion adopted for selecting the "suitable" channel length based on synthetic test studies has proved to be reliable for field applications to three gauged sites. Indeed, for each event a downstream reach length of not more than 500 m is found to be sufficient for good performance of the hydraulic model, thereby enabling a drastic reduction of river cross-section data; (2) the procedure for Manning's roughness coefficient calibration allowed for high performance in discharge estimation, considering just the observed water levels and occasional measurements of maximum surface flow velocity during the rising limb of the flood. Indeed, errors in the peak discharge magnitude, for the optimal calibration, were found not to exceed 5% for all events observed at the three investigated gauged sections, while the
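A minimal sketch of the two ingredients described above: converting an occasional surface-velocity reading into a discharge estimate (here a simple velocity index of 0.85 stands in for the paper's two-dimensional velocity distribution model), and inverting Manning's equation for the roughness coefficient. All names and default values are illustrative, not the paper's calibrated ones:

```python
def discharge_from_surface_velocity(v_surf_max, area, alpha=0.85):
    # mean velocity approximated as a fixed fraction of the maximum
    # surface velocity; 0.85 is a commonly assumed index, not a calibrated value
    return alpha * v_surf_max * area

def manning_discharge(n, area, hydraulic_radius, slope):
    # Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def calibrate_manning_n(q_obs, area, hydraulic_radius, slope):
    # invert Manning's equation for the roughness coefficient,
    # given a discharge estimated from a velocity measurement
    return area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / q_obs
```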

  13. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

The reliability of human operators in process control is sensitive to the context. In many contemporary human reliability analysis (HRA) methods, this is not sufficiently taken into account. The aim of this article is to integrate probabilistic and psychological approaches to human reliability. This is achieved first by adopting methods that adequately reflect the essential features of the process control activity, and secondly by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context promote the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints of activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis.

  14. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

Full Text Available To establish a more flexible and accurate reliability model, reliability modeling and a solving algorithm based on the meta-action chain concept are used in this paper. Instead of estimating the reliability of the whole system only in the standard operating mode, this paper adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information for each component are integrated into the model to overcome the fixed assumptions applied in traditional modeling. In industrial applications, there may be different operating modes for a multicomponent system. The meta-action chain methodology can estimate the system reliability under different operating modes by modeling the components with a variety of failure sensitivities. This approach has been verified by computing several electromechanical system cases. The results indicate that the process improves the system reliability estimation. It is an effective tool for solving the reliability estimation problem in systems under various operating modes.
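The multiplication of meta-action reliabilities along a chain can be sketched as a toy series model with exponential lifetimes and mode-dependent failure sensitivities. All names and numbers are illustrative, not the paper's algorithm:

```python
import math

def chain_reliability(failure_rates, t):
    # a series chain works only if every meta-action works,
    # so the reliabilities multiply: R(t) = prod(exp(-lambda_i * t))
    return math.prod(math.exp(-lam * t) for lam in failure_rates)

def mode_reliability(base_rates, sensitivities, t):
    # scale each base failure rate by a mode-specific sensitivity factor
    return chain_reliability(
        [lam * s for lam, s in zip(base_rates, sensitivities)], t)
```

A mode with sensitivity factors above 1 degrades every affected meta-action and hence the whole chain.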

  15. Critical velocities for deflagration and detonation triggered by voids in a REBO high explosive

    Energy Technology Data Exchange (ETDEWEB)

    Herring, Stuart Davis [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Jensen, Niels G [Los Alamos National Laboratory

    2010-01-01

    The effects of circular voids on the shock sensitivity of a two-dimensional model high explosive crystal are considered. We simulate a piston impact using molecular dynamics simulations with a Reactive Empirical Bond Order (REBO) model potential for a sub-micron, sub-ns exothermic reaction in a diatomic molecular solid. The probability of initiating chemical reactions is found to rise more suddenly with increasing piston velocity for larger voids that collapse more deterministically. A void with radius as small as 10 nm reduces the minimum initiating velocity by a factor of 4. The transition at larger velocities to detonation is studied in a micron-long sample with a single void (and its periodic images). The reaction yield during the shock traversal increases rapidly with velocity, then becomes a prompt, reliable detonation. A void of radius 2.5 nm reduces the critical velocity by 10% from the perfect crystal. A Pop plot of the time-to-detonation at higher velocities shows a characteristic pressure dependence.

  16. Determination of anisotropic velocity model by reflection tomography of compression and shear modes; Determination de modele de vitesse anisotrope par tomographie de reflexion des modes de compression et de cisaillement

    Energy Technology Data Exchange (ETDEWEB)

    Stopin, A.

    2001-12-01

As with the jump from 2D to 3D, seismic exploration is undergoing a new revolution with the use of converted PS waves. Indeed, PS converted waves are proving their potential as a tool for imaging through gas, lithology discrimination, structural confirmation, and more. Nevertheless, processing converted shear data, and in particular determining accurate P and S velocity models for depth imaging of these data, is still a challenging problem, especially when the subsurface is anisotropic. To solve this velocity model determination problem we propose to use reflection travel time tomography. In a first step, we derive a new approximation of the exact phase velocity equation of the SV wave in anisotropic (TI) media. This new approximation is valid for non-weak anisotropy and is mathematically simpler to handle than the exact equation. Then, starting from an isotropic reflection tomography tool developed at IFP, we extend the isotropic bending ray tracing method to the anisotropic case and we implement the quantities necessary for the determination of the anisotropy parameters from the travel time data. Using synthetic data we then study the influence of the different anisotropy parameters on the travel times. From this analysis we propose a methodology to determine a complete anisotropic subsurface model (P and S layer velocities, interface geometries, anisotropy parameters). Finally, on a real data set from the Gulf of Mexico we demonstrate that this new anisotropic reflection tomography tool allows us to obtain a reliable subsurface model yielding kinematically correct and mutually coherent PP and PS images in depth; such a result could not be obtained with an isotropic velocity model. Similar results are obtained on a North Sea data set. (author)
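For orientation, the standard weak-anisotropy (Thomsen) phase-velocity approximations for a TI medium look like the following; note that the record's own SV approximation is a different, non-weak formula that is not reproduced here:

```python
import math

def thomsen_vp(vp0, eps, delta, theta):
    # P-wave phase velocity, weak-anisotropy approximation:
    # v(theta) = vp0 * (1 + delta*sin^2*cos^2 + eps*sin^4)
    s, c = math.sin(theta), math.cos(theta)
    return vp0 * (1.0 + delta * s * s * c * c + eps * s ** 4)

def thomsen_vsv(vp0, vs0, eps, delta, theta):
    # SV-wave phase velocity, weak-anisotropy approximation:
    # sigma = (vp0/vs0)^2 * (eps - delta) controls the SV angular variation
    s, c = math.sin(theta), math.cos(theta)
    sigma = (vp0 / vs0) ** 2 * (eps - delta)
    return vs0 * (1.0 + sigma * s * s * c * c)
```

The SV term vanishes when eps = delta (elliptical anisotropy), which is why PS travel times are so sensitive to the difference between the two parameters.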


  18. A model relating Eulerian spatial and temporal velocity correlations

    Science.gov (United States)

    Cholemari, Murali R.; Arakeri, Jaywant H.

    2006-03-01

    In this paper we propose a model to relate Eulerian spatial and temporal velocity autocorrelations in homogeneous, isotropic and stationary turbulence. We model the decorrelation as the eddies of various scales becoming decorrelated. This enables us to connect the spatial and temporal separations required for a certain decorrelation through the ‘eddy scale’. Given either the spatial or the temporal velocity correlation, we obtain the ‘eddy scale’ and the rate at which the decorrelation proceeds. This leads to a spatial separation from the temporal correlation and a temporal separation from the spatial correlation, at any given value of the correlation relating the two correlations. We test the model using experimental data from a stationary axisymmetric turbulent flow with homogeneity along the axis.

  19. Evaluation of the Most Reliable Procedure of Determining Jump Height During the Loaded Countermovement Jump Exercise: Take-Off Velocity vs. Flight Time.

    Science.gov (United States)

    Pérez-Castilla, Alejandro; García-Ramos, Amador

    2018-07-01

    Pérez-Castilla, A and García-Ramos, A. Evaluation of the most reliable procedure of determining jump height during the loaded countermovement jump exercise: Take-off velocity vs. flight time. J Strength Cond Res 32(7): 2025-2030, 2018-This study aimed to compare the reliability of jump height between the 2 standard procedures of analyzing force-time data (take-off velocity [TOV] and flight time [FT]) during the loaded countermovement (CMJ) exercise performed with a free-weight barbell and in a Smith machine. The jump height of 17 men (age: 22.2 ± 2.2 years, body mass: 75.2 ± 7.1 kg, and height: 177.0 ± 6.0 cm) was tested in 4 sessions (twice for each CMJ type) against external loads of 17, 30, 45, 60, and 75 kg. Jump height reliability was comparable between the TOV (coefficient of variation [CV]: 6.42 ± 2.41%) and FT (CV: 6.53 ± 2.17%) during the free-weight CMJ, but it was higher for the FT when the CMJ was performed in a Smith machine (CV: 11.34 ± 3.73% for TOV and 5.95 ± 1.12% for FT). Bland-Altman plots revealed trivial differences (≤0.27 cm) and no heteroscedasticity of the errors (R ≤ 0.09) for the jump height obtained by the TOV and FT procedures, whereas the random error between both procedures was higher for the CMJ performed in the Smith machine (2.02 cm) compared with the free-weight barbell (1.26 cm). Based on these results, we recommend the FT procedure to determine jump height during the loaded CMJ performed in a Smith machine, whereas the TOV and FT procedures provide similar reliability during the free-weight CMJ.
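The two procedures compared in this study rest on elementary projectile equations; a quick sketch (the gravity constant is an assumption):

```python
G = 9.81  # m/s^2, assumed standard gravity

def height_from_takeoff_velocity(v_to):
    # TOV procedure: h = v^2 / (2g)
    return v_to ** 2 / (2.0 * G)

def height_from_flight_time(t_flight):
    # FT procedure: h = g * t^2 / 8 (symmetric rise and fall)
    return G * t_flight ** 2 / 8.0
```

For an ideal jump the two agree exactly (t = 2v/g); the reliability differences reported above come from how each quantity is measured, not from the equations themselves.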

  20. Numerical Material Model for Composite Laminates in High-Velocity Impact Simulation

    Directory of Open Access Journals (Sweden)

    Tao Liu

Full Text Available Abstract A numerical material model for composite laminates was developed and integrated into nonlinear dynamic explicit finite element programs as a material user subroutine. This model, coupled with a nonlinear equation of state (EOS), is a macro-mechanical model used to simulate the major mechanical behaviors of composite laminates under high-velocity impact conditions. The basic theoretical framework of the developed material model is introduced. An inverse flyer plate simulation was conducted, which demonstrated the advantage of the developed model in characterizing the nonlinear shock response. The developed model and its implementation were validated through a classic ballistic impact problem, i.e., a projectile impacting a Kevlar29/Phenolic laminate. The failure modes and ballistic limit velocity were analyzed, and good agreement with the analytical and experimental results was achieved. The computational capacity of this model for Kevlar/Epoxy laminates with different architectures, i.e., plain-woven and cross-plied laminates, was further evaluated, and the residual velocity curves and damage cone were accurately predicted.

  1. Implication of Broadband Dispersion Measurements in Constraining Upper Mantle Velocity Structures

    Science.gov (United States)

    Kuponiyi, A.; Kao, H.; Cassidy, J. F.; Darbyshire, F. A.; Dosso, S. E.; Gosselin, J. M.; Spence, G.

    2017-12-01

    Dispersion measurements from earthquake (EQ) data are traditionally inverted to obtain 1-D shear-wave velocity models, which provide information on deep earth structures. However, in many cases, EQ-derived dispersion measurements lack short-period information, which theoretically should provide details of shallow structures. We show that in at least some cases short-period information, such as can be obtained from ambient seismic noise (ASN) processing, must be combined with EQ dispersion measurements to properly constrain deeper (e.g. upper-mantle) structures. To verify this, synthetic dispersion data are generated using hypothetical velocity models under four scenarios: EQ only (with and without deep low-velocity layers) and combined EQ and ASN data (with and without deep low-velocity layers). The now "broadband" dispersion data are inverted using a trans-dimensional Bayesian framework with the aim of recovering the initial velocity models and assessing uncertainties. Our results show that the deep low-velocity layer could only be recovered from the inversion of the combined ASN-EQ dispersion measurements. Given this result, we proceed to describe a method for obtaining reliable broadband dispersion measurements from both ASN and EQ and show examples for real data. The implication of this study in the characterization of lithospheric and upper mantle structures, such as the Lithosphere-Asthenosphere Boundary (LAB), is also discussed.

  2. Validity and reliability of a novel iPhone app for the measurement of barbell velocity and 1RM on the bench-press exercise.

    Science.gov (United States)

    Balsalobre-Fernández, Carlos; Marchante, David; Muñoz-López, Mario; Jiménez, Sergio L

    2018-01-01

The purpose of this study was to analyse the validity and reliability of a novel iPhone app (named: PowerLift) for the measurement of mean velocity on the bench-press exercise. Additionally, the accuracy of the estimation of the 1-Repetition maximum (1RM) using the load-velocity relationship was tested. To do this, 10 powerlifters (Mean (SD): age = 26.5 ± 6.5 years; bench press 1RM·kg⁻¹ = 1.34 ± 0.25) completed an incremental test on the bench-press exercise with 5 different loads (75-100% 1RM), while the mean velocity of the barbell was registered using a linear transducer (LT) and PowerLift. Results showed a very high correlation between the LT and the app (r = 0.94, SEE = 0.028 m·s⁻¹) for the measurement of mean velocity. Bland-Altman plots (R² = 0.011) and intraclass correlation coefficient (ICC = 0.965) revealed a very high agreement between both devices. A systematic bias by which the app registered slightly higher values than the LT (P velocity in the bench-press exercise.
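The load-velocity 1RM estimation such an app performs can be sketched as a straight-line fit extrapolated to an assumed minimal velocity at 1RM. The 0.17 m/s threshold below is a commonly reported bench-press value, not one taken from this study:

```python
import numpy as np

def estimate_1rm(loads_kg, mean_velocities, v_at_1rm=0.17):
    # fit load as a linear function of mean velocity across submaximal
    # sets, then extrapolate to the assumed velocity at the 1RM load
    slope, intercept = np.polyfit(mean_velocities, loads_kg, 1)
    return slope * v_at_1rm + intercept
```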

  3. Developing a Crustal and Upper Mantle Velocity Model for the Brazilian Northeast

    Science.gov (United States)

    Julia, J.; Nascimento, R.

    2013-05-01

Development of 3D models for the earth's crust and upper mantle is important for accurately predicting travel times for regional phases and for improving seismic event location. The Brazilian Northeast is a tectonically active area within stable South America and displays one of the highest levels of seismicity in Brazil, with earthquake swarms containing events up to mb 5.2. Since 2011, seismic activity has been routinely monitored through the Rede Sismográfica do Nordeste (RSisNE), a permanent network supported by the national oil company PETROBRAS and consisting of 15 broadband stations with an average spacing of ~200 km. Accurate event locations are required to correctly characterize and identify seismogenic areas in the region and assess seismic hazard. Yet, no 3D model of crustal thickness and crustal and upper mantle velocity variation exists. The first step in developing such models is to refine crustal thickness and depths to major seismic velocity boundaries in the crust and improve on seismic velocity estimates for the upper mantle and crustal layers. We present recent results on crustal and uppermost mantle structure in NE Brazil that will contribute to the development of a 3D model of velocity variation. Our approach has consisted of: (i) computing receiver functions to obtain point estimates of crustal thickness and Vp/Vs ratio, and (ii) jointly inverting receiver functions and surface-wave dispersion velocities from an independent tomography study to obtain S-velocity profiles at each station. This approach has been used at all the broadband stations of the monitoring network plus 15 temporary, short-period stations that reduced the inter-station spacing to ~100 km. We expect our contributions will provide the basis to produce full 3D velocity models for the Brazilian Northeast and help determine accurate locations for seismic events in the region.

  4. A new settling velocity model to describe secondary sedimentation

    DEFF Research Database (Denmark)

    Ramin, Elham; Wágner, Dorottya Sarolta; Yde, Lars

    2014-01-01

Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM.
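For context, the classical double-exponential (Takács) settling-velocity function that models like this one extend reads as follows; the parameter values are illustrative defaults, not the calibrated ones from this study:

```python
import math

def takacs_settling_velocity(x, v0=6.0, r_h=0.4, r_p=2.5, x_min=0.1):
    # hindered-settling velocity (m/h) vs. solids concentration x (kg/m^3):
    # v = v0 * (exp(-r_h*(x - x_min)) - exp(-r_p*(x - x_min))), clipped to [0, v0]
    xs = max(x - x_min, 0.0)
    v = v0 * (math.exp(-r_h * xs) - math.exp(-r_p * xs))
    return max(0.0, min(v, v0))
```

The two exponentials give near-zero settling at very low concentrations and hindered settling at high ones; compression behaviour, the focus of the record above, requires additional terms.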

  5. Towards a new tool to develop a 3-D shear-wave velocity model from converted waves

    Science.gov (United States)

    Colavitti, Leonardo; Hetényi, György

    2017-04-01

The main target of this work is to develop a new method in which we exploit converted waves to construct a fully 3-D shear-wave velocity model of the crust. A reliable 3-D model is very important in Earth sciences because geological structures may vary significantly in their lateral dimension. In particular, shear waves provide valuable complementary information with respect to P-waves because they usually guarantee a much better correlation in terms of rock density and mechanical properties, reducing interpretation ambiguities. Therefore, it is fundamental to develop a new technique to improve structural images and to describe different lithologies in the crust. In this study we start from the analysis of receiver functions (RF, Langston, 1977), which are nowadays largely used for structural investigations based on passive seismic experiments, to map Earth discontinuities at depth. The RF technique is also commonly used to invert for velocity structure beneath single stations. Here, we plan to combine two strengths of the RF method: shear-wave velocity inversion and dense arrays. Starting from a simple 3-D forward model, synthetic RFs are obtained by extracting the structure along a ray to match observed data. During the inversion, thanks to a dense station network, we aim to build a multi-layer crustal model of shear-wave velocity. The initial model should be kept simple to make sure that the inversion process is not influenced by the constraints in terms of depth and velocity posed at the beginning. The RF inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure.
The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999a, b), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter

  6. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  7. Bootstrap inversion for Pn wave velocity in North-Western Italy

    Directory of Open Access Journals (Sweden)

    C. Eva

    1997-06-01

Full Text Available An inversion of Pn arrival times from regional distance earthquakes (180-800 km), recorded by 94 seismic stations operating in North-Western Italy and surrounding areas, was carried out to image lateral variations of P-wave velocity at the crust-mantle boundary, and to estimate the static delay time at each station. The reliability of the obtained results was assessed using both synthetic tests and the bootstrap Monte Carlo resampling technique. Numerical simulations demonstrated the existence of a trade-off between cell velocities and estimated station delay times along the edge of the model. Bootstrap inversions were carried out to determine the standard deviation of velocities and time terms. Low Pn velocity anomalies are detected beneath the outer side of the Alps (-6%) and the Western Po plain (-4%), in correspondence with two regions of strong crustal thickening and negative Bouguer anomaly. In contrast, high Pn velocities are imaged beneath the inner side of the Alps (+4%), indicating the presence of a high-velocity, high-density lower crust-upper mantle. The Ligurian sea shows high Pn velocities close to the Ligurian coastlines (+3%) and low Pn velocities (-1.5%) in the middle of the basin, in agreement with the upper mantle velocity structure revealed by seismic refraction profiles.
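The bootstrap uncertainty estimate used here follows the generic resampling recipe; the sketch below uses assumed names and is not the study's implementation:

```python
import numpy as np

def bootstrap_std(data, estimator, n_boot=1000, seed=0):
    # resample the data with replacement, re-run the estimator each time,
    # and report the spread of the resampled estimates as the uncertainty
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    stats = [estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return float(np.std(stats))
```

In the tomography setting the "estimator" would be the whole inversion applied to a resampled set of arrival times, yielding standard deviations for cell velocities and station terms.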

  8. Reliability and Validity Assessment of a Linear Position Transducer

    Science.gov (United States)

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and the T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly AV, AP) make this device a useful tool for monitoring resistance training. Key points: This study determined the validity and reliability of peak velocity, average velocity, peak power and average power measurements made using a linear position transducer. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and power. PMID:25729300
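The bias and agreement statistics reported above can be computed in a few lines; this is a generic Bland-Altman sketch, not the study's analysis script:

```python
import numpy as np

def bland_altman(a, b):
    # systematic bias and 95% limits of agreement between paired measurements
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)
```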

  9. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  10. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS
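A minimal sketch of the Weibull IFR model the record refers to (parameter values in the test are illustrative):

```python
import math

def weibull_reliability(t, eta, beta):
    # R(t) = exp(-(t/eta)^beta); beta > 1 gives an increasing failure
    # rate (IFR), appropriate for an aging engineered barrier
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, eta, beta):
    # h(t) = (beta/eta) * (t/eta)^(beta - 1); increasing in t when beta > 1
    return (beta / eta) * (t / eta) ** (beta - 1.0)
```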


  12. Velocity profiles in idealized model of human respiratory tract

    Science.gov (United States)

    Elcner, J.; Jedelsky, J.; Lizal, F.; Jicha, M.

    2013-04-01

This article deals with a numerical simulation focused on velocity profiles in an idealized model of the human upper airways during steady inspiration. Three regimes of breathing were investigated: resting condition, deep breathing and light activity, which correspond to the most common regimes used for experiments and simulations. The calculation was validated against experimental data given by Phase Doppler Anemometry performed on a model with the same geometry. This comparison was made at multiple points forming one cross-section in the trachea near the first bifurcation of the bronchial tree. The development of the velocity profile in the trachea during steady inspiration is discussed with respect to common phenomena formed in the trachea and to future research on the transport of aerosol particles in the human respiratory tract.


  14. New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers

    Science.gov (United States)

    Poroseva, Svetlana; Murman, Scott

    2014-11-01

    To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of the statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption of Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In this way, the closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows, including boundary layers. We will address modeling the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of model validation against DNS data in a channel flow and in a zero-pressure-gradient boundary layer over a flat plate will be demonstrated. Part of the material is based upon work supported by NASA under award NNX12AJ61A.

  15. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  16. A new approach for modeling dry deposition velocity of particles

    Science.gov (United States)

    Giardina, M.; Buffa, P.

    2018-05-01

    The dry deposition process is recognized as an important pathway among the various removal processes of pollutants in the atmosphere. Several models reported in the literature can predict the dry deposition velocity of particles of different diameters, but many of them are not capable of representing dry deposition phenomena for several categories of pollutants and deposition surfaces. Moreover, their application is valid only under specific conditions, and only if the data meet all of the assumptions used to define the model. In this paper a new dry deposition velocity model based on an electrical analogy scheme is proposed to overcome these issues. The dry deposition velocity is evaluated by assuming that the resistances affecting the particle flux in the quasi-laminar sub-layer can be combined to take into account local features of the mutual influence of inertial impact processes and turbulent ones. Comparisons with experimental data from the literature indicate that the proposed model captures the main dry deposition phenomena with good agreement for the examined environmental conditions and deposition surfaces. The proposed approach could be easily implemented within atmospheric dispersion modeling codes, efficiently addressing different deposition surfaces and several classes of particle pollution.
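    For orientation, the classic resistance-analogy formulation of particle dry deposition (the textbook scheme, not the authors' exact model) combines an aerodynamic resistance, a quasi-laminar sub-layer resistance, and gravitational settling; all numbers below are illustrative:

```python
def dry_deposition_velocity(r_a, r_b, v_s):
    """Classic resistance-analogy deposition velocity for particles (m/s):
    aerodynamic resistance r_a and quasi-laminar sub-layer resistance r_b
    (both s/m), gravitational settling velocity v_s (m/s)."""
    return v_s + 1.0 / (r_a + r_b + r_a * r_b * v_s)

# illustrative values for a ~1 um particle over a vegetated surface
vd = dry_deposition_velocity(r_a=50.0, r_b=300.0, v_s=3.5e-5)
print(vd)  # of order a few mm/s
```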

  17. Comparison of CME radial velocities from a flux rope model and an ice cream cone model

    Science.gov (United States)

    Kim, T.; Moon, Y.; Na, H.

    2011-12-01

    Coronal Mass Ejections (CMEs) on the Sun are the largest energy release process in the solar system and act as the primary driver of geomagnetic storms and other space weather phenomena on the Earth. It is therefore very important to infer their directions, velocities, and three-dimensional structures. In this study, we choose two different models to infer the radial velocities of halo CMEs since 2008: (1) an ice cream cone model by Xue et al. (2005) using SOHO/LASCO data, and (2) a flux rope model by Thernisien et al. (2009) using STEREO/SECCHI data. In addition, we use another flux rope model in which the separation angle of the flux rope is zero, which is morphologically similar to the ice cream cone model. The comparison shows that the CME radial velocities from the models correlate very well with one another (R > 0.9). We will extend this comparison to other partial CMEs observed by STEREO and SOHO.

  18. Should tsunami models use a nonzero initial condition for horizontal velocity?

    Science.gov (United States)

    Nava, G.; Lotto, G. C.; Dunham, E. M.

    2017-12-01

    Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require two initial conditions: one on sea surface height and another on depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). We run several full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor, using both idealized structures and a more realistic Tohoku structure. Substantial horizontal momentum is imparted to the ocean, but almost all momentum is carried away in the form of ocean acoustic waves. We compare tsunami propagation in each full-physics simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial conditions. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. 
Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves (from ocean acoustic and seismic waves) at some final time, and backpropagating the tsunami
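    The amplitude question above can be illustrated with the linearized 1-D shallow water equations, whose exact d'Alembert solution makes the role of the initial velocity explicit: with zero initial velocity an initial hump splits into two half-amplitude waves, whereas a one-way initial velocity u(x,0) = sqrt(g/h)·η0(x) carries all the momentum in one direction as a single full-amplitude wave. A minimal sketch (all parameter values are illustrative, not from the study):

```python
import numpy as np

g, h = 9.81, 4000.0            # gravity (m/s^2), uniform ocean depth (m)
c = np.sqrt(g * h)             # long-wave speed, ~198 m/s

def eta0(x):                   # initial sea-surface hump: 1 m high, ~50 km wide
    return np.exp(-(x / 50e3) ** 2)

x = np.linspace(-1e6, 1e6, 4001)
t = 2000.0                     # seconds of propagation

# u(x,0) = 0: the hump splits into two half-amplitude waves (d'Alembert)
eta_split = 0.5 * (eta0(x - c * t) + eta0(x + c * t))
# u(x,0) = sqrt(g/h)*eta0: momentum all one way -> one full-amplitude wave
eta_oneway = eta0(x - c * t)

print(round(eta_split.max(), 2), round(eta_oneway.max(), 2))  # 0.5 1.0
```

This is only the linear, flat-bottom limit; it shows why the choice of initial velocity condition can change the predicted tsunami height by a factor of two.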

  19. Stabilization and Riesz basis property for an overhead crane model with feedback in velocity and rotating velocity

    Directory of Open Access Journals (Sweden)

    Toure K. Augustin

    2014-06-01

    Full Text Available This paper studies a variant of an overhead crane model, with a control force in velocity and rotating velocity on the platform. We obtain, under certain conditions, the well-posedness and the strong stabilization of the closed-loop system. We then analyze the spectrum of the system. Using a method due to Shkalikov, we prove the existence of a sequence of generalized eigenvectors of the system which forms a Riesz basis for the state energy Hilbert space.

  20. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Full Text Available Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms for finding the optimum supply plan, using an economic criterion together with a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated on that basis. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations to be taken into account when planning supplies with the supplier's functional reliability is presented.
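    The probability of non-failure operation of a supply chain can be sketched under the simplifying assumption of a series chain of stages, each with redundant suppliers; the structure and reliability values below are illustrative, not the paper's model:

```python
def stage_reliability(supplier_reliabilities):
    """A stage works if at least one of its redundant suppliers delivers."""
    p_all_fail = 1.0
    for r in supplier_reliabilities:
        p_all_fail *= (1.0 - r)
    return 1.0 - p_all_fail

def chain_reliability(stages):
    """Series chain: every stage must work for the supply channel to work."""
    p = 1.0
    for suppliers in stages:
        p *= stage_reliability(suppliers)
    return p

# illustrative three-stage chain: two suppliers, one carrier, one distributor
p = chain_reliability([[0.90, 0.80], [0.95], [0.99]])
print(round(p, 4))  # 0.9217
```

A supply plan could then be optimized by a linear program subject to a constraint that p stays above the consumer's required level.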

  1. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand, with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software.
System structural models must also be developed in order to predict system reliability based upon the reliability
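    The combination step described above, merging reliability estimates for constituent parts (e.g., hardware and software) while carrying their uncertainty, can be sketched with Monte Carlo sampling. The Beta parameters and the series (both-must-work) structure below are illustrative assumptions, not the report's models:

```python
import random

random.seed(1)

def system_reliability_samples(n, hw=(490, 10), sw=(480, 20)):
    """Monte Carlo: sample hardware and software reliabilities from Beta
    posteriors (alpha ~ successes, beta ~ failures; both hypothetical)
    and combine them in series: the system works only if both work."""
    samples = []
    for _ in range(n):
        r_hw = random.betavariate(*hw)
        r_sw = random.betavariate(*sw)
        samples.append(r_hw * r_sw)
    return samples

s = system_reliability_samples(20000)
mean = sum(s) / len(s)
print(round(mean, 3))  # near 0.98 * 0.96 = 0.941
```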

  2. Field Testing of an In-well Point Velocity Probe for the Rapid Characterization of Groundwater Velocity

    Science.gov (United States)

    Osorno, T.; Devlin, J. F.

    2017-12-01

    Reliable estimates of groundwater velocity are essential to best implement in-situ monitoring and remediation technologies. The In-well Point Velocity Probe (IWPVP) is an inexpensive, reusable tool developed for rapid measurement of groundwater velocity at the centimeter scale in monitoring wells. IWPVP measurements of groundwater speed are based on a small-scale tracer test conducted as ambient groundwater passes through the well screen and the body of the probe. The horizontal flow direction can be determined from the difference in tracer mass passing detectors placed in four funnel-and-channel pathways through the probe, arranged in a cross pattern. The design viability of the IWPVP was confirmed using a two-dimensional numerical model in Comsol Multiphysics, followed by a series of laboratory tank experiments in which IWPVP measurements were calibrated to quantify seepage velocities in both fine and medium sand. Lab results showed that the IWPVP was capable of measuring the seepage velocity in less than 20 minutes per test when the seepage velocity was in the range of 0.5 to 4.0 m/d. Further, the IWPVP estimated the groundwater speed with a precision of ± 7% and an accuracy of ± 14%, on average. The horizontal flow direction was determined with an accuracy of ± 15°, on average. Recently, a pilot field test of the IWPVP was conducted in the Borden aquifer, C.F.B. Borden, Ontario, Canada. Approximately 44 IWPVP tests were conducted within two 2-inch groundwater monitoring wells, each comprising a 5 ft section of #8 commercial well screen. Again, all tests were completed in under 20 minutes. The velocities estimated from IWPVP data were compared to 21 Point Velocity Probe (PVP) tests, as well as to Darcy-based estimates of groundwater velocity. Preliminary data analysis shows strong agreement between the IWPVP and PVP estimates of groundwater velocity. Further, both the IWPVP and PVP estimates of groundwater velocity appear to be reasonable when
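    As a rough illustration of how such a small-scale tracer test yields a speed and a direction, the helpers below convert a hypothetical tracer travel distance and peak arrival time into a seepage speed, and four per-channel tracer masses into a flow bearing. The geometry, calibration factor, and all numbers are invented for illustration and are not taken from the IWPVP design:

```python
import math

def seepage_speed(travel_dist_m, peak_arrival_s, calibration=1.0):
    """Apparent speed from a tracer test: distance from injection point to
    detector over the tracer peak arrival time, scaled by a lab-derived
    calibration factor (all values hypothetical). Returns m/d."""
    return calibration * travel_dist_m / peak_arrival_s * 86400.0

def flow_bearing(m_n, m_e, m_s, m_w):
    """Bearing (degrees clockwise from north) from the net tracer-mass
    vector across four detectors arranged in a cross pattern."""
    return math.degrees(math.atan2(m_e - m_w, m_n - m_s)) % 360.0

v = seepage_speed(0.04, 1500.0)       # 4 cm in 25 min -> ~2.3 m/d
b = flow_bearing(1.0, 0.5, 0.2, 0.5)  # net mass northward -> 0 degrees
```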

  3. Flood Water Crossing: Laboratory Model Investigations for Water Velocity Reductions

    Directory of Open Access Journals (Sweden)

    Kasnon N.

    2014-01-01

    Full Text Available The occurrence of floods may negatively impact road traffic, making it difficult to mobilize traffic and causing damage to vehicles, which may become stranded and trigger further traffic problems. High water-flow velocities occur when no objects are present that are capable of diffusing the water velocity on the road surface. The shape, orientation, and size of an object placed beside the road as a diffuser are important for effective attenuation of the water flow. To investigate the water flow, a laboratory experiment was set up and models were constructed to study the reduction in flow velocity. The velocity of water before and after passing through the diffuser objects was investigated. This paper focuses on laboratory experiments using sensors to determine the flow velocity of the water before and after it passes through the two best diffuser objects chosen from a previous flow pattern experiment.

  4. Force-Velocity Relationship of Upper Body Muscles: Traditional Versus Ballistic Bench Press.

    Science.gov (United States)

    García-Ramos, Amador; Jaric, Slobodan; Padial, Paulino; Feriche, Belén

    2016-04-01

    This study aimed to (1) evaluate the linearity of the force-velocity relationship, as well as the reliability of maximum force (F0), maximum velocity (V0), slope (a), and maximum power (P0); (2) compare these parameters between the traditional and ballistic bench press (BP); and (3) determine the correlation of F0 with the directly measured BP 1-repetition maximum (1RM). Thirty-two men randomly performed 2 sessions of traditional BP and 2 sessions of ballistic BP during 2 consecutive weeks. Both the maximum and mean values of force and velocity were recorded when loaded by 20-70% of 1RM. All force-velocity relationships were strongly linear (r > .99). While F0 and P0 were highly reliable (ICC: 0.91-0.96, CV: 3.8-5.1%), lower reliability was observed for V0 and a (ICC: 0.49-0.81, CV: 6.6-11.8%). Trivial differences between exercises were found for F0 (ES: velocity relationship is useful to assess the upper body maximal capabilities to generate force, velocity, and power.
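    The force-velocity parameters named above (F0, V0, and P0) follow directly from an ordinary least-squares line fitted to force-velocity data: F0 is the force-intercept, V0 the velocity-intercept, and P0 = F0·V0/4 the apex of the resulting parabolic power curve. The data points below are synthetic, merely shaped like a typical linear profile:

```python
import numpy as np

# synthetic (velocity, force) pairs approximating a linear F-v profile
vel   = np.array([1.60, 1.25, 0.90, 0.55, 0.20])   # m/s
force = np.array([200., 300., 400., 500., 600.])   # N

slope, intercept = np.polyfit(vel, force, 1)  # F(v) = intercept + slope*v
F0 = intercept            # force-intercept: maximum isometric force (v = 0)
V0 = -intercept / slope   # velocity-intercept: maximum velocity (F = 0)
P0 = F0 * V0 / 4.0        # peak of P(v) = F(v)*v for a linear F-v line

print(round(F0, 1), round(V0, 2), round(P0, 1))
```

P0 = F0·V0/4 holds exactly for a linear force-velocity relationship, since P(v) = (F0 − (F0/V0)v)·v is maximized at v = V0/2.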

  5. Hydrodynamic Equations for Flocking Models without Velocity Alignment

    Science.gov (United States)

    Peruani, Fernando

    2017-10-01

    The spontaneous emergence of collective motion patterns is usually associated with the presence of a velocity alignment mechanism that mediates the interactions among the moving individuals. Despite this widespread view, it has been shown recently that several flocking behaviors can emerge in the absence of velocity alignment, as a result of short-range, position-based, attractive forces that act inside a vision cone. Here, we derive the corresponding hydrodynamic equations of a microscopic position-based flocking model, reviewing and extending previously reported results. In particular, we show that three distinct macroscopic collective behaviors can be observed: i) the coarsening of aggregates with no orientational order, ii) the emergence of static, elongated nematic bands, and iii) the formation of moving, locally polar structures, which we call worms. The derived hydrodynamic equations indicate that active particles interacting via position-based interactions belong to a distinct class of active systems, fundamentally different from other active systems, including velocity-alignment-based flocking systems.

  6. Modeling non-Fickian dispersion by use of the velocity PDF on the pore scale

    Science.gov (United States)

    Kooshapur, Sheema; Manhart, Michael

    2015-04-01

    To obtain a description of reactive flows in porous media, apart from the geometrical complications of resolving the velocities and scalar values, one has to deal with the additional reactive term in the transport equation. An accurate description of the interface of the reacting fluids, which is strongly influenced by dispersion, is essential for resolving this term. In REV-based simulations the reactive term needs to be modeled taking sub-REV fluctuations and possibly non-Fickian dispersion into account. Non-Fickian dispersion has been observed in strongly heterogeneous domains and in early phases of transport. A fully resolved solution of the Navier-Stokes and transport equations, which yields a detailed description of the flow properties, dispersion, interfaces of fluids, etc., is however not practical for domains containing more than a few thousand grains, due to the huge computational effort required. Probability Density Function (PDF) based methods describing the velocity distribution in the pore space can facilitate the understanding and modelling of non-Fickian dispersion [1,2]. Our aim is to model the transition between non-Fickian and Fickian dispersion in a random sphere pack within the framework of the PDF-based transport model proposed by Meyer and Tchelepi [1,3], a stochastic transport model in which the velocity components of tracer particles are represented by a continuous Markovian stochastic process. In addition to [3], we consider the effects of pore-scale diffusion and formulate a different stochastic equation for the increments in velocity space from first principles. To assess the terms in this equation, we performed Direct Numerical Simulations (DNS), solving the Navier-Stokes equation on a random sphere pack. We extracted the PDFs and statistical moments (up to the 4th moment) of the stream-wise velocity, u, and of the first- and second-order velocity derivatives, both independent of and conditioned on velocity. By using this data and

  7. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model
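    The core of the Bayesian hypothesis-testing idea, comparing the likelihood of the observed data under the model's prediction against an alternative, can be sketched with binomial likelihoods. The failure counts and probabilities below are hypothetical, and the maximum-likelihood alternative is just one simple choice of competing hypothesis:

```python
from math import comb

def binom_lik(k, n, p):
    """Binomial likelihood of k failures in n demands at failure prob p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# the model predicts failure probability 0.01; 2 failures in 100 demands observed
n, k = 100, 2
p_model = 0.01
p_alt = k / n  # maximum-likelihood alternative hypothesis

bayes_factor = binom_lik(k, n, p_model) / binom_lik(k, n, p_alt)
# a Bayes factor near 1 supports accepting the model; << 1 favours rejection
print(round(bayes_factor, 3))
```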

  8. Assessment of effectiveness of geologic isolation systems: geostatistical modeling of pore velocity

    International Nuclear Information System (INIS)

    Devary, J.L.; Doctor, P.G.

    1981-06-01

    A significant part of evaluating a geologic formation as a nuclear waste repository involves the modeling of contaminant transport in the surrounding media in the event the repository is breached. The commonly used contaminant transport models are deterministic. However, the spatial variability of hydrologic field parameters introduces uncertainties into contaminant transport predictions. This paper discusses the application of geostatistical techniques to the modeling of spatially varying hydrologic field parameters required as input to contaminant transport analyses. Kriging estimation techniques were applied to Hanford Reservation field data to calculate hydraulic conductivity and the ground-water potential gradients. These quantities were statistically combined to estimate the groundwater pore velocity and to characterize the pore velocity estimation error. Combining geostatistical modeling techniques with product error propagation techniques results in an effective stochastic characterization of groundwater pore velocity, a hydrologic parameter required for contaminant transport analyses
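    The statistical combination described above, multiplying kriged quantities while propagating their estimation errors, can be sketched with first-order (product) error propagation under an independence assumption; the conductivity, gradient, and porosity values below are illustrative, not Hanford data:

```python
import math

def pore_velocity(K, sK, i, si, porosity=0.25):
    """Darcy pore velocity v = K*i/porosity, with first-order error
    propagation for the product of two independently kriged fields:
    hydraulic conductivity K (+/- sK) and potential gradient i (+/- si)."""
    v = K * i / porosity
    rel_err = math.sqrt((sK / K) ** 2 + (si / i) ** 2)
    return v, v * rel_err

# illustrative values: K = 5 +/- 1 m/d, gradient = 0.001 +/- 0.0002
v, sv = pore_velocity(K=5.0, sK=1.0, i=1e-3, si=2e-4)
print(v, sv)  # 0.02 m/d with ~28% relative uncertainty
```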

  9. The effect of corrosion on the structural reliability of steel offshore structures

    International Nuclear Information System (INIS)

    Melchers, Robert E.

    2005-01-01

    This paper considers essential theoretical concepts and data requirements for engineering structural reliability assessment suitable for the estimation of the safety and reliability of corroding ships, offshore structures and pipelines. Such infrastructure operates in a harsh environment. Allowance must be made for structural deterioration since protective measures such as paint coatings, galvanizing or cathodic protection may be ineffective. Reliability analysis requires accurate engineering models for the description and prediction of material corrosion loss and for the maximum depth of pitting. New probability-based models for both these forms of corrosion have been proposed recently and calibrated against a wide range of data. The effects of water velocity and of water pollution are reviewed and compared with recently reported field data for corrosion at an offshore oil platform. The data interpreted according to the model show good correlation when allowance is made for the season of first immersion and the adverse effects of seawater velocity and of water pollution. An example is given to illustrate the application of reliability analysis to a pipeline subject to pitting corrosion. An important outcome is that good quality estimation of the longer-term probability of loss of structural integrity requires good modelling of the longer-term corrosion behaviour. This is usually associated with anaerobic corrosion. As a result, it cannot be extrapolated from data for short-term corrosion, as this is associated with aerobic corrosion conditions.

  10. The effect of corrosion on the structural reliability of steel offshore structures

    Energy Technology Data Exchange (ETDEWEB)

    Melchers, Robert E. [Centre for Infrastructure Performance and Reliability, Department of Civil, Surveying and Environmental Engineering, School of Engineering, University of Newcastle, University Drive, Callaghan NSW 2300 (Australia)]. E-mail: rob.melchers@newcastle.edu.au

    2005-10-01

    This paper considers essential theoretical concepts and data requirements for engineering structural reliability assessment suitable for the estimation of the safety and reliability of corroding ships, offshore structures and pipelines. Such infrastructure operates in a harsh environment. Allowance must be made for structural deterioration since protective measures such as paint coatings, galvanizing or cathodic protection may be ineffective. Reliability analysis requires accurate engineering models for the description and prediction of material corrosion loss and for the maximum depth of pitting. New probability-based models for both these forms of corrosion have been proposed recently and calibrated against a wide range of data. The effects of water velocity and of water pollution are reviewed and compared with recently reported field data for corrosion at an offshore oil platform. The data interpreted according to the model show good correlation when allowance is made for the season of first immersion and the adverse effects of seawater velocity and of water pollution. An example is given to illustrate the application of reliability analysis to a pipeline subject to pitting corrosion. An important outcome is that good quality estimation of the longer-term probability of loss of structural integrity requires good modelling of the longer-term corrosion behaviour. This is usually associated with anaerobic corrosion. As a result, it cannot be extrapolated from data for short-term corrosion, as this is associated with aerobic corrosion conditions.

  11. Plant and control system reliability and risk model

    International Nuclear Information System (INIS)

    Niemelae, I.M.

    1986-01-01

    A new reliability modelling technique for control systems and plants is demonstrated. It is based on modified Boolean algebra and has been automated in an efficient computer code called RELVEC. The code is useful for obtaining an overall view of the reliability parameters or for an in-depth reliability analysis, which is essential in risk analysis, where the model must be capable of answering specific questions such as: 'What is the probability of this temperature limiter providing a false alarm?' or 'What is the probability of the air pressure in this subsystem dropping below the lower limit?'. (orig./DG)

  12. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

    This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, a reliability architecture for the double beam bridge crane system is proposed, and the reliability mathematical model is constructed.

  13. Measured and modeled dry deposition velocities over the ESCOMPTE area

    Science.gov (United States)

    Michou, M.; Laville, P.; Serça, D.; Fotiadi, A.; Bouchou, P.; Peuch, V.-H.

    2005-03-01

    Measurements of the dry deposition velocity of ozone have been made by the eddy correlation method during ESCOMPTE (Etude sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions). The strong local variability of natural ecosystems was sampled over several weeks in May, June and July 2001 at four sites with varying surface characteristics: a maize field, a Mediterranean forest, a Mediterranean shrub-land, and an almost bare soil. Measurements of nitrogen oxide deposition fluxes by the relaxed eddy correlation method have also been carried out at the same bare soil site. An evaluation of the deposition velocities computed by the surface module of the multi-scale Chemistry and Transport Model MOCAGE is presented. This module relies on a resistance approach, with a detailed treatment of the stomatal contribution to the surface resistance. Simulations at the finest model horizontal resolution (around 10 km) are compared to observations. While the seasonal variations agree with the literature, comparisons between raw model outputs and observations, at the different measurement sites and for the specific observing periods, are mixed. As the simulated meteorology at the scale of 10 km nicely captures the observed situations, the default set of surface characteristics (averaged at the resolution of a grid cell) appears to be one of the main reasons for the discrepancies found with observations. For each case, sensitivity studies have been performed to assess the impact of adjusting the surface characteristics to the observed ones, when available. Generally, correct agreement with the observed deposition velocities is obtained. This argues for a sub-grid scale representation of surface characteristics in the simulation of dry deposition velocities over such a complex area. Two other aspects appear in the discussion. Firstly, the strong influence of the soil water content on the plant

  14. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and the operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
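    The reliability physics model's central quantity, the probability that the operators' performance time exceeds the phenomenological time available, can be sketched by direct Monte Carlo sampling. The lognormal distributions and their parameters below are hypothetical stand-ins for those fitted in the study:

```python
import random

random.seed(0)

def hep_estimate(n=100000):
    """HEP = P(performance time > phenomenological time), the core of a
    time-reliability (reliability physics) model. Both distributions are
    hypothetical lognormals, not the ones fitted from MNR data."""
    failures = 0
    for _ in range(n):
        t_phen = random.lognormvariate(3.4, 0.3)  # time available (min)
        t_perf = random.lognormvariate(2.7, 0.5)  # operator response time
        if t_perf > t_phen:
            failures += 1
    return failures / n

hep = hep_estimate()
print(hep)  # roughly 0.1 with these illustrative parameters
```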

  15. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling the system performance of space-based detection systems it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons, such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust their fuel supplies. Typically, failure is divided into two categories: engineering mistakes and technology surprise. This document reports on a method of simulating space vehicle reliability in the DIORAMA framework.

  16. Welding wire velocity modelling and control using an optical sensor

    DEFF Research Database (Denmark)

    Nielsen, Kirsten M.; Pedersen, Tom S.

    2007-01-01

    In this paper a method for controlling the velocity of a welding wire at the tip of the handle is described. The method is an alternative to the traditional welding apparatus control system, in which the wire velocity is controlled internally in the welding machine, implying poor disturbance reduction. To obtain the tip velocity, a dynamic model of the wire/liner system is developed and verified. In the wire/liner system it turned out that backlash and reflections are influential factors, and an idea for handling the backlash is suggested. In addition, an optical sensor for measuring the wire velocity at the tip has been constructed. The optical sensor may be used, but focusing problems cause noise in the control loop, demanding a more precise mechanical wire feed system or an optical sensor with better focusing characteristics.

  17. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shock modeling, burn-in and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in a unified stochastic framework and presents numerous practical examples that illustrate recent theoretical findings of the authors. The populations of manufactured items in industry are usually heterogeneous. However, conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of eliminating 'weak' items from heterogeneous populations. Real-life objects operate in a changing environment, and one way to model the impact of this environment is via external shocks occurring in accordance with some stocha...

  18. Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.

    Science.gov (United States)

    Mouri, Hideaki

    2015-12-01

    For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.
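As a minimal numerical illustration of the scaling discussed above, the classical logarithmic law for the mean velocity makes equal height ratios produce equal velocity increments. The friction velocity and roughness length below are arbitrary illustrative values:

```python
import math

KAPPA = 0.41  # von Karman constant

def mean_velocity(z, u_tau=0.05, z0=1e-4):
    """Classical log law of the wall: U(z) = (u_tau / kappa) * ln(z / z0).
    u_tau (friction velocity, m/s) and z0 (roughness length, m) are
    illustrative, not taken from the paper."""
    return (u_tau / KAPPA) * math.log(z / z0)

# Logarithmic scaling: doubling the height always adds the same increment,
# reflecting the absence of any characteristic height in the scaling range.
d1 = mean_velocity(0.2) - mean_velocity(0.1)
d2 = mean_velocity(0.4) - mean_velocity(0.2)
```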

  19. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission-based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion for a falling body on an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
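The inelastic-collision assumption at a junction amounts to momentum conservation for the merging branches. A minimal sketch, with invented mass fluxes and velocities:

```python
def merged_velocity(m1, v1, m2, v2):
    """Velocity after a perfectly inelastic merge of two river branches
    (momentum conservation); m1 and m2 stand for the water mass fluxes
    of the branches, v1 and v2 for their flow velocities."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# A fast main branch absorbing a slower tributary (illustrative numbers).
v = merged_velocity(3.0, 2.0, 1.0, 1.0)  # -> 1.75
```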

  20. Modeling of system reliability Petri nets with aging tokens

    International Nuclear Information System (INIS)

    Volovoi, V.

    2004-01-01

    The paper addresses the dynamic modeling of degrading and repairable complex systems. Emphasis is placed on the convenience of modeling for the end user, with special attention being paid to the modeling part of a problem, which is considered to be decoupled from the choice of solution algorithms. Depending on the nature of the problem, these solution algorithms can include discrete event simulation or numerical solution of the differential equations that govern underlying stochastic processes. Such modularity allows a focus on the needs of system reliability modeling and tailoring of the modeling formalism accordingly. To this end, several salient features are chosen from the multitude of existing extensions of Petri nets, and a new concept of aging tokens (tokens with memory) is introduced. The resulting framework provides for flexible and transparent graphical modeling with excellent representational power that is particularly suited for system reliability modeling with non-exponentially distributed firing times. The new framework is compared with existing Petri-net approaches and other system reliability modeling techniques such as reliability block diagrams and fault trees. The relative differences are emphasized and illustrated with several examples, including modeling of load sharing, imperfect repair of pooled items, multiphase missions, and damage-tolerant maintenance. Finally, a simple implementation of the framework using discrete event simulation is described

  1. A math model for high velocity sensoring with a focal plane shuttered camera.

    Science.gov (United States)

    Morgan, P.

    1971-01-01

    A new mathematical model is presented which describes the image produced by a focal plane shutter-equipped camera. The model is based upon the well-known collinearity condition equations and incorporates both the translational and rotational motion of the camera during the exposure interval. The first differentials of the model with respect to exposure interval, delta t, yield the general matrix expressions for image velocities which may be simplified to known cases. The exposure interval, delta t, may be replaced under certain circumstances with a function incorporating blind velocity and image position if desired. The model is tested using simulated Lunar Orbiter data and found to be computationally stable as well as providing excellent results, provided that some external information is available on the velocity parameters.

  2. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  3. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  4. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for the identification of cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure cause and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model which provides predictions in good agreement with observed system performance, and which is applicable by non-experts in the field of reliability.

  5. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Full Text Available Reliability analysis for industrial maintenance is increasingly demanded by industry worldwide. Modern manufacturing facilities are equipped with data acquisition and monitoring systems that generate large volumes of data, and these data can be used to inform future decisions affecting the state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or unreliable. In this context, to analyze the reliability of an oil pump, this work examines and treats the incomplete, incorrect, or aberrant data in the reliability modelling of the pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.
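A minimal sketch of the regression-based replacement idea, assuming a hypothetical covariate (operating hours) that predicts the missing lifetime values; the data and the linear relationship are invented for illustration:

```python
# Least-squares imputation of a missing lifetime from a correlated covariate.
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

hours = [100, 200, 300, 400]   # covariate from complete records
life = [210, 410, 610, 810]    # observed lifetimes (complete records)
a, b = fit_line(hours, life)
imputed = a + b * 250          # fill a record whose lifetime is missing
```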

  6. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions are used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity, and a beta-binomial model for the analysis of failure probability. The parameters of these three models are estimated by the method of matching moments and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure behaviour of a set of components with a Weibull intensity function, using the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system; we consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
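For the simplest of the distributions listed above, the exponential, the maximum likelihood estimate with right-censored data has a closed form (number of failures divided by total time on test), which can be sketched as:

```python
def exp_mle(times, censored):
    """ML estimate of the exponential failure rate from possibly
    right-censored lifetimes: lambda_hat = failures / total time on test.
    `censored[i]` is True when observation i ended without a failure."""
    failures = sum(1 for c in censored if not c)
    total = sum(times)
    return failures / total

# Two observed failures and one unit still running at 2 h (invented data).
lam = exp_mle([5.0, 3.0, 2.0], [False, False, True])  # 2 failures / 10 h
```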

  7. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. When the software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of stochastic differential equations, performs comparatively better than the existing NHPP-based models.
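For context, the NHPP baseline against which the proposed SDE model is compared can be illustrated with the classical Goel-Okumoto mean value function; the parameter values here are illustrative assumptions, not fitted to any of the cited data sets:

```python
import math

def goel_okumoto(t, a=100.0, b=0.05):
    """Mean value function of the classical NHPP (Goel-Okumoto) SRGM:
    expected cumulative faults detected by test time t, with a the total
    fault content and b the per-fault detection rate (both illustrative)."""
    return a * (1.0 - math.exp(-b * t))

detected = goel_okumoto(30.0)   # expected faults found after 30 time units
remaining = 100.0 - detected    # expected residual fault content
```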

  8. Analytical study on the criticality of the stochastic optimal velocity model

    International Nuclear Information System (INIS)

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2006-01-01

    In recent works, we have proposed a stochastic cellular automaton model of traffic flow connecting two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero range process, with an additional parameter. It is also regarded as an extended version of the optimal velocity model, and moreover it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e. the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverse-λ form, and next identify their end-points from the microscopic behaviour of vehicles. Notably, by using a microscopic parameter which indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model, and point out the crucial differences between them.
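One limiting case of the model above is the asymmetric simple exclusion process, whose flux on a ring can be checked by direct simulation against the mean-field value ρ(1 − ρ). This sketch is a plain TASEP with random sequential update, not the full stochastic optimal velocity model:

```python
import random

def tasep_flux(density, n=200, steps=20000, seed=0):
    """Monte Carlo flux of the TASEP on a ring (random sequential update):
    the fraction of attempted updates that move a particle. The stationary
    value is close to the mean-field flux density * (1 - density)."""
    rng = random.Random(seed)
    occ = [i < int(density * n) for i in range(n)]
    rng.shuffle(occ)  # uniform configuration = stationary measure on a ring
    moves = 0
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + 1) % n
        if occ[i] and not occ[j]:
            occ[i], occ[j] = False, True
            moves += 1
    return moves / steps

flux = tasep_flux(0.5)  # mean-field prediction at half filling: 0.25
```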

  9. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model that extracts a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. The paper then proposes a reliable RANSAC framework using this preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
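For reference, plain RANSAC (without the proposed preprocessing step) can be sketched for 2-D line fitting on a toy dataset with a single outlier:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC for a 2-D line y = a*x + b: repeatedly fit a line to
    a random pair of points and keep the model with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: skip this hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

pts = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 10)]  # last point is an outlier
model, n_in = ransac_line(pts)  # recovers y = x with 4 inliers
```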

  10. Model-assisted measurements of suspension-feeding flow velocities.

    Science.gov (United States)

    Du Clos, Kevin T; Jones, Ian T; Carrier, Tyler J; Brady, Damian C; Jumars, Peter A

    2017-06-01

    Benthic marine suspension feeders provide an important link between benthic and pelagic ecosystems. The strength of this link is determined by suspension-feeding rates. Many studies have measured suspension-feeding rates using indirect clearance-rate methods, which are based on the depletion of suspended particles. Direct methods that measure the flow of water itself are less common, but they can be more broadly applied because, unlike indirect methods, direct methods are not affected by properties of the cleared particles. We present pumping rates for three species of suspension feeders, the clams Mya arenaria and Mercenaria mercenaria and the tunicate Ciona intestinalis, measured using a direct method based on particle image velocimetry (PIV). Past uses of PIV in suspension-feeding studies have been limited by strong laser reflections that interfere with velocity measurements proximate to the siphon. We used a new approach based on fitting PIV-based velocity profile measurements to theoretical profiles from computational fluid dynamic (CFD) models, which allowed us to calculate inhalant siphon Reynolds numbers (Re). We used these inhalant Re and measurements of siphon diameters to calculate exhalant Re, pumping rates, and mean inlet and outlet velocities. For the three species studied, inhalant Re ranged from 8 to 520, and exhalant Re ranged from 15 to 1073. Volumetric pumping rates ranged from 1.7 to 7.4 l h⁻¹ for M. arenaria, 0.3 to 3.6 l h⁻¹ for M. mercenaria, and 0.07 to 0.97 l h⁻¹ for C. intestinalis. We also used CFD models based on measured pumping rates to calculate capture regions, which reveal the spatial extent of pumped water. Combining PIV data with CFD models may be a valuable approach for future suspension-feeding studies. © 2017. Published by The Company of Biologists Ltd.
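The conversion from a mean siphon velocity to a pumping rate and Reynolds number follows directly from the siphon diameter. The velocity, diameter, and viscosity below are illustrative values, not measurements from the study:

```python
import math

NU_SEAWATER = 1.05e-6  # kinematic viscosity of seawater, m^2/s (approximate)

def siphon_flow(mean_velocity, diameter):
    """Pumping rate and Reynolds number for a cylindrical siphon:
    Q = v * pi * d^2 / 4 and Re = v * d / nu."""
    area = math.pi * diameter ** 2 / 4.0
    q = mean_velocity * area                      # m^3/s
    re = mean_velocity * diameter / NU_SEAWATER   # dimensionless
    return q * 3600 * 1000, re                    # convert Q to l/h

# Illustrative case: 5 cm/s mean velocity through a 5 mm siphon.
q_lh, re = siphon_flow(0.05, 0.005)
```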

  11. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments, with X-ray testing carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution; K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and that the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
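The stress-strength interference idea underlying such a reliability model can be sketched for the textbook case of independent Gaussian stress and strength; the paper's improved variant and its parameter values are not reproduced here, and the numbers below are invented:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Classical stress-strength interference for independent Gaussians:
    R = P(strength > stress) = Phi(z) with
    z = (mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2)."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative case: z = (500 - 350) / 50 = 3, so R = Phi(3).
r = interference_reliability(500.0, 40.0, 350.0, 30.0)
```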

  12. Experiment research on cognition reliability model of nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Bingquan; Fang Xiang

    1999-01-01

    The objective of this paper is to improve the operational reliability of operators in real nuclear power plants through simulation research on the cognition reliability of nuclear power plant operators. The method uses a nuclear power plant simulator as the research platform and, taking the current international human cognition reliability model based on the three-parameter Weibull distribution as a reference, develops a model for Chinese nuclear power plant operators based on the two-parameter Weibull distribution. Experiments on the cognition reliability of nuclear power plant operators were then performed with this two-parameter Weibull model. The results agree with those reported from other countries such as the USA and Hungary, which benefits the safe operation of nuclear power plants.

  13. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776

  14. A reliability-risk modelling of nuclear rad-waste facilities

    International Nuclear Information System (INIS)

    Lehmann, P.H.; El-Bassioni, A.A.

    1975-01-01

    Rad-waste disposal systems of nuclear power sites are designed and operated to collect, delay, contain, and concentrate radioactive wastes from reactor plant processes such that on-site and off-site exposures to radiation are well below permissible limits. To assist the designer in achieving minimum release/exposure goals, a computerized reliability-risk model has been developed to simulate the rad-waste system. The objectives of the model are to furnish a practical tool for quantifying the effects of changes in system configuration, operation, and equipment, and for identifying weak segments in the system design. Primarily, the model comprises a marriage of system analysis, reliability analysis, and release-risk assessment. Provisions have been included in the model to permit the optimization of the system design subject to constraints on cost and rad-releases. The system analysis phase involves the preparation of a physical and functional description of the rad-waste facility accompanied by the formation of a system tree diagram. The reliability analysis phase embodies the formulation of appropriate reliability models and the collection of model parameters. Release-risk assessment constitutes the analytical basis whereupon further system and reliability analyses may be warranted. Release-risk represents the potential for release of radioactivity and is defined as the product of an element's unreliability at time t and the radioactivity available for release in the time interval Δt. A computer code (RARISK) has been written to simulate the tree diagram of the rad-waste system. Reliability and release-risk results have been generated for cases which examined the process flow paths of typical rad-waste systems, the effects of repair and standby, the variations of equipment failure and repair rates, and changes in system configurations.
The essential feature of this model is that a complex system like the rad-waste facility can be easily decomposed into its
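The release-risk definition above (an element's unreliability at time t times the radioactivity available for release) can be sketched directly; the exponential unreliability model and the numbers are illustrative assumptions:

```python
import math

def release_risk(failure_rate, t, activity):
    """Release-risk as defined in the abstract: unreliability at time t
    times the radioactivity available for release in the interval.
    An exponential failure model is assumed here for illustration."""
    unreliability = 1.0 - math.exp(-failure_rate * t)
    return unreliability * activity

# Illustrative element: 1e-3 failures/h over 100 h, 50 activity units at risk.
risk = release_risk(1e-3, 100.0, 50.0)
```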

  15. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    Energy Technology Data Exchange (ETDEWEB)

    Wardaya, P. D., E-mail: pongga.wardaya@utp.edu.my; Noh, K. A. B. M.; Yusoff, W. I. B. W. [Petroleum Geosciences Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Ridha, S. [Petroleum Engineering Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Nurhandoko, B. E. B. [Wave Inversion and Subsurface Fluid Imaging Research Laboratory (WISFIR), Dept. of Physics, Institute of Technology Bandung, Bandung, Indonesia and Rock Fluid Imaging Lab, Bandung (Indonesia)

    2014-09-25

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach takes the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an artificial neural network technique was employed to build the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. An ultrasonic wave was then simulated propagating through the thin section image using the finite difference time domain method, under the assumption of an acoustic, isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity predictions from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate a pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic
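The Wyllie time-average model that the measured velocities are compared against is a harmonic average of slownesses, 1/V = φ/V_fluid + (1 − φ)/V_matrix. The fluid and matrix velocities below are typical illustrative values for water and a carbonate matrix, not the paper's inputs:

```python
def wyllie_velocity(porosity, v_fluid=1500.0, v_matrix=6500.0):
    """Wyllie time-average effective velocity (m/s). The model ignores
    pore structure, which is why it deviates at high porosity as the
    abstract notes; velocities here are illustrative assumptions."""
    return 1.0 / (porosity / v_fluid + (1.0 - porosity) / v_matrix)

# 20% porosity carbonate with water-filled pores.
v = wyllie_velocity(0.2)
```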

  16. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    International Nuclear Information System (INIS)

    Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.

    2014-01-01

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach takes the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an artificial neural network technique was employed to build the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. An ultrasonic wave was then simulated propagating through the thin section image using the finite difference time domain method, under the assumption of an acoustic, isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity predictions from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate a pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic wave

  17. Three-dimensional flow of a nanofluid over a permeable stretching/shrinking surface with velocity slip: A revised model

    Science.gov (United States)

    Jusoh, R.; Nazar, R.; Pop, I.

    2018-03-01

    A reformulation of the three-dimensional flow of a nanofluid employing Buongiorno's model is presented. A new boundary condition is implemented with the assumption that the nanoparticle mass flux at the surface is zero. This condition is practically more realistic, since the nanoparticle fraction at the boundary is then passively controlled. This study investigates the impact of velocity slip and suction on the flow and heat transfer characteristics of the nanofluid. The governing partial differential equations for momentum, energy, and concentration are reduced to ordinary differential equations by an appropriate transformation, and numerical solutions are obtained using the built-in bvp4c function in Matlab. Graphical illustrations display the physical influence of several nanofluid parameters on the flow velocity, temperature, and nanoparticle volume fraction profiles, as well as on the skin friction coefficient and the local Nusselt number. The study discovers the existence of dual solutions in a certain range of parameters. Surprisingly, both solutions merge at the stretching sheet, indicating that the presence of the velocity slip affects the skin friction coefficients. Stability analysis is carried out to determine the stability and reliability of the solutions; it is found that the first solution is stable while the second is not.

  18. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982: Development of a Conservative Model Validation Approach for Reliable Analysis. The approach aims to obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account ...

  19. Animal models of surgically manipulated flow velocities to study shear stress-induced atherosclerosis.

    Science.gov (United States)

    Winkel, Leah C; Hoogendoorn, Ayla; Xing, Ruoyu; Wentzel, Jolanda J; Van der Heiden, Kim

    2015-07-01

    Atherosclerosis is a chronic inflammatory disease of the arterial tree that develops at predisposed sites, coinciding with locations that are exposed to low or oscillating shear stress. Manipulating flow velocity, and concomitantly shear stress, has proven adequate to promote endothelial activation and subsequent plaque formation in animals. In this article, we will give an overview of the animal models that have been designed to study the causal relationship between shear stress and atherosclerosis by surgically manipulating blood flow velocity profiles. These surgically manipulated models include arteriovenous fistulas, vascular grafts, arterial ligation, and perivascular devices. We review these models of manipulated blood flow velocity from an engineering and biological perspective, focusing on the shear stress profiles they induce and the vascular pathology that is observed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This paper presents a comparison of the velocity increments generated by the rigid-body and flexible models of the MMET. The equations of motion of both models are transformed from the time domain into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid-body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  1. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during the conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing a system or product's reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  2. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    Science.gov (United States)

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to within 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, except for the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.

  3. A First Layered Crustal Velocity Model for the Western Solomon Islands: Inversion of Measured Group Velocity of Surface Waves using Ambient Noise Cross-Correlation

    Science.gov (United States)

    Ku, C. S.; Kuo, Y. T.; Chao, W. A.; You, S. H.; Huang, B. S.; Chen, Y. G.; Taylor, F. W.; Yih-Min, W.

    2017-12-01

    Two earthquakes, MW 8.1 in 2007 and MW 7.1 in 2010, hit the Western Province of the Solomon Islands and caused extensive damage; they also motivated us to set up the first seismic network in this area. During the first phase, eight broadband seismic stations (BBS) were installed around the rupture zone of the 2007 earthquake. Using one year of seismic records, we cross-correlated the vertical component of ambient noise recorded at our BBS and calculated Rayleigh-wave group velocity dispersion curves on inter-station paths. A genetic algorithm is applied to invert for a one-dimensional crustal velocity model by fitting the averaged dispersion curves. The one-dimensional crustal velocity model consists of two layers over a half-space, representing the upper crust, lower crust, and uppermost mantle, respectively. The resulting thicknesses of the upper and lower crust are 6.4 and 14.2 km, respectively. Shear-wave velocities (VS) of the upper crust, lower crust, and uppermost mantle are 2.53, 3.57 and 4.23 km/s, with VP/VS ratios of 1.737, 1.742 and 1.759, respectively. This first layered crustal velocity model can be used as a preliminary reference for further studies of seismic sources such as earthquake activity and tectonic tremor.
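
    The layer parameters quoted above can be cross-checked directly: VP follows from the stated VS and VP/VS ratio of each layer, and the Moho depth from the layer thicknesses. A small sketch using only the abstract's numbers:

```python
# Recompute VP and Moho depth from the layered model stated in the abstract.
layers = [  # (name, thickness_km, vs_km_s, vp_vs_ratio); None = half-space
    ("upper crust",       6.4, 2.53, 1.737),
    ("lower crust",      14.2, 3.57, 1.742),
    ("uppermost mantle", None, 4.23, 1.759),
]
for name, h, vs, ratio in layers:
    vp = vs * ratio  # VP = VS * (VP/VS)
    print(f"{name}: VP = {vp:.2f} km/s")
moho_depth = sum(h for _, h, _, _ in layers if h is not None)
print(f"Moho depth = {moho_depth:.1f} km")  # 6.4 + 14.2 = 20.6 km
```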

  4. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

    The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented, utilizing success-state and continuous-time, discrete-state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, subsystem redundant configurations, test and repair intervals, and monitoring by reactor operators, and for the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)

  5. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...
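
    The core construction named in this record can be sketched in a few lines: grow a Barabasi-Albert scale-free graph by preferential attachment, then take the surviving giant component after a node failure as a crude proxy for delivered service. This is a minimal illustration of the idea, not the authors' model or index:

```python
# Minimal sketch: Barabasi-Albert growth plus a single hub failure.
import random

def barabasi_albert(n, m, seed=1):
    """Grow an n-node graph, attaching each new node to up to m existing
    nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))  # seed nodes receive the first attachments
    repeated = []             # node list weighted by degree
    for new in range(m, n):
        for t in set(targets):
            edges.add((min(new, t), max(new, t)))
            repeated += [new, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def largest_component(nodes, edges):
    """Size of the largest connected component (iterative DFS)."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    seen, best = set(), 0
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], 0
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u); comp += 1
            stack.extend(adj[u] - seen)
        best = max(best, comp)
    return best

n = 200
edges = barabasi_albert(n, 2)
nodes = set(range(n))
full = largest_component(nodes, edges)
# Fail the highest-degree node: hub loss dominates in scale-free networks.
deg = {v: 0 for v in nodes}
for a, b in edges:
    deg[a] += 1; deg[b] += 1
hub = max(deg, key=deg.get)
after = largest_component(nodes - {hub},
                          {(a, b) for a, b in edges if hub not in (a, b)})
print(full, after)  # connectivity drops when a hub fails
```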

  6. Possibilities and limitations of applying software reliability growth models to safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2007-01-01

    It is generally known that software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto's Non-Homogeneous Poisson Process (NHPP) model cannot be applied to safety-critical software due to a lack of software failure data. In this paper, by applying two of the most widely known software reliability growth models to sample software failure data, we demonstrate the possibility of using the software reliability growth models to prove the high reliability of safety-critical software. The high sensitivity of a piece of software's reliability to software failure data, as well as a lack of sufficient software failure data, is also identified as a possible limitation when applying the software reliability growth models to safety-critical software
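
    The Goel-Okumoto NHPP model mentioned above has a closed-form mean value function, m(t) = a(1 - e^(-bt)), from which conditional reliability follows directly. A hedged sketch with illustrative parameter values (not taken from the paper):

```python
# Goel-Okumoto NHPP sketch: mean value function and conditional reliability.
# R(x | t) = exp(-(m(t + x) - m(t))) is the probability of no failure in
# (t, t + x] given testing up to time t. Parameters here are assumed.
import math

def m(t, a, b):
    """Expected cumulative number of failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """P(no failure in (t, t + x]) under the NHPP assumption."""
    return math.exp(-(m(t + x, a, b) - m(t, a, b)))

a, b = 100.0, 0.05  # assumed: 100 total expected faults, detection rate 0.05/day
print(round(m(30, a, b), 2))               # expected failures seen by day 30
print(round(reliability(1.0, 30, a, b), 3))  # P(no failure on day 31)
```

    The paper's point about data scarcity shows up here concretely: with only a handful of observed failures, the fitted (a, b) pair, and hence any reliability claim derived from it, is highly sensitive to each data point.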

  7. Assessing the impact of uncertainty on flood risk estimates with reliability analysis using 1-D and 2-D hydraulic models

    Directory of Open Access Journals (Sweden)

    L. Altarejos-García

    2012-07-01

    Full Text Available This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for estimating the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define the flood severity, a concept used for decision-making in the context of flood risk assessment. The proposed method is particularly useful when the complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as water depth and velocity is demonstrated for a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, allowing for probability estimates of flood severity. The method is then applied to the same river reach using a 2-D Shallow Water Equations (SWE) model. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and uncertainty in the extent of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it is the preferred approach.
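
    For uncorrelated, symmetrically distributed inputs, Rosenblueth's two-point-estimate method evaluates the performance function at all 2^n combinations of (mean ± standard deviation) and weights each point equally. A minimal sketch, using Manning's velocity formula as a toy performance function with assumed input statistics (the paper's hydraulic models are far more complex):

```python
# Rosenblueth PEM sketch for uncorrelated, symmetric inputs: 2^n equally
# weighted evaluation points at mean +/- std of each variable.
from itertools import product

def pem(g, means, stds):
    """Return PEM estimates of the mean and variance of g(X1, ..., Xn)."""
    vals = [g(*[mu + s * e for mu, s, e in zip(means, stds, signs)])
            for signs in product((-1.0, 1.0), repeat=len(means))]
    mean = sum(vals) / len(vals)
    var = sum(v * v for v in vals) / len(vals) - mean * mean
    return mean, var

def manning_velocity(n_rough, radius, slope):
    """Toy performance function: Manning's equation, v = R^(2/3) S^(1/2) / n."""
    return (1.0 / n_rough) * radius ** (2.0 / 3.0) * slope ** 0.5

mean, var = pem(manning_velocity,
                means=[0.035, 2.0, 0.001],    # roughness, hydraulic radius (m), slope
                stds=[0.005, 0.3, 0.0002])    # assumed input uncertainties
print(round(mean, 3), round(var, 4))  # PEM estimates of flow velocity moments
```

    With 3 inputs this costs 8 model runs, versus thousands for Monte Carlo, which is exactly the trade-off the abstract describes for expensive 2-D SWE models.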

  8. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  9. Models for reliability and management of NDT data

    International Nuclear Information System (INIS)

    Simola, K.

    1997-01-01

    In this paper the reliability of NDT measurements is approached from three directions: we model flaw sizing performance and the probability of flaw detection, and we develop models to update the knowledge of true flaw size based on sequential measurement results and the flaw sizing reliability model. In the models discussed, the measured flaw characteristics (depth, length) are assumed to be simple functions of the true characteristics plus random noise corresponding to measurement errors, and the models are based on logarithmic transforms. Models for Bayesian updating of the flaw size distributions were developed. Using these models, it is possible to take prior information on the flaw size into account and combine it with the measured results. A Bayesian approach could contribute, e.g., to the definition of an appropriate combination of practical assessments and technical justifications in NDT system qualifications, as expressed by the European regulatory bodies
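
    Under the logarithmic-transform assumption above, a lognormal prior on true flaw depth combined with a lognormal sizing-error model gives a conjugate normal-normal update on the log scale. A hedged sketch with illustrative numbers (not the paper's fitted error model):

```python
# Sequential Bayesian update of flaw depth on the log scale. With
# log(measured) = log(true) + bias + noise, noise ~ N(0, var_m), and a
# normal prior on log(true), the posterior is the precision-weighted mix.
import math

def update_log_normal(mu0, var0, log_meas, bias, var_m):
    """Posterior mean/variance of log(true depth) after one measurement."""
    var_post = 1.0 / (1.0 / var0 + 1.0 / var_m)
    mu_post = var_post * (mu0 / var0 + (log_meas - bias) / var_m)
    return mu_post, var_post

mu, var = math.log(5.0), 0.25   # assumed prior: median flaw depth 5 mm
for d in [6.1, 5.8]:            # two sequential NDT sizings (mm), assumed
    mu, var = update_log_normal(mu, var, math.log(d), bias=0.05, var_m=0.09)
print(round(math.exp(mu), 2), round(var, 3))  # posterior median depth, log-variance
```

    Each measurement shrinks the posterior variance, which is how prior information and repeated inspections are combined in the qualification setting the abstract describes.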

  10. Modelling Velocity Spectra in the Lower Part of the Planetary Boundary Layer

    DEFF Research Database (Denmark)

    Olesen, H.R.; Larsen, Søren Ejling; Højstrup, Jørgen

    1984-01-01

    of the planetary boundary layer. Knowledge of the variation with stability of the (reduced) frequency f, for the spectral maximum is utilized in this modelling. Stable spectra may be normalized so that they adhere to one curve only, irrespective of stability, and unstable w-spectra may also be normalized to fit...... one curve. The problem of using filtered velocity variances when modelling spectra is discussed. A simplified procedure to provide a first estimate of the filter effect is given. In stable, horizontal velocity spectra, there is often a ‘gap’ at low frequencies. Using dimensional considerations...... and the spectral model previously derived, an expression for the gap frequency is found....

  11. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, J [Cardiovascular Research Group Physics, University of New England, Armidale, NSW 2351 (Australia); Buick, J M [Department of Mechanical and Design Engineering, University of Portsmouth, Anglesea Building, Anglesea Road, Portsmouth PO1 3DJ (United Kingdom)

    2008-10-21

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  12. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    International Nuclear Information System (INIS)

    Boyd, J; Buick, J M

    2008-01-01

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  13. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  14. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally bivariate blending method was......, on the other hand, lighter than the single-step method....

  15. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    Science.gov (United States)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RFs inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. 
Our first focus of application is the Central Alps, where a 20-year long dataset of

  16. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated, and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. To obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
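
    The "cubic mean cube root" statistic has a closed form under the Weibull model: for wind speed v ~ Weibull(k, c), E[v^3] = c^3 Γ(1 + 3/k), and mean aerodynamic power scales with E[v^3]. A sketch with illustrative turbine figures (not the paper's data):

```python
# Monthly wind power sketch from a Weibull(k, c) wind-speed model.
import math

def cubic_mean_cube_root(k, c):
    """(E[v^3])^(1/3) for v ~ Weibull(shape k, scale c)."""
    return (c ** 3 * math.gamma(1.0 + 3.0 / k)) ** (1.0 / 3.0)

def mean_power_watts(k, c, rho=1.225, rotor_area=2000.0, cp=0.4):
    """Mean aerodynamic power 0.5 * rho * A * Cp * E[v^3]; figures assumed."""
    return 0.5 * rho * rotor_area * cp * c ** 3 * math.gamma(1.0 + 3.0 / k)

v_cmc = cubic_mean_cube_root(2.0, 8.0)  # Rayleigh-like month, scale 8 m/s
print(round(v_cmc, 2), round(mean_power_watts(2.0, 8.0) / 1e3, 1))  # m/s, kW
```

    Note that v_cmc exceeds the ordinary mean speed c·Γ(1 + 1/k), which is why cubing the mean speed underestimates energy yield.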

  17. Reliability of using nondestructive tests to estimate compressive strength of building stones and bricks

    Directory of Open Access Journals (Sweden)

    Ali Abd Elhakam Aliabdo

    2012-09-01

    Full Text Available This study aims to investigate the relationships of the Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus the compressive strength (fc) of stones and bricks. Four types of rocks (marble, pink limestone, white limestone and basalt) and two types of bricks (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using additional specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimating the compressive strength of the studied stones and bricks from their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
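
    The combined method amounts to a two-predictor linear regression, fc = b0 + b1·RN + b2·UPV, fitted by ordinary least squares. A sketch with made-up calibration data (the paper's specimens and coefficients are not reproduced here):

```python
# Combined RN + UPV strength estimation via OLS on the normal equations.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_combined(rn, upv, fc):
    """Least-squares fit of fc = b0 + b1*RN + b2*UPV (normal equations)."""
    X = [[1.0, r, u] for r, u in zip(rn, upv)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
           for i in range(3)]
    Xty = [sum(row[i] * y for row, y in zip(X, fc)) for i in range(3)]
    return solve(XtX, Xty)

# Hypothetical calibration set: rebound number, pulse velocity (km/s), fc (MPa)
rn  = [30, 35, 40, 45, 50, 55]
upv = [3.2, 3.6, 4.0, 4.3, 4.7, 5.0]
fc  = [18.0, 24.0, 31.0, 36.0, 43.0, 49.0]
b0, b1, b2 = fit_combined(rn, upv, fc)
pred = b0 + b1 * 42 + b2 * 4.1
print(round(pred, 1))  # estimated strength for RN=42, UPV=4.1 km/s
```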

  18. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models with two components of mass density, one of which is smoothly distributed, are examined, and the large-scale (≥10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  19. Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions

    Science.gov (United States)

    Kim, A.; Dreger, D.; Larsen, S.

    2008-12-01

    We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and the detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. The fault zone at Parkfield is also wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions for the 1989 Loma Prieta earthquake using both 1D and 3D Green's functions with the same source parameterization and data but different Green's functions, and found that the models were quite different. This indicates that the choice of velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions.
In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0

  20. Modular reliability modeling of the TJNAF personnel safety system

    International Nuclear Information System (INIS)

    Cinnamon, J.; Mahoney, K.

    1997-01-01

    A reliability model for the Thomas Jefferson National Accelerator Facility (formerly CEBAF) personnel safety system has been developed. The model, which was implemented using an Excel spreadsheet, allows simulation of all or parts of the system. The modularity of the model's implementation allows rapid "what if" case studies to simulate changes in safety system parameters such as redundancy, diversity, and failure rates. Particular emphasis is given to the prediction of failure modes which would result in the failure of both of the redundant safety interlock systems. In addition to calculating the predicted reliability of the safety system, the model also calculates the availability of the same system. Such calculations allow the user to make tradeoff studies between reliability and availability, and to target resources toward improving those parts of the system which would most benefit from redesign or upgrade. The model includes calculated data, manufacturer's data, and Jefferson Lab field data. This paper describes the model, the methods used, and a comparison of calculated to actual data for the Jefferson Lab personnel safety system. Examples are given to illustrate the model's utility and ease of use
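
    The modular composition idea behind such a spreadsheet can be sketched in a few lines: per-module reliabilities combine in series within each interlock chain, and the two redundant chains combine in parallel. The failure figures below are assumed for illustration, not the lab's data:

```python
# Series/parallel composition sketch for a redundant interlock system.

def series(*r):
    """All modules in the chain must work: product of reliabilities."""
    p = 1.0
    for ri in r:
        p *= ri
    return p

def parallel(*r):
    """At least one redundant chain must work: 1 - product of failures."""
    q = 1.0
    for ri in r:
        q *= (1.0 - ri)
    return 1.0 - q

chain = series(0.999, 0.998, 0.9995)  # sensor, logic, actuator (assumed, per demand)
system = parallel(chain, chain)       # two redundant interlock chains
print(round(chain, 5), round(system, 8))
```

    Swapping module figures in and out of `series`/`parallel` calls is exactly the kind of rapid "what if" study the abstract describes; a common-cause term would cap the parallel gain in a fuller model.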

  1. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of the failure probability density...
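
    The simplest consequence of an interval-valued constant failure rate can be written down directly: the exponential survival probability at any time t is itself bounded, with no upper bound on time to failure needed. A minimal sketch with illustrative rates:

```python
# Bounds on P(T > t) for an exponential life with rate known only to lie
# in [lam_lo, lam_hi]. Rates are illustrative, not from the paper.
import math

def survival_bounds(lam_lo, lam_hi, t):
    """exp(-lam*t) is decreasing in lam, so the rate interval maps to
    [exp(-lam_hi*t), exp(-lam_lo*t)]."""
    return math.exp(-lam_hi * t), math.exp(-lam_lo * t)

lo, hi = survival_bounds(1e-4, 5e-4, 1000.0)  # rates per hour, t = 1000 h
print(round(lo, 3), round(hi, 3))
```

    The paper's contribution goes beyond this monotone case, using Lagrange optimization when additional precise or imprecise constraints are imposed.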

  2. Modelling seasonal meltwater forcing of the velocity of land-terminating margins of the Greenland Ice Sheet

    Science.gov (United States)

    Koziol, Conrad P.; Arnold, Neil

    2018-03-01

    Surface runoff at the margin of the Greenland Ice Sheet (GrIS) drains to the ice-sheet bed, leading to enhanced summer ice flow. Ice velocities show a pattern of early summer acceleration followed by mid-summer deceleration due to evolution of the subglacial hydrology system in response to meltwater forcing. Modelling the integrated hydrological-ice dynamics system to reproduce measured velocities at the ice margin remains a key challenge for validating the present understanding of the system and constraining the impact of increasing surface runoff rates on dynamic ice mass loss from the GrIS. Here we show that a multi-component model incorporating supraglacial, subglacial, and ice dynamic components applied to a land-terminating catchment in western Greenland produces modelled velocities which are in reasonable agreement with those observed in GPS records for three melt seasons of varying melt intensities. This provides numerical support for the hypothesis that the subglacial system develops analogously to alpine glaciers and supports recent model formulations capturing the transition between distributed and channelized states. The model shows the growth of efficient conduit-based drainage up-glacier from the ice sheet margin, which develops more extensively, and further inland, as melt intensity increases. This suggests current trends of decadal-timescale slowdown of ice velocities in the ablation zone may continue in the near future. The model results also show a strong scaling between average summer velocities and melt season intensity, particularly in the upper ablation area. Assuming winter velocities are not impacted by channelization, our model suggests an upper bound of a 25 % increase in annual surface velocities as surface melt increases to 4 × present levels.

  3. Development of a State-Wide 3-D Seismic Tomography Velocity Model for California

    Science.gov (United States)

    Thurber, C. H.; Lin, G.; Zhang, H.; Hauksson, E.; Shearer, P.; Waldhauser, F.; Hardebeck, J.; Brocher, T.

    2007-12-01

    We report on progress towards the development of a state-wide tomographic model of the P-wave velocity for the crust and uppermost mantle of California. The dataset combines first arrival times from earthquakes and quarry blasts recorded on regional network stations and travel times of first arrivals from explosions and airguns recorded on profile receivers and network stations. The principal active-source datasets are Geysers-San Pablo Bay, Imperial Valley, Livermore, W. Mojave, Gilroy-Coyote Lake, Shasta region, Great Valley, Morro Bay, Mono Craters-Long Valley, PACE, S. Sierras, LARSE 1 and 2, Loma Prieta, BASIX, San Francisco Peninsula and Parkfield. Our beta-version model is coarse (uniform 30 km horizontal and variable vertical gridding) but is able to image the principal features in previous separate regional models for northern and southern California, such as the high-velocity subducting Gorda Plate, upper to middle crustal velocity highs beneath the Sierra Nevada and much of the Coast Ranges, the deep low-velocity basins of the Great Valley, Ventura, and Los Angeles, and a high- velocity body in the lower crust underlying the Great Valley. The new state-wide model has improved areal coverage compared to the previous models, and extends to greater depth due to the data at large epicentral distances. We plan a series of steps to improve the model. We are enlarging and calibrating the active-source dataset as we obtain additional picks from investigators and perform quality control analyses on the existing and new picks. We will also be adding data from more quarry blasts, mainly in northern California, following an identification and calibration procedure similar to Lin et al. (2006). Composite event construction (Lin et al., in press) will be carried out for northern California for use in conventional tomography. 
A major contribution of the state-wide model is the identification of earthquakes yielding arrival times at both the Northern California Seismic

  4. Velocity Loss as a Variable for Monitoring Resistance Exercise.

    Science.gov (United States)

    González-Badillo, Juan José; Yañez-García, Juan Manuel; Mora-Custodio, Ricardo; Rodríguez-Rosell, David

    2017-03-01

    This study aimed to analyze: 1) the pattern of repetition velocity decline during a single set to failure against different submaximal loads (50-85% 1RM) in the bench press exercise; and 2) the reliability of the percentage of performed repetitions, with respect to the maximum possible number that can be completed, when different magnitudes of velocity loss have been reached within each set. Twenty-two men performed 8 tests of maximum number of repetitions (MNR) against loads of 50-55-60-65-70-75-80-85% 1RM, in random order, every 6-7 days. Another 28 men performed 2 separate MNR tests against 60% 1RM. A very close relationship was found between the relative loss of velocity in a set and the percentage of performed repetitions. This relationship was very similar for all loads, but particularly for 50-70% 1RM, even though the number of repetitions completed at each load was significantly different. Moreover, the percentage of performed repetitions for a given velocity loss showed a high absolute reliability. Equations to predict the percentage of performed repetitions from relative velocity loss are provided. By monitoring repetition velocity and using these equations, one can estimate, with considerable precision, how many repetitions are left in reserve in a bench press exercise set. © Georg Thieme Verlag KG Stuttgart · New York.

  5. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

    The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated...... and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed as well as other recent models....

  6. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended for reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between...... was applied for the systems failure functions estimation. It is desired to compare the results with the true system failure function, which is possible to estimate using simulation techniques. Theoretical model development should be applied in further research. One of the directions for it might...... be modeling the system based on the Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea to represent the system by the independent components could also be used for modeling reliability by Sequential Order Statistics.

  7. Modeling and Velocity Tracking Control for Tape Drive System ...

    African Journals Online (AJOL)

Modeling and Velocity Tracking Control for Tape Drive System. ... Journal of Applied Sciences and Environmental Management ... The result of the study revealed that 7.07, 8 and 10 of koln values met the design goal and also resulted in optimal control performance with the following characteristics: 7.31%, 7.71%, 9.41% ...

  8. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

As the use of software increases at nuclear power plants (NPPs), the necessity for including software reliability and/or safety into the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined if a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.

  9. In Vivo Validation of a Blood Vector Velocity Estimator with MR Angiography

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten

    2009-01-01

    Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle under examination close to 90° results in unreliable information about the true blood direction and blood velocity. The novel method...... indicate that reliable vector velocity estimates can be obtained in vivo using the presented angle-independent 2-D vector velocity method. The TO method can be a useful alternative to conventional Doppler systems by avoiding the angle artifact, thus giving quantitative velocity information....

  10. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    Aero-engine is a complex mechanical electronic system, based on analysis of reliability of mechanical electronic system, Weibull distribution model has an irreplaceable role. Till now, only two-parameter Weibull distribution model and three-parameter Weibull distribution are widely used. Due to diversity of engine failure modes, there is a big error with single Weibull distribution model. By contrast, a variety of engine failure modes can be taken into account with mixed Weibull distribution model, so it is a good statistical analysis model. Except the concept of dynamic weight coefficient, in order to make reliability estimation result more accurately, three-parameter correlation coefficient optimization method is applied to enhance Weibull distribution model, thus precision of mixed distribution reliability model is improved greatly. All of these are advantageous to popularize Weibull distribution model in engineering applications.
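The mixed Weibull idea above, a weighted sum of Weibull reliability functions covering several failure modes, can be sketched as follows. The weights and shape/scale parameters below are illustrative placeholders, not fitted aero-engine values.

```python
import math

def mixed_weibull_reliability(t, components):
    """Reliability under a mixed Weibull model:
    R(t) = sum_i w_i * exp(-(t / eta_i) ** beta_i),
    where `components` is a list of (weight, beta, eta) tuples and the
    weights sum to 1. Illustrative sketch only."""
    assert abs(sum(w for w, _, _ in components) - 1.0) < 1e-9
    return sum(w * math.exp(-(t / eta) ** beta) for w, beta, eta in components)

# Two hypothetical failure modes: early wear-in (beta < 1) and wear-out (beta > 1)
modes = [(0.3, 0.8, 500.0), (0.7, 2.5, 2000.0)]
```

Because each mode keeps its own shape and scale, the mixture can fit failure data that a single two- or three-parameter Weibull distribution cannot, which is the abstract's motivation.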

  11. Modification of Spalart-Allmaras model with consideration of turbulence energy backscatter using velocity helicity

    International Nuclear Information System (INIS)

    Liu, Yangwei; Lu, Lipeng; Fang, Le; Gao, Feng

    2011-01-01

The correlation between the velocity helicity and the energy backscatter is demonstrated in a DNS case of 256³-grid homogeneous isotropic decaying turbulence. The helicity is then proposed as a means to improve turbulence models and SGS models. The Spalart-Allmaras (SA) turbulence model is modified with the helicity to take account of the energy backscatter, which is significant in the region of corner separation in compressors. By comparing the numerical results with experiments, it can be concluded that the modification of the SA model with helicity appropriately represents the energy backscatter, and greatly improves the predictive accuracy for simulating the corner separation flow in compressors. -- Highlights: → We study the correlation between the velocity helicity and the energy backscatter. → The Spalart-Allmaras turbulence model is modified with the velocity helicity. → The modified model is employed to simulate corner separation in a compressor cascade. → The modification can greatly improve the accuracy for predicting corner separation. → The helicity can represent the energy backscatter in turbulence and SGS models.

  12. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    Science.gov (United States)

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.
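The S-wave starting model described above is just an element-wise scaling of the P-wave grid by the assumed VP/VS ratio of 1.732 (≈ √3). A minimal sketch, with placeholder grid values:

```python
# Derive a starting S-wave model from a P-wave velocity grid using the
# fixed VP/VS ratio of 1.732 stated in the abstract. Grid values are
# placeholders, not the actual California model nodes.
VP_VS = 1.732

def starting_vs_model(vp_grid):
    """vp_grid: nested lists of VP node values (km/s); returns VS = VP / 1.732."""
    return [[vp / VP_VS for vp in row] for row in vp_grid]

vp = [[5.8, 6.1], [6.4, 7.9]]
vs = starting_vs_model(vp)
```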

  13. Velocity-mass correlation of the O-type stars: model results

    International Nuclear Information System (INIS)

    Stone, R.C.

    1982-01-01

This paper presents new model results describing the evolution of massive close binaries from their initial ZAMS to post-supernova stages. Unlike the previous conservative study by Stone [Astrophys. J. 232, 520 (1979) (Paper II)], these results allow explicitly for mass loss from the binary system occurring during the core hydrogen- and helium-burning stages of the primary binary star as well as during the Roche lobe overflow. Because of uncertainties in these rates, model results are given for several reasonable choices for these rates. All of the models consistently predict an increasing relation between the peculiar space velocities and masses for runaway OB stars which agrees well with the observed correlations discussed in Stone [Astron. J. 86, 544 (1981) (Paper III)] and also predict a lower limit at M ≈ 11 M_sun for the masses of runaway stars, in agreement with the observational limit found by A. Blaauw (Bull. Astron. Inst. Neth. 15, 265, 1961), both of which support the binary-supernova scenario described by van den Heuvel and Heise for the origin of runaway stars. These models also predict that the more massive O stars will produce correspondingly more massive compact remnants, and that most binaries experiencing supernova-induced kick velocities of magnitude V_k ≳ 300 km s^-1 will disrupt following the explosions. The best estimate for this velocity as established from pulsar observations is V_k ≈ 150 km s^-1, in which case probably only 15% of these binaries will be disrupted by the supernova explosions, and therefore, almost all runaway stars should have either neutron star or black hole companions.

  14. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
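The paper's equilibrium-distribution construction is not reproduced here, but the baseline NHPP software reliability model it builds on, for example the classic Goel-Okumoto form, can be sketched as:

```python
import math

def goel_okumoto_mean(t, a, b):
    """Mean number of faults detected by test time t under the classic
    Goel-Okumoto NHPP software reliability model: m(t) = a * (1 - exp(-b t)).
    a = expected total fault content, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def expected_remaining_faults(t, a, b):
    """Since a is the expected total fault content, a - m(t) faults remain,
    the quantity used for release-timing decisions."""
    return a - goel_okumoto_mean(t, a, b)
```

Different NHPP-based SRMs differ only in the mean value function m(t); the proposed approach in the abstract replaces the fault-detection time distribution inside m(t) with its equilibrium distribution.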

  15. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution (in this paper a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)

  16. On modeling human reliability in space flights - Redundancy and recovery operations

    Science.gov (United States)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
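As a minimal sketch of the standby-redundancy concept (not the authors' psychological model), consider two identical exponential units where the backup operator takes over successfully only with some probability less than one:

```python
import math

def standby_redundancy_reliability(t, lam, p_switch):
    """Two identical units with exponential failure rate `lam`; a cold
    standby takes over on primary failure with probability `p_switch`
    (imperfect human/automatic switchover). Standard standby formula:
    R(t) = exp(-lam * t) * (1 + p_switch * lam * t).
    Illustrative sketch, not the paper's model."""
    return math.exp(-lam * t) * (1.0 + p_switch * lam * t)
```

With p_switch < 1 the system gains less than idealized standby redundancy would suggest, mirroring the paper's point that naively treating a human backup like a redundant technical component yields non-conservative reliability estimates.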

  17. Lane-changing behavior and its effect on energy dissipation using full velocity difference model

    Science.gov (United States)

    Wang, Jian; Ding, Jian-Xun; Shi, Qin; Kühne, Reinhart D.

    2016-07-01

    In real urban traffic, roadways are usually multilane with lane-specific velocity limits. Most previous researches are derived from single-lane car-following theory which in the past years has been extensively investigated and applied. In this paper, we extend the continuous single-lane car-following model (full velocity difference model) to simulate the three-lane-changing behavior on an urban roadway which consists of three lanes. To meet incentive and security requirements, a comprehensive lane-changing rule set is constructed, taking safety distance and velocity difference into consideration and setting lane-specific speed restriction for each lane. We also investigate the effect of lane-changing behavior on distribution of cars, velocity, headway, fundamental diagram of traffic and energy dissipation. Simulation results have demonstrated asymmetric lane-changing “attraction” on changeable lane-specific speed-limited roadway, which leads to dramatically increasing energy dissipation.
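The single-lane core of this extension, the full velocity difference (FVD) model, prescribes each car's acceleration from an optimal-velocity term and the velocity difference to the leader. A minimal sketch, using the common Helbing-Tilch optimal-velocity calibration rather than the paper's lane-specific parameters:

```python
import math

def fvd_acceleration(gap, v, dv, kappa=0.41, lam=0.5):
    """Full velocity difference (FVD) car-following model:
    dv/dt = kappa * (V_opt(gap) - v) + lam * dv,
    where gap is the headway to the leader, v the car's speed and dv the
    velocity difference to the leader. The tanh optimal-velocity function
    uses textbook (Helbing-Tilch) values, not the paper's parameters."""
    v_opt = 6.75 + 7.91 * math.tanh(0.13 * (gap - 5.0) - 1.57)
    return kappa * (v_opt - v) + lam * dv
```

Lane-specific speed limits, as in the paper, would simply cap V_opt per lane; the lane-changing rule set then compares gaps and velocity differences across adjacent lanes.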

  18. Collective cell migration without proliferation: density determines cell velocity and wave velocity

    Science.gov (United States)

    Tlili, Sham; Gauquelin, Estelle; Li, Brigitte; Cardoso, Olivier; Ladoux, Benoît; Delanoë-Ayari, Hélène; Graner, François

    2018-05-01

Collective cell migration contributes to embryogenesis, wound healing and tumour metastasis. Cell monolayer migration experiments help in understanding what determines the movement of cells far from the leading edge. Inhibiting cell proliferation limits cell density increase and prevents jamming; we observe long-duration migration and quantify space-time characteristics of the velocity profile over large length scales and time scales. Velocity waves propagate backwards and their frequency depends only on cell density at the moving front. Both cell average velocity and wave velocity increase linearly with the cell effective radius regardless of the distance to the front. Inhibiting lamellipodia decreases cell velocity while waves either disappear or have a lower frequency. Our model combines conservation laws, monolayer mechanical properties and a phenomenological coupling between strain and polarity: advancing cells pull on their followers, which then become polarized. With reasonable values of parameters, this model agrees with several of our experimental observations. Together, our experiments and model disentangle the respective contributions of active velocity and of proliferation in monolayer migration, explain how cells maintain their polarity far from the moving front, and highlight the importance of strain-polarity coupling and density in long-range information propagation.

  19. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
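The abstract's key finding, that non-exponential holding times require a semi-Markov rather than Markov model, can be illustrated by simulating sojourn times drawn from Weibull distributions. The shape/scale parameters below are illustrative, not the measured IBM 3081 data.

```python
import random

def simulate_semi_markov_cycle(shape_op, scale_op, shape_err, scale_err,
                               n=10000, seed=1):
    """Estimate the mean length of one operational+error cycle when the
    holding times in each state are Weibull rather than exponential,
    i.e. a semi-Markov process. Parameters are illustrative only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # random.weibullvariate takes (scale, shape)
        total += rng.weibullvariate(scale_op, shape_op)    # operational state
        total += rng.weibullvariate(scale_err, shape_err)  # error/recovery state
    return total / n
```

A pure Markov model would force both holding times to be exponential (Weibull shape = 1); the measured data's non-exponential holding times are exactly what makes that assumption fail.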

  20. Modeling delamination of FRP laminates under low velocity impact

    Science.gov (United States)

    Jiang, Z.; Wen, H. M.; Ren, S. L.

    2017-09-01

Fiber reinforced plastic (FRP) laminates have been increasingly used in various engineering fields such as aeronautics, astronautics, transportation and naval architecture, and their impact response and failure are a major concern in the academic community. A new numerical model is suggested for fiber reinforced plastic composites. The model treats an FRP laminate as unidirectional laminated plates bonded by adhesive layers. A modified adhesive layer damage model that considers strain rate effects is incorporated into the ABAQUS/EXPLICIT finite element program through the user-defined material subroutine VUMAT. The delamination predicted by the present model is in good agreement with the experimental results for low velocity impact.

  1. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

As digital systems are gradually introduced to nuclear power plants (NPPs), the need to quantitatively analyze the reliability of the digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software that is used in safety-critical digital systems), Bayesian Belief Networks (BBNs) seem to be most widely used. The use of BBNs in reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, we can use a process of directly estimating the reliability of the software using various software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally known that software reliability growth models cannot be applied to safety-critical software due to the small number of failure data expected from the testing of safety-critical software, we try to find possibilities and corresponding limitations of applying software reliability growth models to safety-critical software.

  2. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components or redundant schemes (to obtain more reliability, strength, more power, etc.). A reliability model of the SNS (Spallation Neutron Source) LINAC has been developed within the MAX project, and an analysis of the accelerator systems' reliability has been performed using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with the SNS operational data. Results and conclusions are presented in this paper, oriented to identify design weaknesses and provide recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.

  3. Reliability of provocative tests of motion sickness susceptibility

    Science.gov (United States)

    Calkins, D. S.; Reschke, M. F.; Kennedy, R. S.; Dunlop, W. P.

    1987-01-01

    Test-retest reliability values were derived from motion sickness susceptibility scores obtained from two successive exposures to each of three tests: (1) Coriolis sickness sensitivity test; (2) staircase velocity movement test; and (3) parabolic flight static chair test. The reliability of the three tests ranged from 0.70 to 0.88. Normalizing values from predictors with skewed distributions improved the reliability.

  4. Velocity Model Analysis Based on Integrated Well and Seismic Data of East Java Basin

    Science.gov (United States)

    Mubin, Fathul; Widya, Aviandy; Eka Nurcahya, Budi; Nurul Mahmudah, Erma; Purwaman, Indro; Radityo, Aryo; Shirly, Agung; Nurwani, Citra

    2018-03-01

    Time to depth conversion is an important processof seismic interpretationtoidentify hydrocarbonprospectivity. Main objectives of this research are to minimize the risk of error in geometry and time to depth conversion. Since it’s using a large amount of data and had been doing in the large scale of research areas, this research can be classified as a regional scale research. The research was focused on three horizons time interpretation: Top Kujung I, Top Ngimbang and Basement which located in the offshore and onshore areas of east Java basin. These three horizons was selected because they were assumed to be equivalent to the rock formation, which is it has always been the main objective of oil and gas exploration in the East Java Basin. As additional value, there was no previous works on velocity modeling for regional scale using geological parameters in East Java basin. Lithology and interval thickness were identified as geological factors that effected the velocity distribution in East Java Basin. Therefore, a three layer geological model was generated, which was defined by the type of lithology; carbonate (layer 1: Top Kujung I), shale (layer 2: Top Ngimbang) and Basement. A statistical method using three horizons is able to predict the velocity distribution on sparse well data in a regional scale. The average velocity range for Top Kujung I is 400 m/s - 6000 m/s, Top Ngimbang is 500 m/s - 8200 m/s and Basement is 600 m/s - 8000 m/s. Some velocity anomalies found in Madura sub-basin area, caused by geological factor which identified as thick shale deposit and high density values on shale. Result of velocity and depth modeling analysis can be used to define the volume range deterministically and to make geological models to prospect generation in details by geological concept.

  5. Agradient velocity, vortical motion and gravity waves in a rotating shallow-water model

    Science.gov (United States)

    Sutyrin Georgi, G.

    2004-07-01

A new approach to modelling slow vortical motion and fast inertia-gravity waves is suggested within the rotating shallow-water primitive equations with arbitrary topography. The velocity is exactly expressed as a sum of the gradient wind, described by the Bernoulli function, B, and the remaining agradient part, proportional to the velocity tendency. Then the equation for inverse potential vorticity, Q, as well as momentum equations for agradient velocity include the same source of intrinsic flow evolution expressed as a single term J(B, Q), where J is the Jacobian operator (for any steady state J(B, Q) = 0). Two components of agradient velocity are responsible for the fast inertia-gravity wave propagation, similar to the traditionally used divergence and ageostrophic vorticity. This approach allows for the construction of balance relations for vortical dynamics and potential vorticity inversion schemes even for moderate Rossby and Froude numbers, assuming the characteristic value ε of |J(B, Q)| to be small. The components of agradient velocity are used as the fast variables slaved to potential vorticity, which allows for diagnostic estimates of the velocity tendency, the direct potential vorticity inversion with accuracy of order ε², and the corresponding potential vorticity-conserving agradient velocity balance model (AVBM). The ultimate limitations of constructing the balance are revealed in the form of the ellipticity condition for the balanced tendency of the Bernoulli function, which incorporates both known criteria of formal stability: the gradient wind modified by the characteristic vortical Rossby wave phase speed should be subcritical. The accuracy of the AVBM is illustrated by considering the linear normal modes and coastal Kelvin waves in the f-plane channel with topography.

  6. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming an indispensable tool for modeling and analyzing (critical) systems. However the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but they must be also easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN based reliability formalism is a powerful potential solution to modeling and analyzing various kinds of system components behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis
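The discrete-time idea, discretizing component failure times into slices and combining component states through the system structure, can be sketched for the simplest case of independent components. The full BN formalism in the paper also captures dependencies between components, which this sketch omits.

```python
def discrete_time_system_unreliability(p_fail, n_slices, series=True):
    """Discrete-time reliability sketch: component i fails in any given
    time slice with per-slice probability p_fail[i], independently.
    Returns P(system failed by slice n) for a series or parallel system.
    Independent-components simplification of the BN approach."""
    # P(component i still working after n slices)
    surv = [(1.0 - p) ** n_slices for p in p_fail]
    if series:
        r = 1.0
        for s in surv:
            r *= s          # series: every component must survive
        return 1.0 - r
    q = 1.0
    for s in surv:
        q *= (1.0 - s)      # parallel: every component must fail
    return q
```

In the BN formulation, each component node holds a discretized failure-time distribution and the system node's conditional probability table encodes the structure function; dependencies are then just extra arcs between component nodes.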

  7. Simultaneous inversion for hypocenters and lateral velocity variation: An iterative solution with a layered model

    Energy Technology Data Exchange (ETDEWEB)

    Hawley, B.W.; Zandt, G.; Smith, R.B.

    1981-08-10

An iterative inversion technique has been developed that uses the direct P and S wave arrival times from local earthquakes to compute simultaneously a three-dimensional velocity structure and relocated hypocenters. Crustal structure is modeled by subdividing flat layers into rectangular blocks. An interpolation function is used to smoothly vary velocities between blocks, allowing ray trace calculations of travel times in a three-dimensional medium. Tests using synthetic data from known models show that solutions are reasonably independent of block size and spatial distribution but are sensitive to the choice of layer thicknesses. Application of the technique to observed earthquake data from north-central Utah shows the following: (1) lateral velocity variations in the crust as large as 7% occur over 30-km distances, (2) earthquake epicenters computed with the three-dimensional velocity structure were shifted an average of 3.0 km from locations determined assuming homogeneous flat layered models, and (3) the laterally varying velocity structure correlates with anomalous variations in the local gravity and aeromagnetic fields, suggesting that the new velocity information can be valuable in acquiring a better understanding of crustal structure.

  8. RadVel: The Radial Velocity Modeling Toolkit

    Science.gov (United States)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-04-01

    RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
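For orientation, the Keplerian signal that RadVel fits has a simple closed form in the circular-orbit case. The sketch below is generic and does not use RadVel's actual API:

```python
import math

def circular_orbit_rv(t, period, k_amp, t0=0.0, gamma=0.0):
    """Radial velocity of a star hosting one planet on a circular orbit:
    RV(t) = K * sin(2*pi*(t - t0) / P) + gamma,
    with semi-amplitude K, orbital period P, reference epoch t0 and
    systemic velocity gamma. Generic Keplerian sketch, not RadVel's API."""
    return k_amp * math.sin(2.0 * math.pi * (t - t0) / period) + gamma
```

Eccentric orbits add the eccentricity and argument of periastron as free parameters; tools like RadVel then find the posterior over all of these given an observed RV timeseries.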

  9. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  10. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    considered include: the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte-Carlo study of the behavior of the least-squares ... Proceedings Number 261, 1979, pp. 34-1, 34-11. Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980

  11. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

    The appearance of new service types and the convergence tendency of communication networks have endowed networks with more and more P2P (peer-to-peer) properties. These networks can be more robust and tolerant of a series of non-perfect operational states due to the non-deterministic server-client distributions. Thus a reliability model taking into account the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of the networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating the traditional binary-state network reliability parameters, but also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiency, especially for highly reliable networks. Furthermore, the model is versatile for reliability and maintainability analyses in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.
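
    The Monte Carlo idea underlying this kind of network reliability analysis can be sketched in its simplest binary-state form: sample which links are up, check connectivity, and average. The bridge topology and probabilities below are invented for illustration, not taken from the paper.

```python
import random

# Toy Monte Carlo estimate of two-terminal network reliability:
# probability that source and sink remain connected when each
# link fails independently. A binary-state stand-in for TEST-MC.

def connected(up_links, src, dst):
    seen, stack = {src}, [src]
    while stack:                       # depth-first search over up links
        u = stack.pop()
        for a, b in up_links:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return dst in seen

def two_terminal_reliability(links, p_up, src, dst, trials=20000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = [l for l in links if rng.random() < p_up]
        ok += connected(up, src, dst)
    return ok / trials

# Classic bridge network 0-3, each link up with probability 0.9;
# the exact answer is 2p^2 + 2p^3 - 5p^4 + 2p^5 = 0.97848.
links = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
r = two_terminal_reliability(links, 0.9, 0, 3)
```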

  12. Shallow Crustal Structure in the Northern Salton Trough, California: Insights from a Detailed 3-D Velocity Model

    Science.gov (United States)

    Ajala, R.; Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2017-12-01

    The Coachella Valley is the northern extent of the Gulf of California-Salton Trough. It contains the southernmost segment of the San Andreas Fault (SAF), for which a magnitude 7.8 earthquake rupture was modeled to help produce earthquake planning scenarios. However, discrepancies in ground motion and travel-time estimates from the current Southern California Earthquake Center (SCEC) velocity model of the Salton Trough highlight inaccuracies in its shallow velocity structure. An improved 3-D velocity model that better defines the shallow basin structure and enables the more accurate location of earthquakes and identification of faults is therefore essential for seismic hazard studies in this area. We used recordings of 126 explosive shots from the 2011 Salton Seismic Imaging Project (SSIP) to SSIP receivers and Southern California Seismic Network (SCSN) stations. A set of 48,105 P-wave travel time picks constituted the highest-quality input to a 3-D tomographic velocity inversion. To improve the ray coverage, we added network-determined first arrivals at SCSN stations from 39,998 recently relocated local earthquakes, selected to a maximum focal depth of 10 km, to develop a detailed 3-D P-wave velocity model for the Coachella Valley with 1-km grid spacing. Our velocity model shows good resolution (> 50 rays/cubic km) down to a minimum depth of 7 km. Depth slices from the velocity model reveal several interesting features. At shallow depths (< 3 km), we observe an elongated trough of low velocity, attributed to sediments, located subparallel to and a few km SW of the SAF, and a general velocity structure that mimics the surface geology of the area. The persistence of the low-velocity sediments to 5-km depth just north of the Salton Sea suggests that the underlying basement surface, shallower to the NW, dips SE, consistent with interpretation from gravity studies (Langenheim et al., 2005). On the western side of the Coachella Valley, we detect depth-restricted regions of

  13. Developing regionalized models of lithospheric thickness and velocity structure across Eurasia and the Middle East from jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities

    Energy Technology Data Exchange (ETDEWEB)

    Julia, J; Nyblade, A; Hansen, S; Rodgers, A; Matzel, E

    2009-07-06

    In this project, we are developing models of lithospheric structure for a wide variety of tectonic regions throughout Eurasia and the Middle East by regionalizing 1D velocity models obtained by jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities. We expect the regionalized velocity models will improve our ability to predict travel-times for local and regional phases, such as Pg, Pn, Sn and Lg, as well as travel-times for body-waves at upper mantle triplication distances in both seismic and aseismic regions of Eurasia and the Middle East. We anticipate the models will help inform and strengthen ongoing and future efforts within the NNSA labs to develop 3D velocity models for Eurasia and the Middle East, and will assist in obtaining model-based predictions where no empirical data are available and for improving locations from sparse networks using kriging. The codes needed to conduct the joint inversion of P-wave receiver functions (PRFs), S-wave receiver functions (SRFs), and dispersion velocities have already been assembled as part of ongoing research on lithospheric structure in Africa. The methodology has been tested with synthetic 'data' and case studies have been investigated with data collected at open broadband stations in South Africa. PRFs constrain the size and S-P travel-time of seismic discontinuities in the crust and uppermost mantle, SRFs constrain the size and P-S travel-time of the lithosphere-asthenosphere boundary, and dispersion velocities constrain average S-wave velocity within frequency-dependent depth-ranges. Preliminary results show that the combination yields integrated 1D velocity models local to the recording station, where the discontinuities constrained by the receiver functions are superimposed on a background velocity model constrained by the dispersion velocities. In our first year of this project we will (i) generate 1D velocity models for open broadband seismic stations

  14. Modelling of two-phase flow based on separation of the flow according to velocity

    International Nuclear Information System (INIS)

    Narumo, T.

    1997-01-01

    The thesis concentrates on the development work of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows, with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated equally with the conventional model because they are distributed very accurately according to the phases, but momentum fluctuations follow better the flow velocity. Submodels for non-uniform transverse profile of velocity and density, slip between the phases within each subflow and turbulence between the subflows have been derived. The model system is hyperbolic in any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the used experimental data on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as of the best virtual-mass models that are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved to be applicable in describing two-phase flow physically correctly because both the dynamics and steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, taking place in versatile form e.g. in nuclear reactors
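
    The hyperbolicity property mentioned above can be made concrete with a toy computation: a 1-D two-equation system u_t + A u_x = 0 is hyperbolic when the 2x2 matrix A has real eigenvalues, and those eigenvalues are the characteristic wave speeds. The matrix below is a made-up example, not the SFAV model's actual Jacobian.

```python
import math

# Eigenvalues of a 2x2 matrix via the characteristic polynomial;
# real eigenvalues <=> the two-equation model is hyperbolic, and the
# eigenvalues are the characteristic (wave) speeds.

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]]; raises if they are complex."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc < 0:
        raise ValueError("complex eigenvalues: system not hyperbolic")
    r = math.sqrt(disc)
    return (tr - r) / 2.0, (tr + r) / 2.0

# Illustrative matrix: characteristic speeds come out as 0 and 4.
lam1, lam2 = eig2(2.0, 1.0, 4.0, 2.0)
```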

  15. Calculation of pressure gradients from MR velocity data in a laminar flow model

    International Nuclear Information System (INIS)

    Adler, R.S.; Chenevert, T.L.; Fowlkes, J.B.; Pipe, J.G.; Rubin, J.M.

    1990-01-01

    This paper reports on the ability of current imaging modalities to provide velocity-distribution data that offers the possibility of noninvasive pressure-gradient determination from an appropriate rheologic model of flow. A simple laminar flow model is considered at low Reynolds number (Re), giving (dp/dz)calc = 0.59 + 1.13 × (dp/dz)meas, R² = 0.994, in units of dyne/cm²/cm for the range of flows considered. The authors' results indicate the potential usefulness of noninvasive pressure-gradient determinations from quantitative analysis of imaging-derived velocity data
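
    The rheologic-model step can be illustrated with the textbook laminar (Poiseuille) case: the parabolic profile v(r) = v_max (1 - r²/R²) ties the measured peak velocity directly to the pressure gradient, dp/dz = -4 μ v_max / R². The helper and numbers below are illustrative, not those of the cited experiment.

```python
# Pressure gradient from a measured peak velocity, assuming steady
# laminar pipe flow (Poiseuille): dp/dz = -4 * mu * v_max / R^2.
# CGS units to match the abstract's dyne/cm^2/cm.

def pressure_gradient(v_max_cm_s, radius_cm, viscosity_poise):
    """Return dp/dz in dyne/cm^2 per cm for laminar pipe flow."""
    return -4.0 * viscosity_poise * v_max_cm_s / radius_cm ** 2

# Illustrative values: 10 cm/s peak velocity, 0.5 cm tube radius,
# a water-like viscosity of 0.04 P -> dp/dz = -6.4 dyne/cm^2/cm.
dpdz = pressure_gradient(v_max_cm_s=10.0, radius_cm=0.5, viscosity_poise=0.04)
```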

  16. Seismic Tomography and the Development of a State Velocity Profile

    Science.gov (United States)

    Marsh, S. J.; Nakata, N.

    2017-12-01

    Earthquakes have been a growing concern in the State of Oklahoma in the last few years, and as a result, accurate earthquake location is of utmost importance. This means using a high-resolution velocity model with both lateral and vertical variations. Velocity data are determined using ambient-noise seismic interferometry and tomography. Passive seismic data were acquired from multiple IRIS networks over the span of eight years (2009-2016) and filtered for earthquake removal to obtain the background ambient noise profile for the state. Seismic interferometry is applied to simulate ray paths between stations; this is done for each possible station pair for the highest resolution. Finally, the method of seismic tomography is used to extract the velocity data and develop the state velocity map. The final velocity profile will be a compilation of different network analyses due to changing station availability from year to year. North-central Oklahoma has a dense seismic network that has been operating for the past few years. The seismic stations are located here because this is the most seismically active region. Other parts of the state have not had consistent coverage from year to year, and as such a reliable and high-resolution velocity profile cannot be determined from this network. However, the Transportable Array (TA) passed through Oklahoma in 2014 and provided much wider and more evenly spaced coverage. The goal of this study is to ultimately combine these two arrays over time and provide a high-quality velocity profile for the State of Oklahoma.
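
    The interferometry step rests on a simple fact: the cross-correlation of diffuse noise recorded at two stations peaks at the inter-station travel time, which the tomography then inverts for velocity. The 1-D toy below is an illustration of that principle only; the delay and series lengths are made up.

```python
import random

# Toy ambient-noise interferometry: two stations record the same random
# wavefield, one delayed by 7 samples; the cross-correlation peak
# recovers that propagation delay.

def cross_correlate(a, b, max_lag):
    """Raw cross-correlation of b against a for lags in [-max_lag, max_lag]."""
    n = len(a)
    return {lag: sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
            for lag in range(-max_lag, max_lag + 1)}

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(4000)]
delay = 7                                        # true delay in samples
sta1 = noise
sta2 = [0.0] * delay + noise[:-delay]            # same wavefield, arriving later
cc = cross_correlate(sta1, sta2, 20)
best_lag = max(cc, key=cc.get)                   # recovers the 7-sample delay
```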

  17. Mean Velocity Prediction Information Feedback Strategy in Two-Route Systems under ATIS

    Directory of Open Access Journals (Sweden)

    Jianqiang Wang

    2015-02-01

    Full Text Available Feedback contents of previous information feedback strategies in advanced traveler information systems are almost always real-time traffic information. Compared with real-time information, prediction information obtained by a reliable and effective prediction algorithm has many indisputable advantages: in a prediction-information environment, a traveler is prone to making a more rational route-choice. With these considerations, a mean velocity prediction information feedback strategy (MVPFS) is presented. The approach adopts the autoregressive integrated moving average model (ARIMA) to forecast short-term traffic flow. Furthermore, prediction results of mean velocity are taken as feedback contents and displayed on a variable message sign to guide travelers' route-choice. Meanwhile, a discrete choice model (Logit model) is selected to more appropriately imitate travelers' route-choice behavior. In order to investigate the performance of MVPFS, a cellular automaton model with ARIMA is adopted to simulate a two-route scenario. The simulation shows that such an innovative prediction feedback strategy is feasible and efficient. Even more importantly, this study demonstrates the excellence of the prediction feedback ideology.
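
    The prediction-feedback idea can be sketched with the simplest member of the ARIMA family: fit an AR(1) model to recent mean-velocity observations and display the one-step-ahead forecast instead of the last measurement. This hand-rolled stand-in and its numbers are illustrative, not the paper's ARIMA configuration.

```python
# AR(1) forecast of route mean velocity: v[t] - m = phi * (v[t-1] - m),
# fitted by least squares on demeaned data. A minimal stand-in for the
# ARIMA short-term forecast used as feedback content.

def ar1_fit(series):
    """Return (mean, phi) of a least-squares AR(1) fit."""
    m = sum(series) / len(series)
    x = [v - m for v in series]
    num = sum(x[i - 1] * x[i] for i in range(1, len(x)))
    den = sum(xi * xi for xi in x[:-1])
    return m, (num / den if den else 0.0)

def forecast_next(series):
    m, phi = ar1_fit(series)
    return m + phi * (series[-1] - m)

# Made-up recent mean velocities on one route, km/h.
recent_mean_velocity = [52.0, 50.5, 49.0, 48.2, 47.1, 46.5]
predicted = forecast_next(recent_mean_velocity)   # ≈ 47.3 km/h
```

Displaying `predicted` rather than the last reading is the whole feedback strategy in miniature: travelers choose routes against where the velocity is heading, not where it just was.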

  18. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for wind power converters with a more cost-effective solution, the target of this paper is to establish a new reliability-cost model which can connect the relationship between reliability performance and the corresponding semiconductor cost for power switching devices. First the conduction loss, switching loss as well as thermal impedance models of power switching devices (IGBT modules) are related to the semiconductor chip number information respectively. Afterwards simplified analytical solutions, which can directly extract the junction temperature mean value Tm and fluctuation amplitude ΔTj of power devices, are presented. With the proposed reliability-cost model, it is possible to enable future reliability-oriented design of the power switching devices for wind power converters, and also an evaluation benchmark for different wind power...

  19. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has an essentially time-dependent nature. The details of thinking and decision making processes are important for detailed analysis of human reliability. They have, however, not been well considered by the conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented

  20. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  1. Performance and Reliability of Bonded Interfaces for High-Temperature Packaging (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Devoto, D.

    2014-11-01

    The thermal performance and reliability of sintered-silver is being evaluated for power electronics packaging applications. This will be experimentally accomplished by the synthesis of large-area bonded interfaces between metalized substrates that will be subsequently subjected to thermal cycles. A finite element model of crack initiation and propagation in these bonded interfaces will allow for the interpretation of degradation rates by a crack-velocity (V)-stress intensity factor (K) analysis. The experiment is outlined, and the modeling approach is discussed.

  2. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based description of components failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process is highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to ease drastically the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability database such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► It results in a need for a reliability database able to deal with model-based description of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps dealing with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability database such as FIDES.

  3. A model of the instantaneous pressure-velocity relationships of the neonatal cerebral circulation.

    Science.gov (United States)

    Panerai, R B; Coughtrey, H; Rennie, J M; Evans, D H

    1993-11-01

    The instantaneous relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), measured with Doppler ultrasound in the anterior cerebral artery, is represented by a vascular waterfall model comprising vascular resistance, compliance, and critical closing pressure. One-min recordings obtained from 61 low birth weight newborns were fitted to the model using a least-squares procedure with correction for the time delay between the BP and CBFV signals. A sensitivity analysis was performed to study the effects of low-pass filtering (LPF), cutoff frequency, and noise on the estimated parameters of the model. Results indicate excellent fitting of the model (F-test, p < 0.05); velocities reconstructed from the model parameters have a mean correlation coefficient of 0.94 with the measured flow velocity tracing (N = 232 epochs). The model developed can be useful for interpreting clinical findings and as a framework for research into cerebral autoregulation.
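
    The simplest reading of the waterfall model (ignoring compliance) is CBFV = (BP - CrCP) / R, so an ordinary linear regression of velocity on pressure yields both resistance R and critical closing pressure CrCP. The sketch below illustrates that fitting idea on noise-free synthetic numbers; it is not the paper's full least-squares procedure.

```python
# Fit the compliance-free waterfall model CBFV = (BP - CrCP) / R by
# linear regression: slope = 1/R, intercept = -CrCP/R.

def linreg(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

bp = [40.0, 45.0, 50.0, 55.0, 60.0]              # mmHg across a beat
true_R, true_crcp = 2.0, 20.0                    # invented ground truth
cbfv = [(p - true_crcp) / true_R for p in bp]    # cm/s, noise-free

slope, intercept = linreg(bp, cbfv)
R_est = 1.0 / slope                              # recovers 2.0
crcp_est = -intercept / slope                    # recovers 20.0
```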

  4. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  5. Fuse Modeling for Reliability Study of Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Iannuzzo, Francesco; Blaabjerg, Frede

    2017-01-01

    This paper describes a comprehensive modeling approach on reliability of fuses used in power electronic circuits. When fuses are subjected to current pulses, cyclic temperature stress is introduced to the fuse element and will wear out the component. Furthermore, the fuse may be used in a large......, and rated voltage/current are opposed to shift in time to effect early breaking during the normal operation of the circuit. Therefore, in such cases, a reliable protection required for the other circuit components will not be achieved. The thermo-mechanical models, fatigue analysis and thermo...

  6. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields. Reliability engineers build software reliability models, etc. Safety engineers focus on prevention of potential harmful effects of systems on the environment. Software/hardware correctness engineers focus on production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. They alleviate this as follows. First, on the basis of the requirements, they build a model of the system interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are presented in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black box model no longer possesses scenarios (b) and (c) above. Second, the authors will build a chain of several increasingly detailed models, where the first model is the black box model and the last model serves to automatically generate proven executable code. The behavior of each model will be proved to conform to the behavior of the previous one. They build each model as a cluster of interactive concurrent objects, thus they allow both top-down and bottom-up development

  7. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
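
    The coupling of a performance model with a reliability model can be sketched as a toy Monte Carlo: in each realization of a year, an inverter fails at random (exponential time-to-failure), takes a fixed time to repair, and produces nothing while down. All parameters below are invented for illustration and are not PV-RPM's actual inputs.

```python
import random

# Toy probabilistic performance-reliability coupling: many realizations
# of a year of PV operation with random inverter failures and repairs.

def simulate_year(rng, mtbf_days=400.0, repair_days=14.0, daily_kwh=50.0):
    """Energy delivered in one 365-day realization."""
    t, energy = 0.0, 0.0
    while t < 365.0:
        up = rng.expovariate(1.0 / mtbf_days)   # time until next failure
        run = min(up, 365.0 - t)
        energy += run * daily_kwh               # production while up
        t += run + repair_days                  # downtime yields nothing
    return energy

rng = random.Random(42)
realizations = [simulate_year(rng) for _ in range(2000)]
mean_kwh = sum(realizations) / len(realizations)
ideal_kwh = 365.0 * 50.0
availability = mean_kwh / ideal_kwh   # a bit below 1.0 due to outages
```

Running many realizations like this is what turns a deterministic performance estimate into a distribution of possible outcomes.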

  8. Suppression of panel flutter of near-space aircraft based on non-probabilistic reliability theory

    Directory of Open Access Journals (Sweden)

    Ye-Wei Zhang

    2016-03-01

    Full Text Available The active vibration control of composite panels with uncertain parameters in hypersonic flow is studied using non-probabilistic reliability theory. Using piezoelectric patches as active control actuators, dynamic equations of the panel are established by the finite element method and Hamilton's principle, and the control model of the panel with uncertain parameters is obtained. Based on the non-probabilistic reliability index, H∞ robust control theory, and non-probabilistic reliability theory, the non-probabilistic reliability performance function is given. Moreover, the relationships between the robust controller, the H∞ performance index, and reliability are established. Numerical results show that the control method, under the influence of reliability, the H∞ performance index, and approaching velocity, is effective for the vibration suppression of the panel over the whole interval of uncertain parameters.

  9. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information such as results of developers' testing, historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at a systems level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next the mechanisms for incorporating these sources of relevant data into the FASRE model are identified

  10. Access to the kinematic information for the velocity model determination by 3-D reflexion tomography; Acces a l'information cinematique pour la determination du modele de vitesse par tomographie de reflexion 3D

    Energy Technology Data Exchange (ETDEWEB)

    Broto, K.

    1999-04-01

    The access to a reliable image of the subsurface requires a kinematically correct velocity depth model. Reflection tomography makes it possible to meet this requirement if a complete and coherent pre-stack kinematic database can be provided. However, in the case of complex subsurfaces, wave propagation may lead to hardly interpretable seismic events in the time data. The SMART method is a sequential method that relies on reflection tomography for updating the velocity model and on the pre-stack depth migrated domain for extracting kinematic information that is not readily accessible in the time domain. For determining 3-D subsurface velocity models in the case of complex structures, we propose the seriated SMART 2-D method as an alternative to the currently inconceivable SMART 3-D method. In order to extract kinematic information from a 3-D pre-stack data set, we combine detours through the 2-D pre-stack depth domain for a number of selected lines of the studied 3-D survey and 3-D reflection tomography for updating the velocity model. The travel-times from the SMART method being independent of the velocity model used for passing through the pre-stack depth migrated domain, access to 3-D travel-times is ensured, even if they have been obtained via a 2-D domain. Besides, we propose to build a kinematical guide for ensuring the coherency of the seriated 2-D pre-stack depth interpretations and access to a complete 3-D pre-stack kinematic database when dealing with structures associated with 3-D wave propagation. We opt for a blocky representation of the velocity model in order to be able to cope with complex structures. This representation leads us to define specific methodological rules for carrying out the different steps of the seriated SMART 2-D method. We also define strategies, built from the analysis of first inversion results, for an efficient application of reflection tomography. Besides, we discuss the problem of uncertainties to be assigned to travel-times obtained

  11. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability might be subjective according to a particular set of circumstances, and therefore it is not easy to quantify their reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, implementing the statistical analysis method as the most readily applicable approach. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with the general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components
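
    The arithmetic behind handbook-style (MIL-HDBK-217-like) predictions can be sketched at its simplest: a series system's failure rate is the sum of its parts' rates, and MTBF is the reciprocal. The part names and rates below are invented; real handbook rates depend on stress, quality, and environment factors.

```python
# Parts-count sketch of a handbook failure-rate prediction:
# series system rate = sum of part rates; MTBF = 1 / rate.

def series_failure_rate(rates_per_1e6_h):
    """Total failure rate of a series system, failures per 10^6 hours."""
    return sum(rates_per_1e6_h)

# Invented base failure rates, failures per 10^6 hours.
parts = {"capacitor": 0.5, "resistor": 0.1, "ic": 2.0, "connector": 0.4}
lam = series_failure_rate(parts.values())   # 3.0 failures per 10^6 h
mtbf_hours = 1e6 / lam                      # ≈ 333,333 hours
```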

  12. Scale dependence of acoustic velocities. An experimental study

    Energy Technology Data Exchange (ETDEWEB)

    Gotusso, Angelamaria Pillitteri

    2001-06-01

    Reservoir and overburden data (e.g. seismic, sonic log and core data) are collected at different stages of field development, at different scales, and under different measurement conditions. A more precise reservoir characterization could be obtained by combining all the collected data. Reliable data may also be obtained from drill cuttings. This methodology can give data in quasi-real time; it is easily applicable and cheap. It is then important to understand the relationship between results obtained from measurements at different scales. In this thesis, acoustic velocities measured at several different laboratory scales are presented. This experimental study was made in order to provide the basis for the development of a model aiming to appropriately use/combine the data collected at different scales. The two main aspects analyzed are the experimental limitations due to the decrease in sample size and the significance of measurements in relation to material heterogeneities. Plexiglas, an isotropic, non-dispersive artificial material with no expected scale effect, was used to evaluate the robustness of the measurement techniques. The results emphasize the importance of the wavelength used with respect to the sample length. If the sample length (L) is at least 5 times bigger than the wavelength used (λ), then the measured velocities do not depend on sample size. Leca stone, an artificial isotropic material containing spherical grains, was used to evaluate the combined effects of technique, heterogeneities and sample length. The ratio between the scale of the heterogeneities and the sample length has to be taken into account. In this case velocities increase with decreasing sample length when the ratio L/λ is smaller than 10-15 and at the same time the ratio between sample length and grain size is greater than 10. Measurements on natural rocks demonstrate additional influence of grain mineralogy, shape and orientation. Firenzuola sandstone shows scale and
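
    The L/λ rule of thumb reported above is easy to operationalize: compute the wavelength from velocity and transducer frequency, then check the sample is at least ~5 wavelengths long. The thresholds follow the abstract; the helper, the plexiglas velocity, and the frequencies are illustrative assumptions.

```python
# Check the sample-size criterion for acoustic velocity measurements:
# velocities are size-independent when L / lambda >= ~5 (per the abstract).

def wavelength(velocity_m_s, frequency_hz):
    return velocity_m_s / frequency_hz

def sample_size_ok(length_m, velocity_m_s, frequency_hz, min_ratio=5.0):
    return length_m / wavelength(velocity_m_s, frequency_hz) >= min_ratio

# A 5 cm plexiglas plug (P-velocity ~2700 m/s, an assumed value):
ok_case = sample_size_ok(0.05, 2700.0, 1.0e6)   # 1 MHz: L/lambda ~ 18.5
bad_case = sample_size_ok(0.05, 2700.0, 1.0e5)  # 100 kHz: L/lambda ~ 1.9
```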

  13. Spectral calculations for pressure-velocity and pressure-strain correlations in homogeneous shear turbulence

    Science.gov (United States)

    Dutta, Kishore

    2018-02-01

    Theoretical analyses of pressure-related turbulent statistics are vital for reliable and accurate modeling of turbulence. In the inertial subrange of turbulent shear flow, pressure-velocity and pressure-strain correlations are affected by anisotropy imposed at large scales. Recently, Tsuji and Kaneda (2012 J. Fluid Mech. 694 50) performed a set of experiments on homogeneous shear flow, and estimated various one-dimensional pressure-related spectra and the associated non-dimensional universal numbers. Here, starting from the governing Navier-Stokes dynamics for the fluctuating velocity field and assuming the anisotropy at inertial scales as a weak perturbation of an otherwise isotropic dynamics, we analytically derive the form of the pressure-velocity and pressure-strain correlations. The associated universal numbers are calculated using the well-known renormalization-group results, and are compared with the experimental estimates of Tsuji and Kaneda. Approximations involved in the perturbative calculations are discussed.

  14. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In this paper, power plant operation is modelled using continuous-time Markov chains with a discrete state space. The model is used to compute the power plant reliability, the importance and influence of individual states, and the transition probabilities between states. For comparison, the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
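As a minimal illustration of the approach (not the paper's actual multi-state plant model), a two-state continuous-time Markov chain with constant failure rate λ and repair rate μ already yields the classic availability formulas; the operating-state probability obeys dA/dt = −λA + μ(1−A):

```python
import math

def steady_state_availability(failure_rate, repair_rate):
    """Long-run probability of the 'operating' state of a two-state Markov chain."""
    return repair_rate / (failure_rate + repair_rate)

def availability(t, failure_rate, repair_rate):
    """Point availability A(t), starting in the operating state at t = 0.

    Closed-form solution of dA/dt = -failure_rate*A + repair_rate*(1 - A), A(0) = 1.
    """
    total_rate = failure_rate + repair_rate
    a_inf = repair_rate / total_rate
    return a_inf + (1.0 - a_inf) * math.exp(-total_rate * t)

# Illustrative rates: one failure per 1000 h, one repair per 20 h:
print(round(steady_state_availability(0.001, 0.05), 4))  # 0.9804
```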

  15. Modelling of two-phase flow based on separation of the flow according to velocity

    Energy Technology Data Exchange (ETDEWEB)

    Narumo, T. [VTT Energy, Espoo (Finland). Nuclear Energy

    1997-12-31

    The thesis concentrates on the development of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows with average velocities as distinct as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated as in the conventional model, because they are distributed very accurately according to the phases, but momentum fluctuations follow the flow velocity more closely. Submodels for the non-uniform transverse profiles of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic in any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods that utilize the characteristics. The characteristics agree well with the experimental data used on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as consistent with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved applicable for describing two-phase flow physically correctly, because both the dynamics and the steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, such as those taking place in nuclear reactors. 45 refs. The thesis also includes five previous publications by the author.

  16. Low-velocity Impact Response of a Nanocomposite Beam Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Mahdi Heydari Meybodi

    Low-velocity impact of a nanocomposite beam made of glass/epoxy reinforced with multi-wall carbon nanotubes (MWCNTs) and clay nanoparticles is investigated in this study. Using a modified rule of mixtures (MROM), the mechanical properties of the nanocomposite, including the matrix, nanoparticles or MWCNTs, and fibers, are obtained. To analyze the low-velocity impact, Euler-Bernoulli beam theory and Hertz's contact law are employed simultaneously to derive the governing equations of motion. Using Ritz's variational approximation method, a set of nonlinear equations in the time domain is obtained and solved using a fourth-order Runge-Kutta method. The effects of different parameters, such as the addition of nanoparticles or MWCNTs, stacking sequence, geometrical dimensions (i.e., length, width and height), and initial velocity of the impactor, on the maximum contact force and energy absorption have been studied comprehensively for the dynamic behavior of the nanocomposite beam. In addition, the results of the analytical model are compared with finite element modeling (FEM). The results reveal that the effect of nanoparticles on energy absorption is more considerable at higher impact energies.
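The solution strategy (a Hertz-type contact force driving a nonlinear equation of motion, integrated with a classical fourth-order Runge-Kutta scheme) can be illustrated on the simplest possible case: a rigid point mass indenting a surface, m·ẍ = −k·x^(3/2). This is a hedged sketch with made-up constants, not the authors' beam model:

```python
def hertz_impact_max_force(m, k, v0, dt=1e-7):
    """Integrate m*x'' = -k*x**1.5 (x = indentation) with RK4; return peak force.

    Contact starts at x = 0 with approach speed v0 and ends when x returns to 0.
    """
    def acc(x):
        return -(k / m) * max(x, 0.0) ** 1.5

    x, v, f_max = 0.0, v0, 0.0
    while x >= 0.0:
        # classical fourth-order Runge-Kutta step for the state (x, v)
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt * (k1x + 2.0 * k2x + 2.0 * k3x + k4x) / 6.0
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        f_max = max(f_max, k * max(x, 0.0) ** 1.5)
    return f_max

# 10 g impactor, k = 1e9 N/m^1.5, 2 m/s. Energy balance gives the analytic check:
# x_max = (5*m*v0^2 / (4*k))^(2/5), F_max = k*x_max^(3/2).
x_max = (5 * 0.01 * 2.0**2 / (4 * 1e9)) ** 0.4
f_ana = 1e9 * x_max ** 1.5
f_num = hertz_impact_max_force(0.01, 1e9, 2.0)
print(f_num, f_ana)  # the two values should agree closely
```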

  17. Consideration of some difficulties in migration velocity analysis; Migration velocity analysis no shomondai ni kansuru kento

    Energy Technology Data Exchange (ETDEWEB)

    Akama, K [Japan National Oil Corp., Tokyo (Japan). Technology Research Center; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1997-10-22

    Concerning migration velocity analysis in the seismic exploration method, two typical techniques among the velocity analysis techniques using residual moveout in the CIP gather are verified. Deregowski's method uses prestack depth-migrated records for velocity analysis to obtain velocities that are free of spatial inconsistency and not dependent on the velocity structure. This method closely resembles the conventional DMO velocity analysis method and is easy to understand intuitively. In this method, however, error tends to be amplified in the process of obtaining the depth-interval velocity from the time-RMS velocity. Al-Yahya's method formulates the moveout residual in the CIP gather. It assumes horizontal stratification and a small residual velocity, however, and fails to guarantee convergence in the case of a steep structure or a large model error. In updating the velocity model, in addition, it has to maintain the required accuracy and, at the same time, incorporate smoothing so as not to degrade convergence. 2 refs., 5 figs.

  18. Spectral Velocity Estimation in the Transverse Direction

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    A method for estimating the velocity spectrum for a fully transverse flow at a beam-to-flow angle of 90° is described. The approach is based on the transverse oscillation (TO) method, where an oscillation across the ultrasound beam is made during receive processing. A fourth-order estimator based on the correlation of the received signal is derived. A Fourier transform of the correlation signal yields the velocity spectrum. Performing the estimation for short data segments gives the velocity spectrum as a function of time, as for ordinary spectrograms, and it also works for a beam-to-flow angle of 90°. The estimation scheme can reliably find the spectrum at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with the SARUS experimental scanner and a BK 8820e convex array transducer (BK Medical, Herlev, Denmark). A CompuFlow 1000 (Shelley Automation, Inc, Toronto, Canada...

  19. 3D Crustal Velocity Structure Model of the Middle-eastern North China Craton

    Science.gov (United States)

    Duan, Y.; Wang, F.; Lin, J.; Wei, Y.

    2017-12-01

    Lithosphere thinning and destruction in the middle-eastern North China Craton (NCC), a region susceptible to strong earthquakes, is one of the research hotspots in solid earth science. Up to 42 wide-angle reflection/refraction deep seismic sounding (DSS) profiles have been completed in the middle-eastern NCC. We collected all the 2D profiling results, gridded the velocity and interface depth data, and built a 3D crustal velocity structure model for the middle-eastern NCC, named HBCrust1.0, using the Kriging interpolation method. In this model, four layers are divided by three interfaces: G is the interface between the sedimentary cover and the crystalline crust, with velocities of 5.0-5.5 km/s above and 5.8-6.0 km/s below; C is the interface between the upper and lower crust, with a velocity jump from 6.2-6.4 km/s to 6.5-6.6 km/s; M is the interface between the crust and upper mantle, with velocities of 6.7-7.0 km/s at the bottom of the crust and 7.9-8.0 km/s at the top of the mantle. Our results show that the first arrival times calculated from HBCrust1.0 fit well with the observations. They also demonstrate that the upper crust is the main seismogenic layer and that the brittle-ductile transition occurs at depths near interface C. The depth of the Moho varies beneath the source area of the Tangshan earthquake, and a low-velocity structure is found to extend from the source area to the lower crust. Based on these observations, it can be inferred that the stress accumulation responsible for the Tangshan earthquake may have been closely related to the migration and deformation of mantle materials. Comparisons of the average velocities of the whole crust, the upper crust and the lower crust show that the average velocity of the lower crust under the central part of the North China Basin (NCB), in the east of the craton, is distinctly higher than the regional average; this high velocity probably results from long-term underplating of mantle magma. 
This research is funded by the Natural Science
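The gridding step named in the abstract, Kriging interpolation, can be sketched in one dimension. The implementation below is a minimal ordinary kriging with an assumed exponential variogram and made-up sample values; the authors' actual variogram model and parameters are not given in the abstract:

```python
import math

def ordinary_kriging(xs, zs, x0, vrange=40.0, sill=1.0):
    """1-D ordinary kriging estimate at x0 from samples (xs, zs).

    Assumes an exponential variogram gamma(h) = sill*(1 - exp(-|h|/vrange)).
    Solves the kriging system [[Gamma, 1], [1^T, 0]] [w, mu] = [gamma0, 1]
    by Gaussian elimination; the estimate is sum_i w_i * z_i.
    """
    n = len(xs)

    def gamma(h):
        return sill * (1.0 - math.exp(-abs(h) / vrange))

    # Augmented (n+1) x (n+2) system; the last column is the right-hand side.
    A = [[gamma(xs[i] - xs[j]) for j in range(n)] + [1.0, gamma(xs[i] - x0)]
         for i in range(n)]
    A.append([1.0] * n + [0.0, 1.0])           # unbiasedness: sum of weights = 1

    m = n + 1
    for col in range(m):                       # elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        w[r] = (A[r][m] - sum(A[r][c] * w[c] for c in range(r + 1, m))) / A[r][r]

    return sum(w[i] * zs[i] for i in range(n))

# Made-up interface depths (km) sampled at four profile positions (km):
xs, zs = [0.0, 30.0, 60.0, 100.0], [33.0, 34.5, 36.0, 38.0]
print(ordinary_kriging(xs, zs, 45.0))  # estimate between the 30 km and 60 km samples
```

Kriging is an exact interpolator: at a sampled position it returns the sample value, and the unit-sum weight constraint means a constant field is reproduced everywhere.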

  20. On reliability and maintenance modelling of ageing equipment in electric power systems

    International Nuclear Information System (INIS)

    Lindquist, Tommie

    2008-04-01

    Maintenance optimisation is essential to achieve cost-efficiency, availability and reliability of supply in electric power systems. The process of maintenance optimisation requires information about the costs of preventive and corrective maintenance, as well as the costs of failures borne by both electricity suppliers and customers. To calculate expected costs, information is needed about equipment reliability characteristics and the way in which maintenance affects equipment reliability. The aim of this Ph.D. work has been to develop equipment reliability models taking the effect of maintenance into account. The research has focussed on the interrelated areas of condition estimation, reliability modelling and maintenance modelling, which have been investigated in a number of case studies. In the area of condition estimation, two methods to quantitatively estimate the condition of disconnector contacts have been developed, which utilise results from infrared thermography inspections and contact resistance measurements. The accuracy of these methods was investigated in two case studies. Reliability models have been developed and implemented for SF6 circuit-breakers, disconnector contacts and XLPE cables in three separate case studies. These models were formulated using both empirical and physical modelling approaches. To improve confidence in such models, a Bayesian statistical method incorporating information from the equipment design process was also developed. This method was illustrated in a case study of SF6 circuit-breaker operating rods. Methods for quantifying the effect of maintenance on equipment condition and reliability have been investigated in case studies on disconnector contacts and SF6 circuit-breakers. The inputs required by these methods are condition measurements and historical failure and maintenance data, respectively. This research has demonstrated that the effect of maintenance on power system equipment may be quantified using available data.

  1. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including: quality and reliability; reliability data; the importance of reliability engineering; reliability measures; the Poisson process (goodness-of-fit tests and the Poisson arrival model); reliability estimation (e.g. for the exponential distribution); reliability of systems; availability; preventive maintenance, including replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection; analysis of common cause failures; and analysis models of repair effect.

  2. INTERSESSION RELIABILITY OF UPPER EXTREMITY ISOKINETIC PUSH-PULL TESTING.

    Science.gov (United States)

    Riemann, Bryan L; Davis, Sarah E; Huet, Kevin; Davies, George J

    2016-02-01

    Based on the frequency with which pushing and pulling patterns are used in functional activities, there is a need to establish an objective method of quantifying the muscle performance characteristics associated with these motions, particularly during the later stages of rehabilitation as criteria for discharge. While isokinetic assessment offers an approach to quantifying muscle performance, little is known about closed kinetic chain (CKC) isokinetic testing of the upper extremity (UE). The purpose was to determine the intersession reliability of isokinetic upper extremity measurement of pushing and pulling peak force (PF) and average power (AP) at slow (0.24 m/s), medium (0.43 m/s) and fast (0.61 m/s) velocities in healthy young adults. The secondary purpose was to compare pushing and pulling PF and AP between the upper extremity limbs (dominant, non-dominant) across the three velocities. Twenty-four physically active men and women completed a test-retest (>96 hours) protocol in order to establish isokinetic UE CKC reliability of PF and AP during five maximal push and pull repetitions at three velocities. Both limb and speed orders were randomized between subjects. High test-retest relative reliability using intraclass correlation coefficients (ICC(2,1)) was revealed for PF (.91-.97) and AP (.85-.95) across velocities, limbs and directions. PF typical error (% coefficient of variation) ranged from 6.1% to 11.3%, while AP ranged from 9.9% to 26.7%. PF decreased significantly as velocity increased, and PF values for pushing were significantly greater than for pulling at all velocities; however, the push-pull differences in PF became smaller as velocity increased. No significant differences were identified between the dominant and non-dominant limbs. Isokinetically derived UE CKC push-pull PF and AP are reliable measures. The lack of limb differences in healthy participants suggests that clinicians can consider bilateral comparisons when interpreting test performance. The increase in pushing PF and

  3. A P-wave velocity model of the upper crust of the Sannio region (Southern Apennines, Italy)

    Directory of Open Access Journals (Sweden)

    M. Cocco

    1998-06-01

    This paper describes the results of a seismic refraction profile conducted in October 1992 in the Sannio region, Southern Italy, to obtain a detailed P-wave velocity model of the upper crust. The profile, 75 km long, extended parallel to the Apenninic chain in a region frequently damaged in historical times by strong earthquakes. Six shots were fired at five sites and recorded by a number of seismic stations ranging from 41 to 71, with a spacing of 1-2 km along the recording line. We used a two-dimensional ray-tracing technique to model travel times and amplitudes of first and second arrivals. The obtained P-wave velocity model has a shallow structure with strong lateral variations in the southern portion of the profile. Near-surface sediments of Tertiary age are characterized by seismic velocities in the 3.0-4.1 km/s range. In the northern part of the profile these deposits overlie a layer with a velocity of 4.8 km/s that has been interpreted as a Mesozoic sedimentary succession. A high-velocity body, corresponding to the limestones of the Western Carbonate Platform with a velocity of 6 km/s, characterizes the southernmost part of the profile at shallow depths. At a depth of about 4 km the model becomes laterally homogeneous, showing a continuous layer with a thickness in the 3-4 km range and a velocity of 6 km/s corresponding to the Meso-Cenozoic limestone succession of the Apulia Carbonate Platform. This platform appears to be layered, as indicated by an increase in seismic velocity from 6 to 6.7 km/s at depths in the 6-8 km range, which has been interpreted as a lithological transition from limestones to Triassic dolomites and anhydrites of the Burano formation. 
A lower P-wave velocity of about 5.0-5.5 km/s is hypothesized at the bottom of the Apulia Platform at depths ranging from 10 km down to 12.5 km; these low velocities could be related to Permo-Triassic siliciclastic deposits of the Verrucano sequence drilled at the bottom of the Apulia

  4. The reliability of the Adelaide in-shoe foot model.

    Science.gov (United States)

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice, such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, how the foot moves inside the shoe largely remains unknown. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and an intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions are the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which showed less than acceptable repeatability. Based on our results, the Adelaide In-Shoe Foot Model can be used with confidence for 24 commonly measured biomechanical variables during shod walking.
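Intra-class correlation coefficients of the kind used as reliability thresholds here can be computed from a two-way ANOVA decomposition. Below is a minimal sketch for ICC(2,1) (two-way random effects, absolute agreement, single measures) on illustrative data, not the study's:

```python
def icc_2_1(ratings):
    """ICC(2,1) for a table ratings[subject][rater] (two-way random, single measures)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-subject
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-rater
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Three subjects rated in two sessions; one discordant score in the last row:
print(round(icc_2_1([[1.0, 1.0], [2.0, 2.0], [3.0, 4.0]]), 3))  # 0.9
```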

  5. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS linac (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using risk spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS linac parts/systems are: 1) the SCL (superconducting linac) and the front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) the RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Sufficient diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.

  6. High-velocity Penetration of Concrete Targets with Three Types of Projectiles: Experiments and Analysis

    Directory of Open Access Journals (Sweden)

    Shuang Zhang

    This study conducted high-velocity penetration experiments using conventional ogive-nose, double-ogive-nose, and grooved-tapered projectiles of approximately 2.5 kg with initial velocities between 1000 and 1360 m/s to penetrate or perforate concrete targets with unconfined compressive strengths of nominally 40 MPa. Penetration performance data for these three types of projectiles in two different materials (AerMet100 and DT300) were obtained. A crater depth model considering both the projectile mass and the initial velocity was proposed based on the test results and a theoretical analysis. The penetration ability and the trajectory stability of the three projectile types were compared and analyzed accordingly. The results showed that, under these experimental conditions, the effects of the two projectile materials on the penetration depth and the mass erosion rate of the projectile were not obvious. The existing models could not reproduce the crater depths for projectiles of greater weight or higher velocity, whereas the new model established in this study was reliable. The double-ogive nose has a certain drag-reduction effect; thus, the double-ogive-nose projectile has a higher penetration ability than the conventional ogive-nose projectile. Meanwhile, the grooved-tapered projectile has better trajectory stability, because the convex parts of the tapered shank generate a restoring moment that stabilizes the trajectory.

  7. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have improved considerably in recent years and have demonstrated their ability to deal with uncertainties during the design stage and to optimize the operation and maintenance of industrial installations. They are based on a mechanical model of the structural behavior according to the considered failure modes and on a probabilistic representation of the input parameters of this model. In practice, only limited statistical information is available to build the probabilistic representation, and different levels of sophistication of the mechanical model may be introduced. Thus, besides the physical randomness, other uncertainties enter such analyses. The aim of this work is threefold: 1. first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in the reliability analyses; the obtained reliability index measures the confidence in the structure given the statistical information available. 2. Second, to present a methodology leading to reliability results evaluated for a particular mechanical model but using a less sophisticated one; the objective is to decrease the computational effort required by the reference model. 3. Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical model used. The concepts are illustrated for a welded pipe and for a natural draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr

  8. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

    Technical reliability plays an important role among the factors affecting the power output of a wind farm. The reliability is determined by the internal collection grid topology and the reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A quantitative measure of wind farm reliability is the probability distribution of combinations of operating and failed states of the farm's wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to the external power grid, which requires the availability of the wind turbine and of the other equipment necessary for power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and of the reliability of individual farm components on farm reliability, and for determining the expected farm output power with consideration of reliability. This knowledge may be useful in analysing power generation reliability in power systems. The paper presents probabilistic models that quantify wind farm reliability taking into account the above-mentioned technical factors. To formulate the reliability models, Bayesian networks and semi-Markov processes were used. Bayesian networks map the wind farm's structural reliability as well as the quantitative characteristics describing equipment reliability; semi-Markov processes are used to determine those characteristics. The paper presents an example calculation of: (i) the probability distribution of the combinations of operating and failed states of the four wind turbines included in the wind farm, and (ii) the expected wind farm output power with consideration of its reliability.
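Stripping the model down to its simplest special case (independent turbines and no collection-grid topology, which is exactly what the paper's Bayesian network formulation goes beyond), the distribution over operating/failed state combinations and the expected output can be enumerated directly. Availabilities and ratings below are illustrative:

```python
from itertools import product

def state_distribution(availabilities):
    """P(state vector) for independent turbines; 1 = operating, 0 = failed."""
    dist = {}
    for states in product((1, 0), repeat=len(availabilities)):
        p = 1.0
        for s, a in zip(states, availabilities):
            p *= a if s else 1.0 - a
        dist[states] = p
    return dist

def expected_output(availabilities, rated_power_per_turbine):
    """Expected farm output, weighting each state combination by its probability."""
    dist = state_distribution(availabilities)
    return sum(p * sum(states) * rated_power_per_turbine
               for states, p in dist.items())

# Four 2-MW turbines, each available 90% of the time:
dist = state_distribution([0.9] * 4)
print(dist[(1, 1, 1, 1)])               # 0.9**4, i.e. about 0.6561
print(expected_output([0.9] * 4, 2.0))  # 4 * 0.9 * 2.0, i.e. about 7.2 MW
```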

  9. Shear-wave velocity models and seismic sources in Campanian volcanic areas: Vesuvius and Phlegraean fields

    Energy Technology Data Exchange (ETDEWEB)

    Guidarelli, M; Zille, A; Sarao, A [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Natale, M; Nunziata, C [Dipartimento di Geofisica e Vulcanologia, Universita di Napoli 'Federico II', Napoli (Italy); Panza, G F [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2006-12-15

    This chapter summarizes a comparative study of shear-wave velocity models and seismic sources in the Campanian volcanic areas of Vesuvius and the Phlegraean Fields. The velocity models were obtained through nonlinear inversion of surface-wave tomography data, using as a priori constraints the relevant information available in the literature. Local group velocity data were obtained by means of frequency-time analysis for periods between 0.3 and 2 s, and were combined with group velocity data for periods between 10 and 35 s from regional events located in the Italian peninsula and bordering areas, and with two-station phase velocity data for periods between 25 and 100 s. In order to invert the Rayleigh wave dispersion curves, we applied the nonlinear inversion method called 'hedgehog' and retrieved average models for the first 30-35 km of the lithosphere, with the lower part of the upper mantle kept fixed on the basis of existing regional models. A feature common to the two volcanic areas is a low-shear-velocity layer centered at a depth of about 10 km, while outside the cone and along a path in the northeastern part of the Vesuvius area this layer is absent. This low velocity can be associated with the presence of partial melting and may therefore represent a rather diffuse crustal magma reservoir, fed by a deeper one that is regional in character and located in the uppermost mantle. The study of the seismic source in terms of the moment tensor is suitable for investigating physical processes within a volcano; indeed, its components (double couple, compensated linear vector dipole, and volumetric) can be related to the movements of magma and fluids within the volcanic system. Although for many recent earthquake events the percentage of the double couple component is high, our results also show the presence of significant non-double-couple components in both volcanic areas. (author)

  10. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-02-01

    The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important components for mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems such as an EDG in a nuclear power plant. This method, however, has limitations in modeling the dynamic features of safety systems exactly. We have therefore developed a Markov model to represent the stochastic process of dynamic systems whose states change over time. The Markov model enables us to develop a dynamic reliability model of the EDG. This model can represent all possible states of the EDG, in contrast to the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems. To assess the regulation policy for the test interval, we performed two simulations based on generic data and on plant-specific data of YGN 3, respectively, using the developed model. We also estimated the effects of various repair rates and of the fraction of starting failures due to demand shock on the reliability of the EDG. Finally, the aging effect is analyzed. (author). 23 refs., 19 figs., 9 tabs.

  11. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters the values of which can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves which represent the probability of control room crew non-response as a function of time for different conditions affecting their performance. The non-response probability is then a contributor to the overall non-success of operating crews to achieve a functional objective identified in the PRA study. Simulator data and some small scale tests were utilized to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing since the data were sparse. The model can potentially help PRA analysts make human reliability assessments more explicit. The model incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources and crew response time data from simulator training exercises
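The non-response curves described here have the shape of a three-parameter Weibull in normalized time t/T½, where T½ is the crew's median response time. The sketch below uses made-up coefficients purely for illustration; the calibrated values, which depend on whether the cognitive processing is skill-, rule- or knowledge-based, are given in the EPRI work and are not reproduced here:

```python
import math

def hcr_non_response(t, t_median, c_gamma, c_eta, c_beta):
    """Probability the crew has NOT responded by time t (HCR-style Weibull form).

    t_median: median crew response time; c_gamma: normalized response-time
    threshold; c_eta: scale; c_beta: shape. All coefficients illustrative.
    """
    tau = t / t_median
    if tau <= c_gamma:            # below the threshold nobody has responded yet
        return 1.0
    return math.exp(-(((tau - c_gamma) / c_eta) ** c_beta))

# Illustrative coefficients only (not calibrated values):
for t in (30.0, 60.0, 120.0, 300.0):
    print(t, hcr_non_response(t, t_median=60.0, c_gamma=0.7, c_eta=0.4, c_beta=1.2))
```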

  12. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
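The system-level quantity being estimated is the probability that any component limit state fails, p_f = P(min_i g_i(X) ≤ 0) for a series system. The sketch below evaluates that by plain Monte Carlo on two cheap analytic limit states; it is illustrative only, since the paper's contribution is precisely to replace expensive g_i evaluations with locally accurate Gaussian process surrogates:

```python
import random

def system_failure_probability(limit_states, draw_sample, n=200_000, seed=7):
    """Series-system Monte Carlo: the system fails when any g_i(x) <= 0."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        x = draw_sample(rng)
        if min(g(x) for g in limit_states) <= 0.0:
            failures += 1
    return failures / n

# Two failure modes of a standard normal variable: g1 = 3 - x, g2 = 3 + x,
# so the system fails when |x| >= 3 (true p_f is about 0.0027).
p = system_failure_probability([lambda x: 3.0 - x, lambda x: 3.0 + x],
                               lambda rng: rng.gauss(0.0, 1.0))
print(p)
```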

  13. Lithospheric structure of the Arabian Shield and Platform from complete regional waveform modelling and surface wave group velocities

    Science.gov (United States)

    Rodgers, Arthur J.; Walter, William R.; Mellors, Robert J.; Al-Amri, Abdullah M. S.; Zhang, Yu-Shen

    1999-09-01

    Regional seismic waveforms reveal significant differences in the structure of the Arabian Shield and the Arabian Platform. We estimate lithospheric velocity structure by modelling regional waveforms recorded by the 1995-1997 Saudi Arabian Temporary Broadband Deployment using a grid search scheme. We employ a new method whereby we narrow the waveform modelling grid search by first fitting the fundamental-mode Love and Rayleigh wave group velocities. The group velocities constrain the average crustal thickness and velocities as well as the crustal velocity gradients. Because the group velocity fitting is computationally much faster than the synthetic seismogram calculation, this method allows us to determine good average starting models quickly. Waveform fits of the Pn and Sn body wave arrivals constrain the mantle velocities. The resulting lithospheric structures indicate that the Arabian Platform has an average crustal thickness of 40 km, with relatively low crustal velocities (average crustal P- and S-wave velocities of 6.07 and 3.50 km s^-1, respectively) and no strong velocity gradient. The Moho is shallower (36 km) and crustal velocities are 6 per cent higher (with a velocity increase with depth) for the Arabian Shield. The fast crustal velocities of the Arabian Shield result from a predominantly mafic composition in the lower crust. Lower velocities in the Arabian Platform crust indicate a bulk felsic composition, consistent with the orogenesis of this former active margin. P- and S-wave velocities immediately below the Moho are slower in the Arabian Shield than in the Arabian Platform (7.9 and 4.30 km s^-1 versus 8.10 and 4.55 km s^-1, respectively). This indicates that the Poisson's ratios for the uppermost mantle of the Arabian Shield and Platform are 0.29 and 0.27, respectively. The lower sub-Moho velocities and higher Poisson's ratio beneath the Arabian Shield probably arise from a partially molten mantle associated with Red Sea spreading and continental
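The Poisson's ratios quoted follow directly from the Vp/Vs pairs via the standard isotropic relation σ = (Vp² − 2Vs²) / (2(Vp² − Vs²)); a quick check against the abstract's numbers:

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio of an isotropic solid from P- and S-wave speeds."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Sub-Moho velocities from the abstract (km/s):
print(round(poissons_ratio(7.9, 4.30), 2))   # Arabian Shield -> 0.29
print(round(poissons_ratio(8.10, 4.55), 2))  # Arabian Platform -> 0.27
```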

  14. Simultaneous travel time tomography for updating both velocity and reflector geometry in triangular/tetrahedral cell model

    Science.gov (United States)

    Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu

    2018-05-01

    To conduct forward modeling and simultaneous inversion in a complex geological model, including irregular topography (or an irregular reflector or velocity anomaly), we combine our previous multiphase arrival-tracking method (referred to as the triangular shortest-path method, TSPM) on triangular (2D) or tetrahedral (3D) cell models with a linearized inversion solver (a damped minimum-norm and constrained least-squares problem solved using the conjugate gradient method, referred to as DMNCLS-CG) to formulate a simultaneous travel time inversion method that updates both velocity and reflector geometry using multiphase arrival times. In the triangular/tetrahedral cells, we derive the partial derivative of the velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the forward modeling can be tuned to high precision and that irregular velocity anomalies and reflector geometry are accurately captured in the simultaneous inversion, because triangular/tetrahedral cells easily conform to irregular topography and subsurface interfaces.
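    The shortest-path idea behind TSPM can be illustrated, in highly simplified form, by Dijkstra's algorithm on a small graph whose edge weights are traversal times; the mesh and times below are invented for illustration, not from the paper:

```python
import heapq

def travel_times(adj, source):
    """Dijkstra shortest-path travel times from a source node.
    adj maps node -> list of (neighbour, traversal_time) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        for nbr, t in adj.get(node, []):
            nd = d + t
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Tiny hypothetical "mesh": the direct A-C edge (4 s) loses to A-B-C (3 s)
adj = {"A": [("B", 1.0), ("C", 4.0)],
       "B": [("A", 1.0), ("C", 2.0)],
       "C": [("B", 2.0), ("A", 4.0)]}
dist = travel_times(adj, "A")
```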

  15. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  16. Optic-microwave mixing velocimeter for superhigh velocity measurement

    International Nuclear Information System (INIS)

    Weng Jidong; Wang Xiang; Tao Tianjiong; Liu Cangli; Tan Hua

    2011-01-01

    The phenomenon that a light beam reflected off a moving object experiences a Doppler shift in its frequency underlies practical interferometric techniques for remote velocity measurements, such as the velocity interferometer system for any reflector (VISAR), the displacement interferometer system for any reflector (DISAR), and photonic Doppler velocimetry (PDV). While VISAR velocimeters often suffer fringe loss when diagnosing high-acceleration dynamic processes, optic-fiber velocimeters such as DISAR and PDV struggle with velocity measurements above 10 km/s because of the digitizer bandwidth they demand. Here, we describe a new optic-microwave mixing velocimeter (OMV) for super-high velocity measurements. Using currently available commercial microwave products, we have constructed a simple, compact, and reliable OMV device and have successfully obtained, with a digitizer of only 6 GHz bandwidth, the precise velocity history of an aluminum flyer plate accelerated up to 11.2 km/s in a three-stage gas-gun experiment.
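    The bandwidth problem can be seen from the PDV beat-frequency relation f = 2v/λ; assuming a typical 1550 nm fiber laser (the wavelength is our assumption, not stated in the abstract), the 11.2 km/s flyer produces a beat well beyond a 6 GHz digitizer:

```python
def pdv_beat_frequency(velocity_m_s, wavelength_m=1.55e-6):
    """Doppler beat frequency of a PDV-style interferometer: f = 2 v / lambda.
    The 1550 nm default wavelength is an assumption for illustration."""
    return 2.0 * velocity_m_s / wavelength_m

# 11.2 km/s flyer velocity from the abstract -> roughly 14.5 GHz beat
f_beat = pdv_beat_frequency(11.2e3)
```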

  17. Gas-hydrate concentration estimated from P- and S-wave velocities at the Mallik 2L-38 research well, Mackenzie Delta, Canada

    Science.gov (United States)

    Carcione, José M.; Gei, Davide

    2004-05-01

    We estimate the concentration of gas hydrate at the Mallik 2L-38 research site using P- and S-wave velocities obtained from well logging and vertical seismic profiles (VSP). The theoretical velocities are obtained from a generalization of Gassmann's modulus to three phases (rock frame, gas hydrate and fluid). The dry-rock moduli are estimated from the log profiles, in sections where the rock is assumed to be fully saturated with water. We obtain hydrate concentrations up to 75%, average values of 37% and 21% from the VSP P- and S-wave velocities, respectively, and 60% and 57% from the sonic-log P- and S-wave velocities, respectively. The above averages are similar to estimations obtained from hydrate dissociation modeling and Archie methods. The estimations based on the P-wave velocities are more reliable than those based on the S-wave velocities.
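    The role of Gassmann's modulus can be sketched with the classical two-phase form (the paper uses a three-phase generalization for rock frame, hydrate and fluid, which is not reproduced here); all numerical inputs below are illustrative, not from the study:

```python
def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Classical Gassmann saturated bulk modulus (Pa):
    K_sat = K_dry + (1 - K_dry/K_min)^2
            / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def p_velocity(k_sat, mu, rho):
    """Vp from saturated bulk modulus, shear modulus and bulk density."""
    return ((k_sat + 4.0 * mu / 3.0) / rho) ** 0.5

# Illustrative values: quartz mineral, water fluid, 35% porosity
k_sat = gassmann_k_sat(k_dry=5e9, k_min=36e9, k_fl=2.25e9, phi=0.35)
vp = p_velocity(k_sat, mu=3e9, rho=2100.0)
```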

  18. Milgrom Relation Models for Spiral Galaxies from Two-Dimensional Velocity Maps

    OpenAIRE

    Barnes, Eric I.; Kosowsky, Arthur; Sellwood, Jerry A.

    2007-01-01

    Using two-dimensional velocity maps and I-band photometry, we have created mass models of 40 spiral galaxies using the Milgrom relation (the basis of modified Newtonian dynamics, or MOND) to complement previous work. A Bayesian technique is employed to compare several different dark matter halo models to Milgrom and Newtonian models. Pseudo-isothermal dark matter halos provide the best statistical fits to the data in a majority of cases, while the Milgrom relation generally provides good fits...

  19. Velocity Model for CO2 Sequestration in the Southeastern United States Atlantic Continental Margin

    Science.gov (United States)

    Ollmann, J.; Knapp, C. C.; Almutairi, K.; Almayahi, D.; Knapp, J. H.

    2017-12-01

    The sequestration of carbon dioxide (CO2) is emerging as a major player in offsetting anthropogenic greenhouse gas emissions. With 40% of the United States' anthropogenic CO2 emissions originating in the southeast, characterizing potential CO2 sequestration sites is vital to reducing the United States' emissions. The goal of this research project, funded by the Department of Energy (DOE), is to estimate the CO2 storage potential for the Southeastern United States Atlantic Continental Margin. Previous studies find storage potential in the Atlantic continental margin. Up to 16 Gt and 175 Gt of storage potential are estimated for the Upper Cretaceous and Lower Cretaceous formations, respectively. Considering 2.12 Mt of CO2 are emitted per year by the United States, substantial storage potential is present in the Southeastern United States Atlantic Continental Margin. In order to produce a time-depth relationship, a velocity model must be constructed. This velocity model is created using previously collected seismic reflection, refraction, and well data in the study area. Seismic reflection horizons were extrapolated using well log data from the COST GE-1 well. An interpolated seismic section was created using these seismic horizons. A velocity model will be made using P-wave velocities from seismic reflection data. Once the time-depth conversion is complete, the depths of stratigraphic units in the seismic refraction data will be compared to the newly assigned depths of the seismic horizons. With a lack of well control in the study area, the addition of stratigraphic unit depths from 171 seismic refraction recording stations provides adequate data to tie to the depths of picked seismic horizons. Using this velocity model, the seismic reflection data can be presented in depth in order to estimate the thickness and storage potential of CO2 reservoirs in the Southeastern United States Atlantic Continental Margin.
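    The time-depth conversion described above amounts to accumulating interval velocity times half the two-way interval time per layer; a minimal sketch with invented interval values (not from the GE-1 well):

```python
def time_to_depth(interval_velocities, twt_intervals):
    """Cumulative depths (m) from interval velocities (m/s) and two-way
    travel-time intervals (s): each layer adds v * (dt / 2)."""
    depth = 0.0
    depths = []
    for v, dt in zip(interval_velocities, twt_intervals):
        depth += v * dt / 2.0
        depths.append(depth)
    return depths

# Hypothetical two-layer column: 2000 m/s over 1.0 s TWT, 3000 m/s over 0.5 s
depths = time_to_depth([2000.0, 3000.0], [1.0, 0.5])
```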

  20. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters, which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data, classified as either time-type or demand-type failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure-rate component data; 2) failure-rate inference and reliability prediction from time-type failure data; 3) analyses of demand-type failure data; 4) a common mode failure model applicable to time-type failure data; 5) estimation of common mode failures from 'near-miss' demand-type failure data.

  1. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

    In this paper, the stochastic automata network (SAN) formalism is applied to model the reliability of power system substations. The proposed strategy reduces the size of the state space of the Markov chain model and simplifies system specification. Two case studies of standard substation configurations are considered in detail. SAN models with different assumptions were created. The SAN approach is compared with an exact reliability calculation using a minimal path set method. Modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment. In this case, implementing the Markov chain model with the SAN method is a relatively easy task. - Highlights: • We present the methodology to apply the stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results of systems with independent automata and functional transition rates are similar. • The conditions when total independence of automata can be assumed are addressed.
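    The exact minimal-path-set calculation mentioned above can be sketched via inclusion-exclusion over path sets; the bridge network below is a textbook example, not one of the paper's substation configurations:

```python
from itertools import combinations

def system_reliability(path_sets, p):
    """Exact system reliability by inclusion-exclusion over minimal path
    sets, assuming independent components; p maps component -> reliability."""
    total = 0.0
    for k in range(1, len(path_sets) + 1):
        for combo in combinations(path_sets, k):
            comps = set().union(*combo)
            prob = 1.0
            for c in comps:
                prob *= p[c]
            total += (-1) ** (k + 1) * prob
    return total

# Classic 5-component bridge network: components 1-2 top, 3-4 bottom, 5 bridge
bridge_paths = [{1, 2}, {3, 4}, {1, 5, 4}, {3, 5, 2}]
p = {c: 0.9 for c in range(1, 6)}
R = system_reliability(bridge_paths, p)  # matches 2p^2 + 2p^3 - 5p^4 + 2p^5
```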

  2. Reliability modeling and analysis of smart power systems

    CERN Document Server

    Karki, Rajesh; Verma, Ajit Kumar

    2014-01-01

    The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti

  3. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform, and taking the current international reliability research model (human cognition reliability) as a reference, part of the model was modified according to the actual status of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators was obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators were carried out using this two-parameter Weibull distribution research model, and the results agree with those reported internationally. The research would benefit the operational safety of nuclear power plants.
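    The two-parameter Weibull model referenced above gives reliability as the survival function R(t) = exp(−(t/η)^β); a minimal sketch with illustrative shape and scale values, not the paper's fitted parameters:

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull survival function: R(t) = exp(-(t/eta)**beta).
    beta is the shape parameter, eta the scale (characteristic life)."""
    return math.exp(-((t / eta) ** beta))

# Illustrative parameters only
r_early = weibull_reliability(10.0, beta=1.5, eta=100.0)
r_late = weibull_reliability(200.0, beta=1.5, eta=100.0)
```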

  4. A vorticity transport model to restore spatial gaps in velocity data

    Science.gov (United States)

    Ameli, Siavash; Shadden, Shawn

    2017-11-01

    Often measurements of velocity data do not have full spatial coverage in the probed domain or near boundaries. These gaps can be due to missing measurements or masked regions of corrupted data. They confound interpretation and are problematic when the data are used to compute Lagrangian or trajectory-based analyses. Various techniques have been proposed to overcome coverage limitations in velocity data, such as unweighted least-squares fitting, empirical orthogonal function analysis, variational interpolation, and boundary modal analysis. In this talk, we present a vorticity transport PDE to reconstruct regions of missing velocity vectors. The transport model involves both nonlinear anisotropic diffusion and advection. This approach is shown to preserve the main features of the flow even in cases of large gaps, and the reconstructed regions are continuous up to second order. We illustrate results for high-frequency radar (HFR) measurements of ocean surface currents, as this is a common application with limited coverage. We demonstrate that the error of the method is on the same order as the error of the original velocity data. In addition, we have developed a web-based gateway for data restoration, and we will demonstrate a practical application using available data. This work is supported by NSF Grant No. 1520825.
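    The reconstruction idea, stripped of the advection and anisotropy that make the authors' PDE interesting, reduces to harmonic inpainting: masked cells are iteratively replaced by the mean of their neighbours. This is only the diffusion skeleton, sketched under that simplifying assumption:

```python
def fill_gaps(field, mask, iterations=500):
    """Fill masked cells of a 2-D scalar field by Jacobi iteration of the
    discrete Laplace equation (harmonic inpainting); known cells are fixed."""
    rows, cols = len(field), len(field[0])
    known = [field[i][j] for i in range(rows) for j in range(cols)
             if not mask[i][j]]
    init = sum(known) / len(known)
    u = [[field[i][j] if not mask[i][j] else init for j in range(cols)]
         for i in range(rows)]
    for _ in range(iterations):
        nxt = [row[:] for row in u]
        for i in range(rows):
            for j in range(cols):
                if mask[i][j]:
                    nbrs = [u[a][b] for a, b in
                            ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                            if 0 <= a < rows and 0 <= b < cols]
                    nxt[i][j] = sum(nbrs) / len(nbrs)
        u = nxt
    return u

# Toy example: boundary ring carries a linear ramp u = j; the harmonic
# fill of the masked 3x3 interior should recover the same ramp.
field = [[float(j) for j in range(5)] for _ in range(5)]
mask = [[(1 <= i <= 3 and 1 <= j <= 3) for j in range(5)] for i in range(5)]
filled = fill_gaps(field, mask)
```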

  5. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high, one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, insofar as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated with a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
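    One common concrete device for weighting an entire set of candidate models is the Akaike weight, w_i ∝ exp(−Δ_i/2); the abstract does not specify the authors' weighting scheme, so this is only an illustrative sketch of the general idea:

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    where Delta_i is each model's AIC minus the minimum AIC."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

# Hypothetical AIC values for three competing covariance structures
ws = akaike_weights([100.0, 102.0, 110.0])
```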

  6. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that shifting the true value of one parameter relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimation methods outperform maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
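    A competing-risk system with independent Weibull causes can be simulated by drawing one failure time per cause and recording the minimum and its cause; the parameters below are illustrative, not the paper's simulation settings:

```python
import math
import random

def weibull_draw(rng, beta, eta):
    """Inverse-CDF Weibull sample: t = eta * (-ln U) ** (1 / beta)."""
    return eta * (-math.log(rng.random())) ** (1.0 / beta)

def simulate_competing(n, params, seed=0):
    """Each unit fails at the minimum over independent causes;
    returns (failure_time, cause_index) pairs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        times = [weibull_draw(rng, b, e) for b, e in params]
        t = min(times)
        out.append((t, times.index(t)))
    return out

# Two hypothetical causes: (shape, scale) = (1.5, 100) and (2.5, 120)
data = simulate_competing(5000, [(1.5, 100.0), (2.5, 120.0)])
share_cause0 = sum(1 for _, c in data if c == 0) / len(data)
```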

  7. Horizontal and Vertical Velocities Derived from the IDS Contribution to ITRF2014, and Comparisons with Geophysical Models

    Science.gov (United States)

    Moreaux, G.; Lemoine, F. G.; Argus, D. F.; Santamaria-Gomez, A.; Willis, P.; Soudarin, L.; Gravelle, M.; Ferrage, P.

    2016-01-01

    In the context of the 2014 realization of the International Terrestrial Reference Frame (ITRF2014), the International DORIS Service (IDS) has delivered to the IERS a set of 1140 weekly SINEX files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. From this set of weekly SINEX files, the IDS Combination Center estimated a cumulative DORIS position and velocity solution to obtain mean horizontal and vertical motion of 160 stations at 71 DORIS sites. The main objective of this study is to validate the velocities of the DORIS sites by comparison with external models or time series. Horizontal velocities are compared with two recent global plate models (GEODVEL 2010 and NNR-MORVEL56). Prior to the comparisons, DORIS horizontal velocities were corrected for Glacial Isostatic Adjustment (GIA) from the ICE-6G (VM5a) model. For more than half of the sites, the DORIS horizontal velocities differ from the global plate models by less than 2-3 mm/yr. For five of the sites (Arequipa, Dionysos/Gavdos, Manila, Santiago) with horizontal velocity differences with respect to these models larger than 10 mm/yr, comparisons with GNSS estimates show the veracity of the DORIS motions. Vertical motions from the DORIS cumulative solution are compared with the vertical velocities derived from the latest GPS cumulative solution over the time span 1995.0-2014.0 from the University of La Rochelle (ULR6) solution at 31 co-located DORIS-GPS sites. These two sets of vertical velocities show a correlation coefficient of 0.83. Vertical differences are larger than 2 mm/yr at 23 percent of the sites. At Thule the disagreement is explained by fine-tuned DORIS discontinuities in line with the mass variations of outlet glaciers. Furthermore, the time evolution of the vertical time series from the DORIS station in Thule shows trends similar to the GRACE equivalent water height.

  8. Horizontal and vertical velocities derived from the IDS contribution to ITRF2014, and comparisons with geophysical models

    Science.gov (United States)

    Moreaux, G.; Lemoine, F. G.; Argus, D. F.; Santamaría-Gómez, A.; Willis, P.; Soudarin, L.; Gravelle, M.; Ferrage, P.

    2016-10-01

    In the context of the 2014 realization of the International Terrestrial Reference Frame, the International DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) Service (IDS) has delivered to the IERS a set of 1140 weekly SINEX files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. From this set of weekly SINEX files, the IDS combination centre estimated a cumulative DORIS position and velocity solution to obtain mean horizontal and vertical motion of 160 stations at 71 DORIS sites. The main objective of this study is to validate the velocities of the DORIS sites by comparison with external models or time-series. Horizontal velocities are compared with two recent global plate models (GEODVEL 2010 and NNR-MORVEL56). Prior to the comparisons, DORIS horizontal velocities were corrected for Glacial Isostatic Adjustment from the ICE-6G (VM5a) model. For more than half of the sites, the DORIS horizontal velocities differ from the global plate models by less than 2-3 mm yr-1. For five of the sites (Arequipa, Dionysos/Gavdos, Manila and Santiago) with horizontal velocity differences with respect to these models larger than 10 mm yr-1, comparisons with GNSS estimates show the veracity of the DORIS motions. Vertical motions from the DORIS cumulative solution are compared with the vertical velocities derived from the latest GPS cumulative solution over the time span 1995.0-2014.0 from the University of La Rochelle solution at 31 co-located DORIS-GPS sites. These two sets of vertical velocities show a correlation coefficient of 0.83. Vertical differences are larger than 2 mm yr-1 at 23 percent of the sites. At Thule, the disagreement is explained by fine-tuned DORIS discontinuities in line with the mass variations of outlet glaciers. Furthermore, the time evolution of the vertical time-series from the DORIS station in Thule shows similar trends to the GRACE equivalent water height.

  9. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and to estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
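    A one-dimensional, two-component version of the EM procedure conveys the idea (the authors' model is multi-dimensional over position and radial velocity; here the data are synthetic and reduced to a single coordinate for brevity):

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-((x - mu) ** 2) / (2.0 * sd * sd)) \
        / (sd * math.sqrt(2.0 * math.pi))

def em_two_gaussians(xs, iters=60):
    """EM for a two-component 1-D Gaussian mixture; init at data extremes."""
    mu = [min(xs), max(xs)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: component responsibilities for each point
        resp = []
        for x in xs:
            p = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    return mu, sd, w

# Two well-separated synthetic "clusters" along one coordinate
rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(150)] \
   + [rng.gauss(10.0, 1.0) for _ in range(150)]
mu, sd, w = em_two_gaussians(xs)
```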

  10. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  11. Velocity Models of the Upper Mantle Beneath the MER, Somali Platform, and Ethiopian Highlands from Body Wave Tomography

    Science.gov (United States)

    Hariharan, A.; Keranen, K. M.; Alemayehu, S.; Ayele, A.; Bastow, I. D.; Eilon, Z.

    2016-12-01

    The Main Ethiopian Rift (MER) presents a unique opportunity to improve our understanding of an active continental rift. Here we use body wave tomography to generate compressional and shear wave velocity models of the region beneath the rift. The models help us understand the rifting process over the broader region around the MER, extending the geographic region beyond that captured in past studies. We use differential arrival times of body waves from teleseismic earthquakes and multi-channel cross correlation to generate travel time residuals relative to the global IASP91 1-D velocity model. The events used for the tomographic velocity model include 200 teleseismic earthquakes with moment magnitudes greater than 5.5 from our recent 2014-2016 deployment in combination with 200 earthquakes from the earlier EBSE and EAGLE deployments (Bastow et al. 2008). We use the finite-frequency tomography analysis of Schmandt et al. (2010), which uses a first Fresnel zone paraxial approximation to the Born theoretical kernel with spatial smoothing and model norm damping in an iterative LSQR algorithm. Results show a broad, slow region beneath the rift with a distinct low-velocity anomaly beneath the northwest shoulder. This robust and well-resolved low-velocity anomaly is visible at a range of depths beneath the Ethiopian plateau, within the footprint of Oligocene flood basalts, and near surface expressions of diking. We interpret this anomaly as a possible plume conduit, or a low-velocity finger rising from a deeper, larger plume. Within the rift, results are consistent with previous work, exhibiting rift segmentation and low velocities beneath the rift valley.

  12. Velocity Deficits in the Wake of Model Lemon Shark Dorsal Fins Measured with Particle Image Velocimetry

    Science.gov (United States)

    Terry, K. N.; Turner, V.; Hackett, E.

    2017-12-01

    Aquatic animals' morphology provides inspiration for human technological developments, as their bodies have evolved and become adapted for efficient swimming. Lemon sharks exhibit a uniquely large second dorsal fin that is nearly the same size as the first fin, the hydrodynamic role of which is unknown. This experimental study looks at the drag forces on a scale model of the Lemon shark's unique two-fin configuration in comparison to drag forces on a more typical one-fin configuration. The experiments were performed in a recirculating water flume, where the wakes behind the scale models are measured using particle image velocimetry. The experiments are performed at three different flow speeds for both fin configurations. The measured instantaneous 2D distributions of the streamwise and wall-normal velocity components are ensemble averaged to generate streamwise velocity vertical profiles. In addition, velocity deficit profiles are computed from the difference between these mean streamwise velocity profiles and the free stream velocity, which is computed based on measured flow rates during the experiments. Results show that the mean velocities behind the fin and near the fin tip are smallest and increase as the streamwise distance from the fin tip increases. The magnitude of velocity deficits increases with increasing flow speed for both fin configurations, but at all flow speeds, the two-fin configurations generate larger velocity deficits than the one-fin configurations. Because the velocity deficit is directly proportional to the drag force, these results suggest that the two-fin configuration produces more drag.
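    The link between velocity deficit and drag asserted above is the momentum-deficit integral, D = ρ∫u(U−u)dy per unit span; a trapezoid-rule sketch with an invented wake profile (not the measured PIV data):

```python
def drag_per_span(y, u, U, rho=1000.0):
    """Momentum-deficit drag per unit span: D = rho * integral u (U - u) dy,
    evaluated with the trapezoid rule; rho defaults to water (kg/m^3)."""
    total = 0.0
    for k in range(len(y) - 1):
        f0 = u[k] * (U - u[k])
        f1 = u[k + 1] * (U - u[k + 1])
        total += 0.5 * (f0 + f1) * (y[k + 1] - y[k])
    return rho * total

U = 0.5  # free-stream speed in m/s (illustrative)
y = [0.00, 0.01, 0.02, 0.03, 0.04]          # wall-normal positions, m
u_wake = [0.50, 0.40, 0.35, 0.40, 0.50]     # invented deficit behind a fin
D = drag_per_span(y, u_wake, U)
```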

  13. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    Science.gov (United States)

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information such as the standard to judge an individual's functional recovery or to predict falls. The development of a balance-test tool that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but little software exists for balance testing, and there are few studies on reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants completed tests of inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii Balance Board and with a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and the concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii Balance Board was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for balance assessment.
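    The two outcome measures can be computed directly from sampled COP coordinates: path length is the summed step distance and mean velocity is path length over recording duration. A minimal sketch (sampling rate and points invented):

```python
import math

def cop_metrics(points, fs):
    """COP path length (sum of step distances) and mean COP velocity
    (path length / recording duration) from (x, y) samples at fs Hz."""
    path = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        path += math.hypot(x1 - x0, y1 - y0)
    duration = (len(points) - 1) / fs
    return path, path / duration

# Toy 3-4-5 trajectory of two samples at an assumed 100 Hz
path, vel = cop_metrics([(0.0, 0.0), (3.0, 4.0)], fs=100.0)
```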

  14. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems' components to be non-homogeneous when required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
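    With independent components, allowing non-homogeneous (different-reliability) components inside a subsystem changes nothing in the basic series-parallel product formula, which is what the GA's fitness would evaluate; a sketch with invented reliabilities:

```python
def series_parallel_reliability(subsystems):
    """System reliability of a series arrangement of parallel subsystems:
    R = prod_i (1 - prod_j (1 - r_ij)), components independent.
    Each inner list may mix different (non-homogeneous) reliabilities."""
    R = 1.0
    for comps in subsystems:
        q = 1.0
        for r in comps:
            q *= (1.0 - r)  # probability all redundant components fail
        R *= (1.0 - q)      # subsystem works if any component works
    return R

# Hypothetical design: one redundant non-homogeneous pair, one single unit
R = series_parallel_reliability([[0.9, 0.8], [0.7]])  # 0.98 * 0.7 = 0.686
```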

  15. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model, in which we define the membership functions of each objective function, transform them into equivalent linear membership functions by first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  16. Probing dark energy models with extreme pairwise velocities of galaxy clusters from the DEUS-FUR simulations

    Science.gov (United States)

    Bouillot, Vincent R.; Alimi, Jean-Michel; Corasaniti, Pier-Stefano; Rasera, Yann

    2015-06-01

    Observations of colliding galaxy clusters with high relative velocities probe the tail of the halo pairwise velocity distribution, with the potential of providing a powerful test of cosmology. As an example, it has been argued that the discovery of the Bullet Cluster challenges standard Λ cold dark matter (ΛCDM) model predictions. Halo catalogues from N-body simulations have been used to estimate the probability of Bullet-like clusters. However, due to limited simulation volumes, previous studies had to rely on a Gaussian extrapolation of the pairwise velocity distribution to high velocities. Here, we perform a detailed analysis using the halo catalogues from the Dark Energy Universe Simulation Full Universe Runs (DEUS-FUR), which enable us to resolve the high-velocity tail of the distribution and study its dependence on the halo mass definition, redshift and cosmology. Building upon these results, we estimate the probability of Bullet-like systems in the framework of Extreme Value Statistics. We show that the tail of extreme pairwise velocities significantly deviates from that of a Gaussian; moreover, it carries an imprint of the underlying cosmology. We find the Bullet Cluster probability to be two orders of magnitude larger than previous estimates, thus easing the tension with the ΛCDM model. Finally, the comparison of the inferred probabilities for the different DEUS-FUR cosmologies suggests that observations of extreme interacting clusters can provide constraints on dark energy models complementary to standard cosmological tests.

  17. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking.

    Science.gov (United States)

    Wágner, Dorottya S; Ramin, Elham; Szabo, Peter; Dechesne, Arnaud; Plósz, Benedek Gy

    2015-07-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through altering the hindered settling velocity and yield stress parameter. Strikingly, this is not the case for Chloroflexi, occurring in more than double the abundance of M. parvicella, and forming filaments primarily protruding from the flocs. The transient and compression settling parameters show a comparably high variability, and no significant association with filamentous abundance. A two-dimensional, axi-symmetrical computational fluid dynamics (CFD) model was used to assess calibration scenarios to model filamentous bulking. Our results suggest that model predictions can significantly benefit from explicitly accounting for filamentous bulking by calibrating the hindered settling velocity function. Furthermore, accounting for the transient and compression settling velocity in the computational domain is crucial to improve model accuracy when modelling filamentous bulking. However, the case-specific calibration of transient and compression settling parameters as well as yield stress is not necessary, and an average parameter set - obtained under bulking and good settling

  18. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure, so as to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsional stress concentrations arising from high-cycle rotating bending and torsional loads. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data closely matching the field data with a minimal percentage of error and that, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading.
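A two-state Markov model with an absorbing failure state, of the general kind described, can be sketched as follows. The transition probabilities are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Two load states (bending, torsion) plus an absorbing failure state.
# States: 0 = bending, 1 = torsion, 2 = failed. Probabilities are made up;
# the bending state is given the higher per-cycle failure probability.
P = np.array([
    [0.70, 0.28, 0.02],    # from bending
    [0.30, 0.695, 0.005],  # from torsion
    [0.00, 0.00, 1.00],    # failure is absorbing
])

def reliability(n_cycles, start=np.array([0.5, 0.5, 0.0])):
    """Survival probability after n_cycles load cycles."""
    state = start @ np.linalg.matrix_power(P, n_cycles)
    return 1.0 - state[2]

# Reliability curve R(n): monotonically decreasing from 1.0
curve = [reliability(n) for n in (0, 10, 100, 1000)]
```

From such a curve one can then fit a Weibull distribution and derive the hazard-rate and bathtub representations mentioned in the abstract.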

  19. Modelling Reliability of Supply and Infrastructural Dependency in Energy Distribution Systems

    OpenAIRE

    Helseth, Arild

    2008-01-01

    This thesis presents methods and models for assessing reliability of supply and infrastructural dependency in energy distribution systems with multiple energy carriers. The three energy carriers of electric power, natural gas and district heating are considered. Models and methods for assessing reliability of supply in electric power systems are well documented, frequently applied in the industry and continuously being subject to research and improvement. On the contrary, there are compar...

  20. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbations components of a velocity model

    KAUST Repository

    Wu, Zedong

    2017-07-04

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of standard full-waveform inversion (FWI) by inverting for the background velocity model using the wave path of a single-scattered wavefield to an image. However, current RWI implementations usually neglect the multi-scattered energy, which causes artifacts in the image and in the update of the background. To improve existing RWI implementations by taking multi-scattered energy into consideration, we split the velocity model into background and perturbation components, integrate them directly in the wave equation, and formulate a new optimization problem for both components. In this case, the perturbed model is no longer a single-scattering model, but includes all scattering. By introducing a new, cheap implementation of scattering-angle enrichment, the separation of the background and perturbation components can be implemented efficiently. We optimize both components simultaneously to produce updates to the velocity model that are nonlinear with respect to both the background and the perturbation. The newly introduced perturbation model can absorb the non-smooth update of the background in a more consistent way. We apply the proposed approach to the Marmousi model with data that contain frequencies starting from 5 Hz to show that this method can converge to an accurate velocity starting from a linearly increasing initial velocity. Also, our proposed method works well when applied to a field data set.
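Schematically, the splitting described above can be written as follows. This is a generic sketch in our own notation, not necessarily the authors' exact formulation: the squared slowness is split into a background part and a perturbation, and both enter the wave equation directly,

```latex
\frac{1}{v^2(\mathbf{x})} = \frac{1}{v_0^2(\mathbf{x})} + \delta m(\mathbf{x}), \qquad
\left(\frac{1}{v_0^2} + \delta m\right)\frac{\partial^2 u}{\partial t^2} - \nabla^2 u = s ,
```

after which both components are updated jointly by minimizing the data misfit, $\min_{v_0,\,\delta m}\;\lVert d_{\mathrm{obs}} - d_{\mathrm{pred}}(v_0, \delta m)\rVert_2^2$, so that the perturbation is not restricted to single (Born) scattering.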

  1. A detonation model of high/low velocity detonation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Shaoming; Li, Chenfang; Ma, Yunhua; Cui, Junmin [Xian Modern Chemistry Research Institute, Xian, 710065 (China)

    2007-02-15

    A new detonation model that can simulate both high and low velocity detonations is established using the least action principle. The least action principle is valid for mechanics and thermodynamics associated with a detonation process. Therefore, the least action principle is valid in detonation science. In this model, thermodynamic equilibrium state is taken as the known final point of the detonation process. Thermodynamic potentials are analogous to mechanical ones, and the Lagrangian function in the detonation process is L=T-V. Under certain assumptions, the variation calculus of the Lagrangian function gives two solutions: the first one is a constant temperature solution, and the second one is the solution of an ordinary differential equation. A special solution of the ordinary differential equation is given. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  2. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  3. Upper mantle velocity structure beneath Italy from direct and secondary P-wave teleseismic tomography

    Directory of Open Access Journals (Sweden)

    P. De Gori

    1997-06-01

    Full Text Available High-quality teleseismic data digitally recorded by the National Seismic Network during 1988-1995 have been analysed to tomographically reconstruct the aspherical velocity structure of the upper mantle beneath the Italian region. To improve the quality and the reliability of the tomographic images, both direct (P, PKPdf) and secondary (pP, sP, PcP, PP, PKPbc, PKPab) travel-time data were used in the inversion. Over 7000 relative residuals were computed with respect to the IASP91 Earth velocity model and inverted using a modified version of the ACH technique. The incorporation of secondary-phase data resulted in a significant improvement of the sampling of the target volume and of the spatial resolution of the heterogeneous zones. The tomographic images show that most of the lateral variations in the velocity field are confined to the first ~250 km of depth. Strong low-velocity anomalies are found beneath the Po plain, Tuscany and Eastern Sicily in the depth range between 35 and 85 km. High-velocity anomalies dominate the upper mantle beneath the Central-Western Alps, Northern-Central Apennines and Southern Tyrrhenian sea at lithospheric depths between 85 and 150 km. At greater depth, positive anomalies are still observed below the northernmost part of the Apenninic chain and the Southern Tyrrhenian sea. Deeper anomalies present in the 3D velocity model computed by inverting only the first-arrivals dataset generally appear less pronounced in the new tomographic reconstructions. We interpret this as the result of the improved ray sampling, which reduces vertical smearing effects.

  4. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    Science.gov (United States)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network of Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  5. Radial velocity asymmetries from jets with variable velocity profiles

    International Nuclear Information System (INIS)

    Cerqueira, A. H.; Vasconcelos, M. J.; Velazquez, P. F.; Raga, A. C.; De Colle, F.

    2006-01-01

    We have computed a set of 3-D numerical simulations of radiatively cooling jets including variabilities in both the ejection direction (precession) and the jet velocity (intermittence), using the Yguazu-a code. In order to investigate the effects of jet rotation on the shape of the line profiles, we also introduce an initial toroidal rotation velocity profile. Since the Yguazu-a code includes an atomic/ionic network, we are able to compute the emission coefficients for several emission lines, and we generate line profiles for the Hα, [O I]λ6300, [S II]λ6716 and [N II]λ6548 lines. Using initial parameters that are suitable for the DG Tau microjet, we show that the computed radial velocity shift for the medium-velocity component of the line profile as a function of distance from the jet axis is strikingly similar for rotating and non-rotating jet models

  6. Reliability and Validity Assessment of a Linear Position Transducer

    Directory of Open Access Journals (Sweden)

    Manuel V. Garnacho-Castaño

    2015-03-01

    Full Text Available The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and the T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly for AV and AP) make this device a useful tool for monitoring resistance training.
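The reliability statistics used in studies like this (ICC and CV) can be computed from test-retest data along the following lines. The velocities below are made up for illustration, and the ICC form shown is the standard two-way random-effects, absolute-agreement, single-measure ICC(2,1); the paper's exact ICC variant may differ:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) from a (n_subjects, k_sessions) array via two-way ANOVA."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)              # subject means
    col_means = Y.mean(axis=0)              # session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cv_percent(Y):
    """Within-subject CV: per-subject SD over mean, averaged, in %."""
    return float(np.mean(Y.std(axis=1, ddof=1) / Y.mean(axis=1)) * 100)

# Hypothetical test-retest mean velocities (m/s): 5 lifters, 2 sessions
Y = np.array([[0.82, 0.84], [0.65, 0.66], [0.91, 0.88],
              [0.74, 0.75], [0.58, 0.60]])
print(round(icc_2_1(Y), 3), round(cv_percent(Y), 2))
```

High ICC together with low CV is what qualifies a variable (here, a velocity measure) as reliable for monitoring training.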

  7. Small velocity and finite temperature variations in kinetic relaxation models

    KAUST Repository

    Markowich, Peter; Jü ngel, Ansgar; Aoki, Kazuo

    2010-01-01

    A small Knudsen number analysis of a kinetic equation in the diffusive scaling is performed. The collision kernel is of BGK type with a general local Gibbs state. Assuming that the flow velocity is of the order of the Knudsen number, a Hilbert expansion yields a macroscopic model with finite temperature variations, whose complexity lies in between the hydrodynamic and the energy-transport equations. Its mathematical structure is explored and macroscopic models for specific examples of the global Gibbs state are presented. © American Institute of Mathematical Sciences.

  8. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: Materials/Device Degradation; Degradation Kinetics; Time-To-Failure Modeling; Statistical Tools; Failure-Rate Modeling; Accelerated Testing; Ramp-To-Failure Testing; Important Failure Mechanisms for Integrated Circuits; Important Failure Mechanisms for Mechanical Components; Conversion of Dynamic Stresses into Static Equivalents; Small Design Changes Producing Major Reliability Improvements; Screening Methods; Heat Generation and Dissipation; Sampling Plans and Confidence Intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  9. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  10. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  11. Double path-integral migration velocity analysis: a real data example

    International Nuclear Information System (INIS)

    Costa, Jessé C; Schleicher, Jörg

    2011-01-01

    Path-integral imaging forms an image with no knowledge of the velocity model by summing over the migrated images obtained for a set of migration velocity models. Double path-integral imaging migration extracts the stationary velocities, i.e. those velocities at which common-image gathers align horizontally, as a byproduct. An application of the technique to a real data set demonstrates that quantitative information about the time migration velocity model can be determined by double path-integral migration velocity analysis. Migrated images using interpolations with different regularizations of the extracted velocities prove the high quality of the resulting time-migration velocity information. The so-obtained velocity model can then be used as a starting model for subsequent velocity analysis tools like migration tomography or other tomographic methods

  12. On new cautious structural reliability models in the framework of imprecise probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev; Kozine, Igor

    2010-01-01

    New imprecise structural reliability models are described in this paper. They are developed based on imprecise Bayesian inference and are the imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability measures when the number of events of interest or observations is very small. The main feature of the models is that prior ignorance is not modelled by a fixed single prior distribution, but by a class of priors which is defined by upper and lower probabilities that can converge as statistical data...
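The imprecise Dirichlet model mentioned in the abstract yields simple closed-form probability bounds; here is a minimal sketch using Walley's standard interval for an event probability with learning parameter s (the choice s = 2 below is a common convention, not taken from this paper):

```python
# Imprecise Dirichlet model (IDM) bounds for the probability of an event:
# with n occurrences out of N trials and learning parameter s, prior
# ignorance is a class of priors, giving interval-valued posteriors.

def idm_bounds(n, N, s=2.0):
    lower = n / (N + s)
    upper = (n + s) / (N + s)
    return lower, upper

# Few data -> wide interval; more data at the same rate -> bounds converge.
print(idm_bounds(1, 3))    # (0.2, 0.6) with s = 2
print(idm_bounds(10, 30))
```

The interval width s/(N+s) shrinks as observations accumulate, which is exactly the "cautious" behaviour the paper exploits when data are scarce.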

  13. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
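For reference, the circular-velocity curve implied by the NFW density profile (standard textbook form; notation ours) is

```latex
\rho(r) = \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^2}, \qquad
V_c^2(r) = \frac{G\,M(<r)}{r}, \qquad
M(<r) = 4\pi \rho_s r_s^3 \left[\ln\!\left(1 + \frac{r}{r_s}\right) - \frac{r/r_s}{1 + r/r_s}\right],
```

whose inner rise ($V_c \propto \sqrt{r\,\ln(1+r/r_s)/r}$ steepening near the centre) is what conflicts with the slowly rising rotation curves of the LSB galaxies discussed above.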

  14. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Full Text Available Methods of combined redundancy are considered. Reliability models of systems that account for the restoration and preventive-maintenance parameters of the system blocks are described. Expressions for the average number of preventive-maintenance actions and for the availability coefficient of the system blocks are given.

  15. Prerequisites for Accurate Monitoring of River Discharge Based on Fixed-Location Velocity Measurements

    Science.gov (United States)

    Kästner, K.; Hoitink, A. J. F.; Torfs, P. J. J. F.; Vermeulen, B.; Ningsih, N. S.; Pramulya, M.

    2018-02-01

    River discharge has to be monitored reliably for effective water management. As river discharge cannot be measured directly, it is usually inferred from the water level. This practice is unreliable at places where the relation between water level and flow velocity is ambiguous. In such a case, the continuous measurement of the flow velocity can improve the discharge prediction. The emergence of horizontal acoustic Doppler current profilers (HADCPs) has made it possible to continuously measure the flow velocity. However, the profiling range of HADCPs is limited, so that a single instrument can only partially cover a wide cross section. The total discharge still has to be determined with a model. While the limitations of rating curves are well understood, there is not yet a comprehensive theory to assess the accuracy of discharge predicted from velocity measurements. Such a theory is necessary to discriminate which factors influence the measurements, and to improve instrument deployment as well as discharge prediction. This paper presents a generic method to assess the uncertainty of discharge predicted from range-limited velocity profiles. The theory shows that a major source of error is the variation of the ratio between the local and cross-section-averaged velocity. This variation is large near the banks, where HADCPs are usually deployed and can limit the advantage gained from the velocity measurement. We apply our theory at two gauging stations situated in the Kapuas River, Indonesia. We find that at one of the two stations the index velocity does not outperform a simple rating curve.
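The index-velocity idea discussed in the abstract can be sketched as follows. The rectangular cross-section and the linear rating coefficients below are hypothetical, purely for illustration of how a range-limited HADCP velocity is turned into discharge:

```python
# Index-velocity method sketch: Q = A(h) * v_mean, where the
# cross-section-averaged velocity v_mean is predicted from the
# instrument's "index" velocity via a calibrated linear rating.

def cross_section_area(h, width=400.0):
    """Toy rectangular cross-section: area (m^2) from stage h (m)."""
    return width * h

def index_velocity_rating(v_index, a=0.05, b=0.9):
    """v_mean = a + b * v_index; a, b calibrated against ADCP transects."""
    return a + b * v_index

def discharge(h, v_index):
    return cross_section_area(h) * index_velocity_rating(v_index)

q = discharge(h=8.0, v_index=1.2)   # m^3/s
```

The paper's central point is that the error of such a prediction is dominated by variability in the ratio between the local (measured) and cross-section-averaged velocity, which is largest near the banks where HADCPs are typically mounted.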

  16. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite-based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) plus noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series, and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectral analyses and maximum likelihood estimation is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise), to the TrigNet time series. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power-law model in combination with a seasonal signal, in agreement with previous studies.
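An Allan-variance-style analysis of a time series can be sketched like this. This is the plain non-overlapping Allan variance applied to synthetic white noise; the authors' actual estimator for the GPS series may differ in detail:

```python
import numpy as np

def allan_variance(y, taus, dt=1.0):
    """Non-overlapping Allan variance of samples y at averaging times taus
    (integer multiples of the sampling interval dt)."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))             # samples per averaging bin
        n_bins = len(y) // m
        means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

rng = np.random.default_rng(0)
white = rng.normal(size=100000)              # unit-variance white noise
av = allan_variance(white, taus=[1, 10, 100])
# For white noise the Allan variance falls roughly as 1/tau; colored
# noise (flicker, random walk) produces flatter or rising slopes, which
# is how the method discriminates noise types in position time series.
```

The slope of the Allan variance versus averaging time is what identifies the dominant noise type, and hence the appropriate uncertainty for the fitted velocity.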

  17. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is

  18. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs) are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes, not on individual data. Unlike in conventional reliability schemes, retransmission is inapplicable in a BAWSN, since it would only lead to late data arrival, which is not acceptable for a time-critical application. Time-driven applications require high data reliability to maintain detection and responses. Hence, the transmission reliability for the BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability based on the critical time. The proposed model is evaluated through simulation and then compared with the experimental results conducted in our existing Active Care Loop Framework (ACLF). We further show the effect of the sink buffer on transmission reliability after a detailed study of various other co-existing parameters.

  19. Engineering model for low-velocity impacts of multi-material cylinder on a rigid boundary

    Directory of Open Access Journals (Sweden)

    Delvare F.

    2012-08-01

    Full Text Available Modern ballistic problems involve the impact of multi-material projectiles. In order to model the impact phenomenon, different levels of analysis can be developed: empirical, engineering and simulation models. Engineering models are important because they allow an understanding of the physical phenomena of the impacting materials, while some simplifications can be assumed to reduce the number of variables. For example, engineering models have been developed to approximate the behavior of single cylinders impacting a rigid surface; however, the cylinder deformation depends on its instantaneous velocity. In this work, an analytical model is proposed for the behavior of a single cylinder, composed of two different metal cylinders, impacting a rigid surface. The materials are modeled as rigid-perfectly plastic. The differential equation systems are solved using a numerical Runge-Kutta method. Results are compared with computational simulations using the AUTODYN 2D hydrocode, and good agreement between the engineering model and the simulation results was found. The model is limited to impact velocities below the transition velocity at the interface point, given by the hydrodynamic pressure proposed by Tate.
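The flavour of such an engineering model can be shown with a much simpler single-material case: a rigid-perfectly-plastic cylinder decelerated by a constant yield force, integrated with classic fourth-order Runge-Kutta. All material numbers are invented, and this is a drastic simplification of the paper's two-metal ODE system:

```python
# Simplified rigid-perfectly-plastic deceleration of a single cylinder:
#   m dv/dt = -Y * A   (constant resisting force from the yield stress Y
#   acting on cross-section A), integrated with classic RK4 until v = 0.

def rk4_step(f, t, v, h):
    k1 = f(t, v)
    k2 = f(t + h / 2, v + h / 2 * k1)
    k3 = f(t + h / 2, v + h / 2 * k2)
    k4 = f(t + h, v + h * k3)
    return v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(v0=300.0, mass=0.05, yield_stress=400e6, area=7.85e-5, h=1e-7):
    """Integrate until the cylinder stops; return the stopping time (s).
    Hypothetical steel-like numbers: 50 g cylinder, 10 mm diameter."""
    decel = lambda t, v: -yield_stress * area / mass
    t, v = 0.0, v0
    while v > 0.0:
        v = rk4_step(decel, t, v, h)
        t += h
    return t

t_stop = simulate()
# Analytic check for constant force: t = m*v0/(Y*A) ~ 4.78e-4 s
```

For the constant-force case RK4 is exact up to the final step, so the simulation reproduces the analytic stopping time; the paper's two-material model replaces the constant force with velocity-dependent terms, which is where a numerical integrator becomes necessary.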

  20. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  1. Accelerated radial Fourier-velocity encoding using compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    Hilbert, Fabian; Han, Dietbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wech, Tobias; Koestler, Herbert [Wuerzburg Univ. (Germany). Inst. of Radiology; Wuerzburg Univ. (Germany). Comprehensive Heart Failure Center (CHFC)

    2014-10-01

    Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of the spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high-resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different velocity encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity
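The compressed-sensing principle used here can be illustrated with a generic sparse-recovery toy: ISTA (iterative shrinkage-thresholding) on a random Gaussian measurement matrix. This is not the article's radial MRI reconstruction pipeline, just the underlying idea that a sparse signal can be recovered from far fewer measurements than unknowns:

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 48, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) * 3
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random undersampled sensing matrix
y = A @ x_true                             # 48 measurements of a length-128 signal
x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the MRI setting the sensing matrix is an undersampled (radial) Fourier operator and sparsity holds in a transform domain, but the reconstruction principle is the same.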

  2. Accelerated radial Fourier-velocity encoding using compressed sensing

    International Nuclear Information System (INIS)

    Hilbert, Fabian; Hahn, Dietbert

    2014-01-01

    Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity

  3. Accelerated radial Fourier-velocity encoding using compressed sensing.

    Science.gov (United States)

    Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert

    2014-09-01

    Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels on the order of the voxel size. Thus
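
    The Compressed Sensing reconstruction step described above can be illustrated with a minimal sketch (not the authors' radial CINE pipeline): a sparse toy "velocity spectrum" is recovered from undersampled Fourier samples by iterative soft-thresholding (ISTA). All sizes, the sampling pattern, and the sparsity level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
# Sparse toy "velocity spectrum": only a few velocity bins are occupied
x = np.zeros(n)
x[[10, 40, 90]] = [1.0, 0.6, 0.8]

# Undersampled measurements: keep only 40 of 128 rows of the unitary DFT
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
keep = rng.choice(n, size=40, replace=False)
A = F[keep, :]
y = A @ x

# ISTA (iterative soft-thresholding), a basic compressed-sensing solver:
# gradient step on the data misfit, then shrink small coefficients toward 0
lam = 0.01
xh = np.zeros(n, dtype=complex)
for _ in range(500):
    xh = xh + A.conj().T @ (y - A @ xh)
    mag = np.abs(xh)
    xh = np.maximum(mag - lam, 0.0) * np.exp(1j * np.angle(xh))

# The occupied velocity bins are recovered from ~31% of the samples
recovered = sorted(int(i) for i in np.argsort(np.abs(xh))[-3:])
print(recovered)
```

    Because the kept DFT rows are orthonormal, a unit gradient step is stable; real reconstructions add a sparsifying transform and radial k-space weighting.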

  4. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Science.gov (United States)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    Good understanding of mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from the petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as the input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional cost of acquiring a shear log.
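
    As a concrete example of the empirical-correlation route mentioned above, Castagna's mudrock line (Vs = 0.8621 Vp - 1.1724, velocities in km/s) is one widely used Vp-Vs correlation; it is illustrative here, not necessarily one of the correlations used in the paper. The (Vp, Vs) pairs below are synthetic stand-ins for log data, used to show how the coefficients would be refit locally by least squares.

```python
import numpy as np

# Castagna's mudrock line: a published empirical Vp-Vs correlation
# (velocities in km/s). Coefficients are the mudrock values, not
# values fitted to the well data described in the abstract.
def vs_castagna(vp_kms):
    return 0.8621 * vp_kms - 1.1724

# Hypothetical local calibration: refit the linear coefficients to
# (Vp, Vs) pairs from logs by least squares (data here are synthetic).
vp = np.array([3.0, 3.5, 4.0, 4.5, 5.0])                    # km/s
vs = vs_castagna(vp) + np.array([0.02, -0.01, 0.0, 0.01, -0.02])
a, b = np.polyfit(vp, vs, 1)                                 # refit slope/intercept
print(round(a, 4), round(b, 4))
```

    With real logs, the refit coefficients would differ from the published ones, which is exactly why the paper recommends acquiring shear sonic data where possible.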

  5. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Directory of Open Access Journals (Sweden)

    Shahoo Maleki

    2014-06-01

    Full Text Available Good understanding of mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from the petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as the input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional cost of acquiring a shear log.

  6. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Abdel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  7. Effects of Intraluminal Thrombus on Patient-Specific Abdominal Aortic Aneurysm Hemodynamics via Stereoscopic Particle Image Velocity and Computational Fluid Dynamics Modeling

    Science.gov (United States)

    Chen, Chia-Yuan; Antón, Raúl; Hung, Ming-yang; Menon, Prahlad; Finol, Ender A.; Pekkan, Kerem

    2014-01-01

    The pathology of the human abdominal aortic aneurysm (AAA) and its relationship to the later complication of intraluminal thrombus (ILT) formation remains unclear. The hemodynamics in the diseased abdominal aorta are hypothesized to be a key contributor to the formation and growth of ILT. The objective of this investigation is to establish a reliable 3D flow visualization method, with corresponding high-confidence validation tests, in order to provide insight into the basic hemodynamic features for a better understanding of hemodynamics in AAA pathology and to seek potential treatments for AAA diseases. A stereoscopic particle image velocimetry (PIV) experiment was conducted using transparent patient-specific experimental AAA models (with and without ILT) at three axial planes. Results show that before ILT formation, a 3D vortex was generated in the AAA phantom. This geometry-related vortex was not observed after the formation of ILT, indicating its possible role in the subsequent appearance of ILT in this patient. It may indicate that a longer residence time of recirculated blood flow in the aortic lumen due to this vortex caused sufficient shear-induced platelet activation to develop ILT and maintain uniform flow conditions. Additionally, two computational fluid dynamics (CFD) modeling codes (Fluent and an in-house cardiovascular CFD code) were compared with the two-dimensional, three-component velocity stereoscopic PIV data. Results showed that correlation coefficients of the out-of-plane velocity data between PIV and both CFD methods are greater than 0.85, demonstrating good quantitative agreement. The stereoscopic PIV study can be utilized as a test-case template for ongoing efforts in cardiovascular CFD solver development. Likewise, it is envisaged that the patient-specific data may provide a benchmark for further studying hemodynamics of actual AAA, ILT, and their convolution effects under physiological conditions for clinical applications. PMID:24316984
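
    The PIV-vs-CFD comparison metric quoted above (correlation coefficient > 0.85) can be sketched as follows: sample both out-of-plane velocity fields on a common grid and compute a Pearson correlation. The two fields below are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Synthetic stand-ins for a "measured" (PIV) and "simulated" (CFD)
# out-of-plane velocity component sampled on a common grid of points.
rng = np.random.default_rng(3)
piv = rng.normal(0.0, 1.0, 500)
cfd = piv + rng.normal(0.0, 0.3, 500)   # CFD tracks PIV with some misfit

# Pearson correlation coefficient between the two fields
r = np.corrcoef(piv, cfd)[0, 1]
print(round(r, 2))
```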

  8. A generalized formulation for noise-based seismic velocity change measurements

    Science.gov (United States)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.

    2017-12-01

    The observation of continuous seismic velocity changes is a powerful tool for detecting seasonal variations in crustal structure, volcanic unrest, co- and post-seismic evolution of stress in fault areas, or the effects of fluid injection. The standard approach for measuring such velocity changes relies on comparison of travel times in the coda of a set of seismic signals, usually noise-based cross-correlations retrieved at different dates, with a reference trace, usually an average over all dates. Good stability of the noise sources in both space and time is then the main assumption for reliable measurements. Unfortunately, these conditions are often not fulfilled, as happens when ambient-noise sources are non-stationary, such as the emissions of low-frequency volcanic tremors. We propose a generalized formulation for retrieving continuous time series of noise-based seismic velocity changes without any arbitrary reference cross-correlation function. We set up a general framework for future applications of this technique by performing synthetic tests. In particular, we study the reliability of the retrieved velocity changes in the case of seasonal-type trends, transient effects (similar to those produced as a result of an earthquake or a volcanic eruption), and sudden velocity drops and recoveries caused by transient local source emissions. Finally, we apply this approach to a real dataset of noise cross-correlations. We choose the Klyuchevskoy volcanic group (Kamchatka) as a case study where the recorded wavefield is hampered by loss of data and dominated by strongly localized volcanic tremor sources. Despite these wavefield contaminations, we retrieve clear seismic velocity drops associated with the eruptions of the Klyuchevskoy and Tolbachik volcanoes in 2010 and 2012, respectively.
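
    One standard way such dv/v measurements are made (the kind of reference-based approach the paper generalizes) is the stretching technique: a homogeneous velocity change dilates coda travel times, so the dilation factor that best correlates a current trace with the reference estimates the change. The synthetic coda and the 1% change below are invented for illustration.

```python
import numpy as np

# Reference coda trace over a late time window (synthetic damped oscillation)
t = np.linspace(5.0, 20.0, 2000)                 # seconds
ref = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.1 * t)

# A uniform relative velocity change rescales arrival times, so the
# "current" trace looks like the reference dilated along the time axis.
true_dvv = 0.01
cur = np.interp(t * (1 + true_dvv), t, ref)

# Stretching: grid-search the dilation factor maximizing the correlation
trials = np.linspace(-0.03, 0.03, 601)
cc = [np.corrcoef(np.interp(t * (1 + e), t, ref), cur)[0, 1] for e in trials]
est = float(trials[int(np.argmax(cc))])
print(est)
```

    Real applications refine the grid maximum by interpolation and relate the dilation to dv/v via dt/t = -dv/v; the sign convention depends on how the dilation is defined.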

  9. Critique of the use of deposition velocity in modeling indoor air quality

    International Nuclear Information System (INIS)

    Nazaroff, W.W.; Weschler, C.J.

    1993-01-01

    Among the potential fates of indoor air pollutants are a variety of physical and chemical interactions with indoor surfaces. In deterministic mathematical models of indoor air quality, these interactions are usually represented as a first-order loss process, with the loss rate coefficient given as the product of the surface-to-volume ratio of the room times a deposition velocity. In this paper, the validity of this representation of surface-loss mechanisms is critically evaluated. From a theoretical perspective, the idea of a deposition velocity is consistent with the following representation of an indoor air environment. Pollutants are well-mixed throughout a core region which is separated from room surfaces by boundary layers. Pollutants migrate through the boundary layers by a combination of diffusion (random motion resulting from collisions with surrounding gas molecules), advection (transport by net motion of the fluid), and, in some cases, other transport mechanisms. The rate of pollutant loss to a surface is governed by a combination of the rate of transport through the boundary layer and the rate of reaction at the surface. The deposition velocity expresses the pollutant flux density (mass or moles deposited per area per time) to the surface divided by the pollutant concentration in the core region. This concept has substantial value to the extent that the flux density is proportional to core concentration. Published results from experimental and modeling studies of fine particles, radon decay products, ozone, and nitrogen oxides are used as illustrations of both the strengths and weaknesses of deposition velocity as a parameter to indicate the rate of indoor air pollutant loss on surfaces. 66 refs., 5 tabs
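
    The first-order loss representation critiqued above can be written down in a few lines: with a well-mixed core, dC/dt = -(S/V) vd C, so the concentration decays exponentially with rate k = (S/V) vd. The surface-to-volume ratio and deposition velocity below are illustrative values, not ones from the paper.

```python
import numpy as np

# First-order surface-loss model: dC/dt = -(S/V) * vd * C
S_over_V = 3.0        # surface-to-volume ratio of the room, 1/m (illustrative)
vd = 1.0e-4           # deposition velocity, m/s (illustrative)
k = S_over_V * vd     # first-order loss-rate coefficient, 1/s

C0 = 100.0            # initial core concentration (arbitrary units)
t = 3600.0            # one hour
C = C0 * np.exp(-k * t)
print(round(C, 2))    # concentration remaining after one hour
```

    The paper's point is that this single number vd hides both boundary-layer transport and surface reactivity, so the exponential form holds only when flux stays proportional to core concentration.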

  10. Regional travel-time residual studies and station correction from 1-D velocity models for some stations around Peninsular Malaysia and Singapore

    Science.gov (United States)

    Osagie, Abel U.; Nawawi, Mohd.; Khalil, Amin Esmail; Abdullah, Khiruddin

    2017-06-01

    can compensate for heterogeneous velocity structure near individual stations. The computed average travel-time residuals can reduce errors attributable to station correction in the inversion of hypocentral parameters around the Peninsula. Due to the heterogeneity occasioned by the numerous fault systems, a better 1-D velocity model for the Peninsula is desired for more reliable hypocentral inversion and other seismic investigations.

  11. Intra-observer reliability and agreement of manual and digital orthodontic model analysis.

    Science.gov (United States)

    Koretsi, Vasiliki; Tingelhoff, Linda; Proff, Peter; Kirschneck, Christian

    2018-01-23

    Digital orthodontic model analysis is gaining acceptance in orthodontics, but its reliability depends on the digitalisation hardware and software used. We thus investigated the intra-observer reliability and agreement/conformity of a particular digital model analysis work-flow in relation to traditional manual plaster model analysis. Forty-eight plaster casts of the upper/lower dentition were collected. Virtual models were obtained with orthoX®scan (Dentaurum) and analysed with ivoris®analyze3D (Computer konkret). Manual model analyses were done with a dial caliper (0.1 mm). Common parameters were measured on each plaster cast and its virtual counterpart five times each by an experienced observer. We assessed intra-observer reliability within method (ICC), agreement/conformity between methods (Bland-Altman analyses and Lin's concordance correlation), and changing bias (regression analyses). Intra-observer reliability was substantial within each method (ICC ≥ 0.7), except for five manual outcomes (12.8 per cent). Bias between methods was statistically significant, but less than 0.5 mm for 87.2 per cent of the outcomes. In general, larger tooth sizes were measured digitally. The total-difference parameters for maxilla and mandible had wide limits of agreement (-3.25/6.15 and -2.31/4.57 mm), but bias between methods was mostly smaller than intra-observer variation within each method, with substantial conformity of manual and digital measurements in general. No changing bias was detected. Although both work-flows were reliable, the investigated digital work-flow proved to be more reliable and yielded on average larger tooth sizes. Averaged differences between methods were within 0.5 mm for directly measured outcomes, but wide ranges are expected for some computed space parameters due to cumulative error. © The Author 2017. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved.
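
    The Bland-Altman analysis used above is straightforward to sketch: the bias is the mean paired difference and the 95% limits of agreement are bias ± 1.96 SD of the differences. The paired values below are synthetic, not the study's measurements; the positive bias mimics the finding that digital tooth sizes ran slightly larger.

```python
import numpy as np

# Synthetic paired measurements (mm): manual caliper vs. digital model
manual  = np.array([8.1, 7.9, 9.2, 10.0, 8.6, 7.5, 9.8, 8.9])
digital = np.array([8.3, 8.1, 9.3, 10.3, 8.8, 7.8, 10.0, 9.2])

diff = digital - manual
bias = diff.mean()                 # mean difference (systematic bias)
sd = diff.std(ddof=1)              # SD of the paired differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
print(round(bias, 3), [round(x, 3) for x in loa])
```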

  12. Age-dependent reliability model considering effects of maintenance and working conditions

    International Nuclear Information System (INIS)

    Martorell, Sebastian; Sanchez, Ana; Serradell, Vicente

    1999-01-01

    Nowadays, there is some doubt about building new nuclear power plants (NPPs). Instead, there is growing interest in analyzing the possibility of extending current NPP operation, where life management programs play an important role. The evolution of NPP safety depends on the evolution of the reliability of its safety components, which, in turn, is a function of their age over the NPP operational life. In this paper, a new age-dependent reliability model is presented, which includes parameters related to surveillance and maintenance effectiveness and to the working conditions of the equipment, both environmental and operational. This model may be used to support NPP life management and life extension programs by improving or optimizing surveillance and maintenance tasks using risk and cost models based on such an age-dependent reliability model. The results of the sensitivity study in the example application show that the selection of the most appropriate maintenance strategy depends directly on these parameters. Very important differences are therefore expected to appear under certain circumstances, particularly in comparison with models that do not consider maintenance effectiveness and working conditions simultaneously.
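
    One common way such maintenance-effectiveness parameters enter an age-dependent model (a hedged sketch, not necessarily the paper's formulation) is the "proportional age reduction" idea: each overhaul of effectiveness eps rewinds the component's effective age by a factor (1 - eps), and the hazard is evaluated at that effective age. The Weibull parameters and eps below are illustrative.

```python
# Weibull hazard evaluated at an age-reduced effective age (illustrative values)
beta, alpha = 2.0, 10.0                       # Weibull shape / scale (years)
hazard = lambda w: (beta / alpha) * (w / alpha) ** (beta - 1)

def effective_age(t, period=2.0, eps=0.5):
    """Effective age at calendar time t with an overhaul every `period` years."""
    w, elapsed = 0.0, 0.0
    while elapsed + period <= t:
        w = (1 - eps) * (w + period)          # overhaul rewinds part of the age
        elapsed += period
    return w + (t - elapsed)

# With eps = 0.5, the hazard at t = 10 y is far below the no-maintenance hazard
h_maint = hazard(effective_age(10.0))
h_none = hazard(10.0)
print(h_maint < h_none)
```

    Working conditions would enter such a model as multipliers on the hazard (e.g. a proportional-hazards factor per environmental stressor), which is why the paper stresses treating both effects simultaneously.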

  13. Rayleigh wave group velocity and shear wave velocity structure in the San Francisco Bay region from ambient noise tomography

    Science.gov (United States)

    Li, Peng; Thurber, Clifford

    2018-06-01

    We derive new Rayleigh wave group velocity models and a 3-D shear wave velocity model of the upper crust in the San Francisco Bay region using an adaptive grid ambient noise tomography algorithm and 6 months of continuous seismic data from 174 seismic stations from multiple networks. The resolution of the group velocity models is 0.1°-0.2° for short periods (˜3 s) and 0.3°-0.4° for long periods (˜10 s). The new shear wave velocity model of the upper crust reveals a number of important structures. We find distinct velocity contrasts at the Golden Gate segment of the San Andreas Fault, the West Napa Fault, central part of the Hayward Fault and southern part of the Calaveras Fault. Low shear wave velocities are mainly located in Tertiary and Quaternary basins, for instance, La Honda Basin, Livermore Valley and the western and eastern edges of Santa Clara Valley. Low shear wave velocities are also observed at the Sonoma volcanic field. Areas of high shear wave velocity include the Santa Lucia Range, the Gabilan Range and Ben Lomond Plutons, and the Diablo Range, where Franciscan Complex or Salinian rocks are exposed.

  14. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

    The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for our modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries. Besides the costs involved, such failures set back the projects. This raises the question of how to quantify software reliability and how to use it for improvement and control of the software development and maintenance process. This paper discusses the need for systematic approaches to measuring and assuring software reliability, which consumes a major share of project development resources. It covers reliability models with a focus on 'Reliability Growth'. It includes data collection on reliability, statistical estimation and prediction, metrics and attributes of product architecture, design, software development, and the operational environment. Besides its use for operational decisions like deployment, it includes guiding software architecture, development, testing, and verification and validation. (author)
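
    A classic example of the reliability-growth model class discussed above (illustrative here, not necessarily the model the paper adopts) is the Goel-Okumoto NHPP model: the expected cumulative number of faults found by test time t is m(t) = a(1 - exp(-b t)), where a is the total fault content and b the per-fault detection rate. Parameters below are invented, not estimated from real failure data.

```python
import numpy as np

# Goel-Okumoto software reliability growth model (illustrative parameters)
a, b = 120.0, 0.05                    # total faults, detection rate (1/week)
m = lambda t: a * (1.0 - np.exp(-b * t))   # expected faults found by time t

# Expected residual faults after 40 weeks of testing: a release criterion
# could require this number to fall below a target before deployment.
residual = a - m(40.0)
print(round(residual, 1))
```

    In practice a and b are fitted to observed failure-time data by maximum likelihood, and the fitted curve guides the deployment decisions mentioned in the abstract.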

  15. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  16. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to fail-safe generation. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT's contribution to reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to fail-safe generation. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
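
    A textbook-style sketch of how a fault coverage figure enters component unavailability (a simplification for illustration, not the paper's integrated model): faults caught by the FTTs (coverage c) are revealed almost immediately and repaired within the MTTR, while uncovered faults remain latent for, on average, half the periodic surveillance test interval. All numbers below are illustrative.

```python
# Simplified unavailability with fault coverage (illustrative values only)
lam = 1.0e-5        # component failure rate, 1/h
c = 0.9             # fraction of faults detected by fault-tolerant techniques
mttr = 8.0          # mean time to repair a revealed fault, h
T_test = 4380.0     # periodic surveillance test interval, h

# Covered faults: down for MTTR. Uncovered faults: latent for T_test/2 on average.
U = c * lam * mttr + (1 - c) * lam * T_test / 2.0
print(f"{U:.2e}")
```

    Even with 90% coverage, the uncovered-fault term dominates here, which illustrates why coverage (and the detection-to-fail-safe process behind it) is such a sensitive variable in the paper's analysis.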

  17. Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.

    Science.gov (United States)

    Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti

    2017-06-28

    The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
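
    The ICC computation underlying statements like "excellent for digital measurements" can be sketched with a one-way model for k repeated measurements per cast: ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) MSW), from the between- and within-subject mean squares. The data below are synthetic (10 "casts", 5 repeats), not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, k = 10, 5
truth = rng.uniform(5.0, 12.0, n_subj)               # true widths, mm
x = truth[:, None] + rng.normal(0.0, 0.1, (n_subj, k))  # repeated measures

grand = x.mean()
# Between-subject and within-subject mean squares (one-way ANOVA)
msb = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n_subj - 1)
msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n_subj * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
print(round(icc, 3))
```

    With measurement noise much smaller than the spread between casts, the ICC approaches 1, matching the "excellent" reliability reported for the digital method.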

  18. Defect evolution in cosmology and condensed matter quantitative analysis with the velocity-dependent one-scale model

    CERN Document Server

    Martins, C J A P

    2016-01-01

    This book sheds new light on topological defects in widely differing systems, using the Velocity-Dependent One-Scale Model to better understand their evolution. Topological defects – cosmic strings, monopoles, domain walls or others – necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.
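
    The VOS model for a string network evolves a correlation length L and an RMS velocity v; a common form of its equations (taking the momentum parameter k constant for simplicity, where the full model uses a velocity-dependent k(v)) is dL/dt = (1 + v²)HL + (c̃/2)v and dv/dt = (1 - v²)(k/L - 2Hv). The toy integration below, with illustrative k and c̃ in a radiation era (H = 1/(2t)), shows the model's key prediction: the network reaches linear scaling, L ∝ t with v constant.

```python
import numpy as np

# Euler integration of the VOS equations in ln(t), radiation era (beta = 1/2)
k, ctilde, beta = 0.7, 0.23, 0.5     # illustrative momentum / loop-chopping values
t, L, v = 1.0, 0.5, 0.5              # arbitrary initial conditions
du = 1.0e-4                          # step in ln(t)
g = float(np.exp(du))
while t < 1.0e9:
    H = beta / t
    dL = (1.0 + v * v) * H * L + 0.5 * ctilde * v
    dv = (1.0 - v * v) * (k / L - 2.0 * H * v)
    L += t * dL * du
    v += t * dv * du
    t *= g
print(round(L / t, 3), round(v, 3))  # L/t and v settle to scaling constants
```

    The attractor values follow analytically from setting L = εt and dv/dt = 0: v² = k/(k + c̃) and ε = k/(2βv) for these constants, independent of the initial conditions.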

  19. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operation experience, and -) accelerated aging tests. In order to introduce the time-aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  20. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operation experience, and -) accelerated aging tests. In order to introduce the time-aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  1. Non-invasive aortic systolic pressure and pulse wave velocity estimation in a primary care setting: An in silico study.

    Science.gov (United States)

    Guala, Andrea; Camporeale, Carlo; Ridolfi, Luca; Mesin, Luca

    2017-04-01

    Everyday clinical cardiovascular evaluation is still largely based on brachial systolic and diastolic pressures. However, several clinical studies have demonstrated the higher diagnostic capacity of the aortic pressure, as well as the need to assess the aortic mechanical properties (e.g., by measuring the aortic pulse wave velocity). In order to fill this gap, we propose to exploit a set of easy-to-obtain physical characteristics to estimate the aortic pressure and pulse wave velocity. To this aim, a large population of virtual subjects is created by a validated mathematical model of the cardiovascular system. Quadratic regression models are then fitted and statistically selected in order to obtain reliable estimates of the aortic pressure and pulse wave velocity starting from knowledge of the subject's age, height, weight, brachial pressure, photoplethysmographic measures, and either an electrocardiogram or a phonocardiogram. The results are very encouraging and foster clinical studies aiming to apply a similar technique to a real population. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
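
    The quadratic-regression idea above can be sketched with ordinary least squares on a design matrix containing linear, squared and interaction terms. The predictors, coefficients and noise below are synthetic stand-ins, not the paper's virtual population or its selected model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 80, n)               # years
sbp = rng.uniform(100, 160, n)             # brachial systolic pressure, mmHg
# Synthetic "true" relation for an aortic PWV-like target (m/s)
pwv = (4.0 + 0.05 * age + 0.01 * sbp + 0.0005 * age * sbp
       + rng.normal(0.0, 0.1, n))

# Quadratic design matrix: intercept, linear, squared and interaction terms
X = np.column_stack([np.ones(n), age, sbp, age**2, sbp**2, age * sbp])
coef, *_ = np.linalg.lstsq(X, pwv, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((pwv - pred) ** 2) / np.sum((pwv - pwv.mean()) ** 2)
print(round(r2, 3))
```

    The paper additionally performs statistical selection among such terms; here the full quadratic basis is kept for simplicity.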

  2. Measurement of velocity deficit at the downstream of a 1:10 axial hydrokinetic turbine model

    Energy Technology Data Exchange (ETDEWEB)

    Gunawan, Budi [ORNL]; Neary, Vincent S [ORNL]; Hill, Craig [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414]; Chamorro, Leonardo [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414]

    2012-01-01

    Wake recovery constrains the downstream spacing and density of turbines that can be deployed in turbine farms and limits the amount of energy that can be produced at a hydrokinetic energy site. This study investigates the wake recovery downstream of a 1:10 axial flow turbine model using a pulse-to-pulse coherent Acoustic Doppler Profiler (ADP). In addition, turbine inflow and outflow velocities were measured for calculating the thrust on the turbine. The results show that the depth-averaged longitudinal velocity recovers to 97% of the inflow velocity at 35 turbine diameters (D) downstream of the turbine.
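
    The thrust-from-velocities step mentioned above can be sketched with classical actuator-disk (momentum) theory, which is a standard simplification rather than necessarily the authors' method: the velocity at the disk is the mean of the inflow and far-wake velocities, and thrust equals the momentum deficit across the streamtube. The rotor diameter and velocities below are illustrative.

```python
import numpy as np

rho = 1000.0                     # water density, kg/m^3
D = 0.5                          # rotor diameter, m (illustrative 1:10 scale)
A = np.pi * D**2 / 4             # swept area, m^2
U_in, U_out = 1.0, 0.6           # inflow and wake velocities, m/s (illustrative)

# Actuator-disk momentum balance: disk velocity = (U_in + U_out) / 2,
# thrust = mass flux through the disk times the velocity deficit.
Q = A * (U_in + U_out) / 2       # volume flux through the disk, m^3/s
T = rho * Q * (U_in - U_out)     # thrust, N
print(round(T, 1))
```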

  3. UNVEILING THE DETAILED DENSITY AND VELOCITY STRUCTURES OF THE PROTOSTELLAR CORE B335

    Energy Technology Data Exchange (ETDEWEB)

    Kurono, Yasutaka; Saito, Masao; Kamazaki, Takeshi; Morita, Koh-Ichiro; Kawabe, Ryohei, E-mail: yasutaka.kurono@nao.ac.jp [Chile Observatory, National Astronomical Observatory of Japan, Osawa 2-21-1, Mitaka, Tokyo 181-8588 (Japan)

    2013-03-10

    We present an observational study of the protostellar core B335 harboring a low-mass Class 0 source. The observations of the H¹³CO⁺(J = 1-0) line emission were carried out using the Nobeyama 45 m telescope and the Nobeyama Millimeter Array. Our combined image of the interferometer and single-dish data depicts detailed structures of the dense envelope within the core. We found that the core has a radial density profile of n(r) ∝ r^(-p), with a reliable difference in the power-law indices between the outer and inner regions of the core: p ≈ 2 for r ≳ 4000 AU and p ≈ 1.5 for r ≲ 4000 AU. The dense core shows a slight overall velocity gradient of ~1.0 km s⁻¹ over the scale of 20,000 AU across the outflow axis. We believe that this velocity gradient represents a solid-body-like rotation of the core. The dense envelope has a quite symmetrical velocity structure with a remarkable line broadening toward the core center, which is especially prominent in the position-velocity diagram across the outflow axis. Model calculations of position-velocity diagrams reproduce the observational results well using the collapse model of an isothermal sphere, in which the core has an inner free-fall region and an outer region conserving the conditions at the formation stage of the central stellar object. We derived a central stellar mass of ~0.1 M☉ and suggest a small inward velocity, v_r ≈ 0 km s⁻¹ for r ≳ r_inf, in the outer core at ≳4000 AU. We conclude that our data can be well explained by gravitational collapse with a quasi-static initial condition, such as Shu's model, or by the isothermal collapse of a marginally critical Bonnor-Ebert sphere.

  4. Improvements in seismic event locations in a deep western U.S. coal mine using tomographic velocity models and an evolutionary search algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Adam Lurka; Peter Swanson [Central Mining Institute, Katowice (Poland)

    2009-09-15

    Methods of improving seismic event locations were investigated as part of a research study aimed at reducing ground control safety hazards. Seismic event waveforms collected with a 23-station three-dimensional sensor array during longwall coal mining provide the data set used in the analyses. A spatially variable seismic velocity model is constructed using seismic event sources in a passive tomographic method. The resulting three-dimensional velocity model is used to relocate seismic event positions. An evolutionary optimization algorithm is implemented and used in both the velocity model development and in seeking improved event location solutions. Results obtained using the different velocity models are compared. The combination of the tomographic velocity model development and evolutionary search algorithm provides improvement to the event locations. 13 refs., 5 figs., 4 tabs.

  5. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

    For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in risk and reliability studies of nuclear power plants. Despite the widespread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects.

  6. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U.S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure when common failure modes are considered. (U.S.)
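    Fleming's extension of classical reliability theory is commonly described as a beta-factor treatment of common-cause failures. The sketch below illustrates that style of calculation with invented numbers; it is not necessarily the exact formulation of the report.

```python
# Beta-factor style common-cause sketch: a fraction beta of each train's
# failure probability is assumed common to all redundant trains.
def redundant_failure_prob(q_total: float, beta: float, n: int) -> float:
    """Failure probability of an n-train redundant system (fails only if
    all n trains fail), with a common-cause fraction beta."""
    q_ind = (1.0 - beta) * q_total  # independent failures, defeated by redundancy
    q_ccf = beta * q_total          # common-cause failures, defeat all trains at once
    return q_ind ** n + q_ccf

# Two redundant diesel generators with an illustrative 1e-2 per-demand
# failure probability.
print(redundant_failure_prob(1e-2, 0.0, 2))  # independence assumption
print(redundant_failure_prob(1e-2, 0.1, 2))  # 10% common-cause fraction
```

    Even a modest common-cause fraction dominates: the second result is roughly an order of magnitude larger than the first, mirroring the "significant increase" reported in the abstract.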

  7. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena) … identification. Application of the proposed method can be found in many real world systems.

  8. Multi-state reliability for coolant pump based on dependent competitive failure model

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2013-01-01

    By taking into account the effect of degradation due to internal vibration and external shocks, and based on the service environment and degradation mechanism of the nuclear power plant coolant pump, a multi-state reliability model of the coolant pump was proposed for a system that involves a competitive failure process between shocks and degradation. Using this model, the degradation state probability and system reliability were obtained under consideration of internal vibration and external shocks for the degraded coolant pump. It provides an effective method for reliability analysis of coolant pumps in nuclear power plants based on the operating environment. The results can provide a decision-making basis for design changes and maintenance optimization. (authors)
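    A dependent competitive failure process of this kind can be sketched with a Monte Carlo model in which continuous degradation and random shocks both contribute to a shared damage level. Every parameter below (wear rate, shock rate, damage threshold) is illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def reliability(t, n_sim=20000):
    """Monte Carlo reliability of a unit under competing failure modes:
    continuous degradation (linear wear) plus random external shocks
    (Poisson arrivals, each adding an exponential damage increment).
    The unit fails when total damage exceeds a threshold."""
    wear_rate = rng.normal(1.0, 0.2, n_sim).clip(min=0)   # degradation per unit time
    n_shocks = rng.poisson(0.5 * t, n_sim)                # shock counts over [0, t]
    shock_damage = np.array([rng.exponential(2.0, k).sum() for k in n_shocks])
    total = wear_rate * t + shock_damage
    return np.mean(total < 50.0)                          # survival probability

for t in (10, 30, 50):
    print(t, reliability(t))
```

    A multi-state version would replace the single threshold by several damage bands, giving the probability of each degradation state rather than a binary survive/fail outcome.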

  9. Traveling waves in an optimal velocity model of freeway traffic

    Science.gov (United States)

    Berg, Peter; Woods, Andrew

    2001-03-01

    Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linearly stable stream of cars of one headway into a linearly stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating traveling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linearly stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137].
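    The class of models discussed here can be reproduced in a few lines: a Bando-type optimal velocity model relaxes each car's speed toward a headway-dependent desired speed. The sketch below integrates a ring road with a small perturbation; the parameters are illustrative and deliberately chosen in the linearly stable regime.

```python
import numpy as np

def ov(h):
    """Optimal velocity function (Bando et al. form): desired speed per headway."""
    return np.tanh(h - 2.0) + np.tanh(2.0)

def simulate(n_cars=30, road_len=90.0, a=2.0, dt=0.01, steps=20000):
    """Integrate dv_i/dt = a * (V(headway_i) - v_i) on a ring road.
    a is the inverse response time; all values are illustrative."""
    x = np.arange(n_cars) * (road_len / n_cars)   # uniform initial spacing
    x[0] += 0.5                                    # small perturbation
    v = np.full(n_cars, ov(road_len / n_cars))
    for _ in range(steps):
        headway = np.roll(x, -1) - x
        headway[-1] += road_len                    # wrap around the ring
        v += a * (ov(headway) - v) * dt            # relaxation toward V(headway)
        x += v * dt
    return headway, v

h, v = simulate()
print(f"headway range after relaxation: {h.min():.2f} .. {h.max():.2f}")
```

    Lowering a (a slower response time) below the linear stability threshold makes the same perturbation grow into stop-start-like waves instead of decaying.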

  10. Cluster-based upper body marker models for three-dimensional kinematic analysis: Comparison with an anatomical model and reliability analysis.

    Science.gov (United States)

    Boser, Quinn A; Valevicius, Aïda M; Lavoie, Ewen B; Chapman, Craig S; Pilarski, Patrick M; Hebert, Jacqueline S; Vette, Albert H

    2018-04-27

    Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture, tracking markers placed on anatomical landmarks. This method is associated with limitations including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves the tracking of rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body has, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model, with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves which were comparable to and highly correlated with those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models, and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous alternative to anatomical models when the objective is to assess trends in movement behavior. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication, in NetLogo by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  12. Site-response Estimation by 1D Heterogeneous Velocity Model using Borehole Log and its Relationship to Damping Factor

    International Nuclear Information System (INIS)

    Sato, Hiroaki

    2014-01-01

    In the Niigata area, which suffered from several large earthquakes such as the 2007 Chuetsu-oki earthquake, geophysical observations that elucidate the S-wave structure of the underground are advancing. Modeling of the S-wave velocity structure in the subsurface is underway to enable simulation of long-period ground motion. A one-dimensional velocity model obtained by inverse analysis of microtremors is sufficiently appropriate for the long-period site response but not for the short-period response, which is important for ground-motion evaluation at NPP sites. High-frequency site responses may be controlled by the strength of heterogeneity of the underground structure, because the heterogeneity of the 1D model plays an important role in estimating high-frequency site responses and is strongly related to the damping factor of the 1D layered velocity model. (author)
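    The link between a 1D layered velocity model, its damping factor, and the site response can be illustrated with the classic single-layer SH-wave amplification function, where damping enters through a complex shear-wave velocity. The layer and half-space parameters below are illustrative, not the Niigata values.

```python
import numpy as np

def sh_amplification(f, h=100.0, vs=400.0, rho=1.9,
                     vs_b=1500.0, rho_b=2.3, damping=0.02):
    """SH-wave amplification of one soil layer over an elastic half-space
    (surface motion / outcrop motion). Viscoelastic damping is folded in
    as a complex shear-wave velocity. All parameters are illustrative."""
    vs_c = vs * (1 + 1j * damping)            # complex velocity ~ damping ratio
    k = 2 * np.pi * f * h / vs_c              # complex layer travel phase
    imp = (rho * vs_c) / (rho_b * vs_b)       # impedance ratio layer/half-space
    return 1.0 / np.abs(np.cos(k) + 1j * imp * np.sin(k))

f = np.linspace(0.1, 10, 500)
amp = sh_amplification(f)
f0 = f[np.argmax(amp)]
print(f"fundamental resonance ~ {f0:.2f} Hz (quarter-wavelength Vs/4H = "
      f"{400/(4*100):.2f} Hz)")
```

    Raising the damping ratio lowers the resonance peaks, which is the sense in which the damping factor of the 1D model controls the high-frequency site response.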

  13. Role of frameworks, models, data, and judgment in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hannaman, G W

    1986-05-01

    Many advancements in the methods for treating human interactions in PRA studies have occurred in the last decade. These advancements appear to increase the capability of PRAs to extend beyond just the assessment of the human's importance to safety. However, variations in the application of these advanced models, data, and judgments in recent PRAs make quantitative comparisons among studies extremely difficult. This uncertainty in the analysis diminishes the usefulness of the PRA study for upgrading procedures, enhancing training, simulator design, technical specification guidance, and for aiding design of the man-machine interface. Hence, there is a need for a framework to guide analysts in incorporating human interactions into the PRA systems analyses so that future users of a PRA study will have a clear understanding of the approaches, models, data, and assumptions which were employed in the initial study. This paper describes the role of the systematic human action reliability procedure (SHARP) in providing a road map through the complex terrain of human reliability that promises to improve the reproducibility of such analysis in the areas of selecting the models, data, representations, and assumptions. Also described is the role that a human cognitive reliability model can have in collecting data from simulators and helping analysts assign human reliability parameters in a PRA study. Use of these systematic approaches to perform or upgrade existing PRAs promises to make PRA studies more useful as risk management tools.

  14. Allowable Pressure In Soils and Rocks by Seismic Wave Velocities

    International Nuclear Information System (INIS)

    Tezcan, S.; Keceli, A.; Oezdemir, Z.

    2007-01-01

    Firstly, the historical background is presented for the determination of the ultimate bearing capacity of shallow foundations. The principles of plastic equilibrium used in the classical formulation of the ultimate bearing capacity are reviewed, followed by a discussion of the sources of approximation inherent in the classical theory. Secondly, based on a variety of case histories of site investigations, including extensive borehole data, laboratory testing, and geophysical prospecting, an empirical formulation is proposed for the determination of the allowable bearing capacity of shallow foundations. The proposed expression corroborates consistently with the results of the classical theory and is proven to be reliable and safe, also from the viewpoint of maximum allowable settlements. It consists of only two soil parameters, namely, the in situ measured shear wave velocity and the unit weight. The unit weight may also be determined with sufficient accuracy by means of another empirical expression, using the P-wave velocity. It is indicated that once the shear and P-wave velocities are measured in situ by an appropriate geophysical survey, the allowable bearing capacity is determined reliably through a single-step operation. Such an approach is considerably cost- and time-saving in practice.
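    The single-step estimate described above has the general form q_a = c · γ · V_s, with the unit weight itself estimated empirically from the P-wave velocity. The coefficients in the sketch below are placeholders chosen for illustration only, not the calibrated values of the paper.

```python
def unit_weight_from_vp(vp_mps, gamma0=16.0, k=0.002):
    """Empirical unit weight [kN/m^3] from P-wave velocity [m/s].
    gamma0 and k are placeholder coefficients, NOT the paper's values."""
    return gamma0 + k * vp_mps

def allowable_pressure(vs_mps, gamma_kn_m3, c=0.1):
    """Allowable bearing pressure [kPa] taken proportional to gamma * Vs.
    The coefficient c is a placeholder, NOT the paper's calibrated value."""
    return c * gamma_kn_m3 * vs_mps

# Illustrative soil: Vp = 1500 m/s, Vs = 250 m/s.
gamma = unit_weight_from_vp(1500.0)
print(f"unit weight ~ {gamma:.1f} kN/m^3, "
      f"allowable pressure ~ {allowable_pressure(250.0, gamma):.0f} kPa")
```

    The appeal of the approach is visible in the structure: two geophysical measurements feed two one-line formulas, with no plastic-equilibrium computation.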

  15. Analytical and Mathematical Modeling and Optimization of Fiber Metal Laminates (FMLs) subjected to low-velocity impact via combined response surface regression and zero-one programming

    Directory of Open Access Journals (Sweden)

    Faramarz Ashenai Ghasemi

    Full Text Available This paper presents analytical and mathematical modeling and optimization of the dynamic behavior of fiber metal laminates (FMLs) subjected to low-velocity impact. The deflection-to-thickness (w/h) ratio has been identified through the governing equations of the plate, which are solved using first-order shear deformation theory and the Fourier series method. The interaction between the impactor and the plate is modeled with a two-degrees-of-freedom spring-mass system and Choi's linearized Hertzian contact model. Thirty-one experiments were conducted on samples of different layer sequences and volume fractions of Al plies in the composite structures. A reliable fitness function in the form of a strict linear mathematical function was constructed. Using an ordinary least-squares method, the response regression coefficients were estimated, and a zero-one programming technique was proposed to optimize the FML plate behavior subject to any technological or cost restrictions. The results indicated that FML plate behavior is highly affected by the layer sequences and volume fractions of Al plies. The results also showed that embedding Al plies at the outer layers of the structure results in a significantly better response under low-velocity impact than embedding them in the middle, or in the middle and outer, layers of the structure.

  16. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    Science.gov (United States)

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

    The purpose of this study was to analyze whether different combinations of trials, as well as different mathematical models, can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV - aerobic parameter) and anaerobic paddling capacity (APC - anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; and Non-linear = time-velocity). Linear 1 was chosen for comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (range R² = 0.96-1.00). Repeated measures ANOVA pointed out differences among the mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first (1) and the fourth (4) predictive trials (150 m = lowest; 600 m = highest) were similar to and highly correlated (r = 0.98 for CV and r = 0.96 for APC) with the SC. In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as different mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best fits of regression; furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one for calculating the estimates from the critical velocity protocol.
Considering this, the abyss between science
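    The Linear 1 (distance-time) model fits d = APC + CV·t, so the slope of an ordinary least-squares line through the predictive trials gives the critical velocity and the intercept gives the anaerobic paddling capacity. The trial times below are invented for illustration; only the distances match the protocol.

```python
import numpy as np

# Predictive trials: distance [m] and hypothetical completion times [s]
# (illustrative numbers, not the study's data).
d = np.array([150.0, 300.0, 450.0, 600.0])
t = np.array([40.0, 85.0, 135.0, 185.0])

# Linear 1 (distance-time model): d = APC + CV * t
# slope = critical velocity (CV), intercept = anaerobic paddling capacity (APC).
A = np.column_stack([np.ones_like(t), t])
(apc, cv), *_ = np.linalg.lstsq(A, d, rcond=None)
print(f"CV  = {cv:.2f} m/s")
print(f"APC = {apc:.1f} m")
```

    Dropping trials from the design matrix (e.g., keeping only the 150 m and 600 m rows) reproduces the kind of trial-combination comparison the study performs.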

  17. Crustal and mantle velocity models of southern Tibet from finite frequency tomography

    Science.gov (United States)

    Liang, Xiaofeng; Shen, Yang; Chen, Yongshun John; Ren, Yong

    2011-02-01

    Using traveltimes of teleseismic body waves recorded by several temporary local seismic arrays, we carried out finite-frequency tomographic inversions to image the three-dimensional velocity structure beneath southern Tibet to examine the roles of the upper mantle in the formation of the Tibetan Plateau. The results reveal a region of relatively high P and S wave velocity anomalies extending from the uppermost mantle to at least 200 km depth beneath the Higher Himalaya. We interpret this high-velocity anomaly as the underthrusting Indian mantle lithosphere. There is a strong low P and S wave velocity anomaly that extends from the lower crust to at least 200 km depth beneath the Yadong-Gulu rift, suggesting that rifting in southern Tibet is probably a process that involves the entire lithosphere. Intermediate-depth earthquakes in southern Tibet are located at the top of an anomalous feature in the mantle with a low Vp, a high Vs, and a low Vp/Vs ratio. One possible explanation for this unusual velocity anomaly is the ongoing granulite-eclogite transformation. Together with the compressional stress from the collision, eclogitization and the associated negative buoyancy force offer a plausible mechanism that causes the subduction of the Indian mantle lithosphere beneath the Higher Himalaya. Our tomographic model and the observation of north-dipping lineations in the upper mantle suggest that the Indian mantle lithosphere has been broken laterally in the direction perpendicular to the convergence beneath the north-south trending rifts and subducted in a progressive, piecewise and subparallel fashion with the current one beneath the Higher Himalaya.

  18. The Three-Dimensional Velocity Distribution of Wide Gap Taylor-Couette Flow Modelled by CFD

    Directory of Open Access Journals (Sweden)

    David Shina Adebayo

    2016-01-01

    Full Text Available A numerical investigation is conducted for the flow between two concentric cylinders with a wide gap, relevant to bearing chamber applications. This wide gap configuration has received comparatively less attention than narrow gap journal bearing type geometries. The flow in the gap between an inner rotating cylinder and an outer stationary cylinder has been modelled as an incompressible flow using an implicit finite volume RANS scheme with the realisable k-ε model. The model flow is above the critical Taylor number at which axisymmetric counterrotating Taylor vortices are formed. The tangential velocity profiles at all axial locations are different from typical journal bearing applications, where the velocity profiles are quasilinear. The predicted results led to two significant findings of impact in rotating machinery operations. Firstly, the axial variation of the tangential velocity gradient induces an axially varying shear stress, resulting in local bands of enhanced work input to the working fluid. This is likely to cause unwanted heat transfer on the surface in high torque turbomachinery applications. Secondly, the radial inflow at the axial end-wall boundaries is likely to promote the transport of debris to the junction between the end-collar and the rotating cylinder, causing the build-up of fouling in the seal.
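    Whether such a configuration sits above the critical Taylor number can be checked directly. One common definition of the Taylor number is used below (several variants exist in the literature); the geometry and rotation rate are illustrative, not the paper's.

```python
def taylor_number(omega, r_i, r_o, nu):
    """Taylor number for flow between concentric cylinders, using one common
    definition: Ta = omega^2 * r_i * (r_o - r_i)^3 / nu^2.

    omega : inner-cylinder angular velocity [rad/s]
    r_i, r_o : inner/outer radii [m]
    nu : kinematic viscosity [m^2/s]
    """
    d = r_o - r_i
    return omega**2 * r_i * d**3 / nu**2

# Illustrative wide-gap case: inner radius 0.05 m, outer 0.1 m, water-like fluid.
ta = taylor_number(omega=10.0, r_i=0.05, r_o=0.1, nu=1e-6)
print(f"Ta = {ta:.3e} (Taylor vortices expected above Ta_c on the order of 1.7e3)")
```

    The quoted critical value of about 1.7e3 strictly applies to the narrow-gap limit; for wide gaps the threshold differs, which is one reason the wide-gap regime studied here needs CFD rather than textbook correlations.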

  19. 3-D Upper-Mantle Shear Velocity Model Beneath the Contiguous United States Based on Broadband Surface Wave from Ambient Seismic Noise

    Science.gov (United States)

    Xie, Jun; Chu, Risheng; Yang, Yingjie

    2018-05-01

    Ambient noise seismic tomography has been widely used to study crustal and upper-mantle shear velocity structures. Most studies, however, concentrate on short periods (…) structure on a continental scale. We use broadband Rayleigh wave phase velocities to obtain the 3-D Vs structure beneath the contiguous United States in the period band of 10-150 s. During the inversion, the 1-D shear wave velocity profile is parameterized using B-splines at each grid point and is inverted with a nonlinear Markov chain Monte Carlo method. Then, a 3-D shear velocity model is constructed by assembling all the 1-D shear velocity profiles. Our model is overall consistent with existing models which are based on multiple datasets or on data from earthquakes. Our model, along with the other post-USArray models, reveals lithosphere structures in the upper mantle which are consistent with the geological tectonic background (e.g., the craton root and regional upwelling provinces). The model resolves lithosphere structures comparably with many published results and can be used for future detailed regional or continental studies and analysis.

  20. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large scale structures. Structural loading variations over the lifetime of the plant are considered to be more difficult to analyse than those for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare event situation is briefly mentioned, together with aspects of proof testing and normal and upset loading conditions. (orig.)

  1. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has remained challenging despite a huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of software would provide better grounds for reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I&C systems have many better technical features, such as easier configurability and maintainability, than analog I&C systems. Digital I&C systems are also drift-free, and incorporation of new features is much easier. Rules and regulations for safe operation of NPPs have been established and are practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of I&C systems in NPPs introduces challenges and uncertainty into the reliability analysis methods for digital systems/components because software failure mechanisms are still unclear.

  2. The Dynamics of M15: Observations of the Velocity Dispersion Profile and Fokker-Planck Models

    Science.gov (United States)

    Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Murphy, B. W.; Seitzer, P. O.; Callanan, P. J.; Rutten, R. G. M.; Charles, P. A.

    1997-05-01

    We report a new measurement of the velocity dispersion profile within 1' (3 pc) of the center of the globular cluster M15 (NGC 7078), using long-slit spectra from the 4.2 m William Herschel Telescope at La Palma Observatory. We obtained spatially resolved spectra for a total of 23 slit positions during two observing runs. During each run, a set of parallel slit positions was used to map out the central region of the cluster; the position angle used during the second run was orthogonal to that used for the first. The spectra are centered in wavelength near the Ca II infrared triplet at 8650 Å, with a spectral range of about 450 Å. We determined radial velocities by cross-correlation techniques for 131 cluster members. A total of 32 stars were observed more than once. Internal and external comparisons indicate a velocity accuracy of about 4 km s⁻¹. The velocity dispersion profile rises from about σ = 7.2 ± 1.4 km s⁻¹ near 1' from the center of the cluster to σ = 13.9 ± 1.8 km s⁻¹ at 20". Inside of 20", the dispersion remains approximately constant at about 10.2 ± 1.4 km s⁻¹ with no evidence for a sharp rise near the center. This last result stands in contrast with that of Peterson, Seitzer, & Cudworth who found a central velocity dispersion of 25 ± 7 km s⁻¹, based on a line-broadening measurement. Our velocity dispersion profile is in good agreement with those determined in the recent studies of Gebhardt et al. and Dubath & Meylan. We have developed a new set of Fokker-Planck models and have fitted these to the surface brightness and velocity dispersion profiles of M15. We also use the two measured millisecond pulsar accelerations as constraints. The best-fitting model has a mass function slope of x = 0.9 (where 1.35 is the slope of the Salpeter mass function) and a total mass of 4.9 × 10⁵ M⊙. This model contains approximately 10⁴ neutron stars (3% of the total mass), the majority of which lie within 6" (0.2 pc) of the cluster center. Since the

  3. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
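    Of the NHPP models the book covers, the Goel-Okumoto form m(t) = a(1 − e^(−bt)) is the simplest to fit: a is the total expected number of faults and b the per-fault detection rate. The sketch below fits it to invented fault-count data by least squares, using a closed-form a for each candidate b.

```python
import numpy as np

# Goel-Okumoto NHPP model: expected cumulative faults m(t) = a * (1 - exp(-b*t)).
# Hypothetical test data: weeks of testing vs. cumulative faults found.
t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([12, 21, 27, 32, 35, 38, 39, 40], dtype=float)

def fit_goel_okumoto(t, y):
    """Least-squares fit by grid search over b; for fixed b, the optimal a
    has the closed form a = (y . f) / (f . f) with f = 1 - exp(-b*t)."""
    best = (np.inf, None, None)
    for b in np.linspace(0.01, 2.0, 2000):
        f = 1.0 - np.exp(-b * t)
        a = (y @ f) / (f @ f)
        sse = np.sum((y - a * f) ** 2)
        if sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

a, b = fit_goel_okumoto(t, y)
print(f"total expected faults a = {a:.1f}, detection rate b = {b:.3f}")
print(f"estimated faults remaining after week 8: {a * np.exp(-b * 8):.1f}")
```

    Maximum-likelihood estimation is the more usual fitting route for NHPP models; plain least squares is used here only to keep the sketch dependency-free.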

  4. Constraining fault interpretation through tomographic velocity gradients: application to northern Cascadia

    Directory of Open Access Journals (Sweden)

    K. Ramachandran

    2012-02-01

    Full Text Available Spatial gradients of tomographic velocities are seldom used in the interpretation of subsurface fault structures. This study shows that spatial velocity gradients can be used effectively in identifying subsurface discontinuities in the horizontal and vertical directions. Three-dimensional velocity models constructed through tomographic inversion of active source and/or earthquake traveltime data are generally built from an initial 1-D velocity model that varies only with depth. Regularized tomographic inversion algorithms impose constraints on the roughness of the model that help to stabilize the inversion process. Final velocity models obtained from regularized tomographic inversions have smooth three-dimensional structures that are required by the data. Final velocity models are usually analyzed and interpreted either as perturbation velocity models or as absolute velocity models. Compared to perturbation velocity models, absolute velocity models have the advantage of providing constraints on lithology. Both types of velocity model lack the ability to provide sharp constraints on subsurface faults. An interpretational approach utilizing spatial velocity gradients, applied to northern Cascadia, shows that subsurface faults that are not clearly interpretable from velocity model plots can be identified by sharp contrasts in velocity gradient plots. This interpretation located the Tacoma, Seattle, Southern Whidbey Island, and Darrington Devil's Mountain faults much more clearly. The Coast Range Boundary fault, previously hypothesized on the basis of sedimentological and tectonic observations, is inferred clearly from the gradient plots. Many of the fault locations imaged from gradient data correlate with earthquake hypocenters, indicating their seismogenic nature.
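    The gradient interpretation described in the abstract is easy to reproduce: differentiate a smooth 3-D velocity model numerically and look for bands of large horizontal gradient. The synthetic model below (a depth gradient plus a velocity step across a vertical "fault") is invented for illustration.

```python
import numpy as np

# Synthetic 3-D velocity model [km/s] on a regular grid, with a vertical
# discontinuity at x = 0 mimicking a fault juxtaposing two blocks.
x = np.linspace(-10, 10, 81)   # km
y = np.linspace(-10, 10, 81)
z = np.linspace(0, 20, 41)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

v = 4.0 + 0.1 * Z                               # velocity increases with depth
v += np.where(X > 0, 0.5, 0.0)                  # +0.5 km/s step across the "fault"
v += 0.05 * np.random.default_rng(0).normal(size=v.shape)  # measurement-like noise

# Spatial gradients along x, y, z with the correct grid spacings.
dv_dx, dv_dy, dv_dz = np.gradient(v, x, y, z)

# The fault shows up as a band of large horizontal gradient near x = 0,
# while the background vertical gradient stays near 0.1 (km/s)/km.
ix0 = np.argmin(np.abs(x))                      # grid column nearest x = 0
print("max |dv/dx| near the fault:", np.abs(dv_dx[ix0 - 1:ix0 + 2]).max())
print("median |dv/dx| overall:", np.median(np.abs(dv_dx)))
```

    In a real model the contrast is subtler, but the principle is the same: the gradient field localizes a discontinuity that the smooth absolute-velocity field only hints at.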

  5. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a software-hardware system in turn depends on the rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graph, logic, and probability, then the strength of the rules can also be measured accordingly. This paper therefore initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by rules of use issued by the relevant agencies. The enumeration model is obtained from a software reliability calculation.
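
As an illustration of rule-based reliability enumeration (the rule set, its structure, and the probabilities below are hypothetical, not taken from the paper), a small interface can be modeled as mandatory rules in series and alternative rules in parallel, and the series-parallel formula can be checked by enumerating all rule states:

```python
from itertools import product

# Hypothetical rule set: each rule (procedure step) succeeds with a given
# probability, independently; the interface works if every mandatory rule
# holds and at least one of the alternative access rules holds.
p_login, p_session, p_form, p_api = 0.99, 0.98, 0.95, 0.90

# Series reliability for the mandatory rules:
r_mandatory = p_login * p_session
# Parallel (redundant) reliability for the alternative input paths:
r_alternative = 1 - (1 - p_form) * (1 - p_api)
r_system = r_mandatory * r_alternative

# Enumeration check: sum the probabilities of all working rule states.
probs = [p_login, p_session, p_form, p_api]
r_enum = 0.0
for state in product([0, 1], repeat=4):
    p = 1.0
    for ok, pi in zip(state, probs):
        p *= pi if ok else (1 - pi)
    login, session, form, api = state
    if login and session and (form or api):
        r_enum += p

print(f"formula: {r_system:.6f}, enumeration: {r_enum:.6f}")
```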

  6. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Energy Technology Data Exchange (ETDEWEB)

    Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)

    2012-10-15

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes, including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers, and one observer took four sets of measurements, to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average increases of 27% and 11.2% for intra- and inter-observer reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model than manually, a significant difference. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  7. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    International Nuclear Information System (INIS)

    Pegg, E.C.; Mellon, S.J.; Salmon, G.; Alvand, A.; Pandit, H.; Murray, D.W.; Gill, H.S.

    2012-01-01

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes, including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers, and one observer took four sets of measurements, to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average increases of 27% and 11.2% for intra- and inter-observer reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model than manually, a significant difference. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  8. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, the dielectric thickness d, the average grain size, and the capacitor chip size A. Application examples based on the proposed reliability model for Ni-BaTiO3 MLCCs are also discussed.
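
The abstract does not spell out the acceleration function, but a commonly used voltage-temperature form for BaTiO3-based MLCC life testing is the Prokopowicz-Vaskas model; a sketch with illustrative exponent and activation-energy values (both are assumptions, not values from the paper):

```python
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

def pv_acceleration(v_use, v_test, t_use_k, t_test_k, n=3.0, ea=1.3):
    """Prokopowicz-Vaskas acceleration factor: how much faster a capacitor
    ages at test conditions than at use conditions, under combined voltage
    and temperature stress. n and ea (eV) are illustrative values."""
    voltage_term = (v_test / v_use) ** n
    thermal_term = math.exp((ea / K_BOLTZ_EV) * (1 / t_use_k - 1 / t_test_k))
    return voltage_term * thermal_term

# Example: a 2x-rated-voltage, 125 C test versus a 50 V, 85 C use condition.
af = pv_acceleration(v_use=50.0, v_test=100.0, t_use_k=358.15, t_test_k=398.15)
print(f"acceleration factor: {af:.0f}")
```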

  9. Role of recent research in improving check valve reliability at nuclear power plants

    International Nuclear Information System (INIS)

    Kalsi, M.S.; Horst, C.L.; Wang, J.K.; Sharma, V.

    1990-01-01

    Check valve failures at nuclear power plants in recent years have led to serious safety concerns and caused extensive damage to other plant components, with a significant impact on plant availability. In order to understand the failure mechanisms and improve the reliability of check valves, a systematic research effort was proposed by Kalsi Engineering, Inc. to the U.S. Nuclear Regulatory Commission (NRC). The overall goal of the research was to develop models for predicting the performance and degradation of swing check valves in nuclear power plant systems so that appropriate preventive maintenance or design modifications can be performed to improve the reliability of check valves. Under Phase I of this research, a large matrix of tests was run with instrumented swing check valves to determine the stability of the disc under a variety of upstream flow disturbances, covering a wide range of disc stop positions and flow velocities in two different valve sizes. The goal of Phase II was to develop predictive models that quantify the anticipated degradation of swing check valves that have flow disturbances closely upstream of the valve and operate under flow velocities that do not result in full disc opening. This research allows inspection and maintenance activities to be focused on those check valves that are more likely to suffer premature degradation. The quantitative wear and fatigue prediction methodology can be used to develop a sound preventive maintenance program. The results of the research also show the improvements in check valve performance and reliability that can be achieved by certain modifications in the valve design.

  10. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. Here, the major application of human reliability assessment has been to identify the human errors that have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed, and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment; the nature of human error; classification of errors in man-machine systems; practical aspects; human reliability modelling in complex situations; quantification and examination of human reliability; and judgement-based approaches, holistic techniques and decision-analytic approaches. (UK)

  11. Damage Model for Reliability Assessment of Solder Joints in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    environmental factors. Reliability assessment for such type of products conventionally is performed by classical reliability techniques based on test data. Usually conventional reliability approaches are time and resource consuming activities. Thus in this paper we choose a physics of failure approach to define...... damage model by Miner’s rule. Our attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Based on the proposed method it is described how to find the damage level for a given temperature loading profile. The proposed method is discussed...
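
Miner's rule, named in the abstract above, accumulates damage linearly across load levels; a minimal sketch with a hypothetical temperature-cycling profile (the cycle counts and cycles-to-failure values are illustrative, not from the paper):

```python
# Miner's rule: the damage from each temperature-cycling load level is the
# ratio of applied cycles n_i to cycles-to-failure N_i at that level;
# failure is predicted when the accumulated damage reaches 1.
profile = [
    # (cycles applied, cycles to failure at this Delta-T)
    (20_000, 1_000_000),   # mild daily cycles
    (5_000, 200_000),      # moderate cycles
    (100, 10_000),         # rare extreme cycles
]

damage = sum(n_i / N_i for n_i, N_i in profile)
print(f"accumulated damage: {damage:.3f}")
```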

  12. Seismic waves in 3-D: from mantle asymmetries to reliable seismic hazard assessment

    Science.gov (United States)

    Panza, Giuliano F.; Romanelli, Fabio

    2014-10-01

    A global cross-section of the Earth parallel to the tectonic equator (TE) path, the great circle representing the equator of net lithosphere rotation, shows a difference in shear wave velocities between the western and eastern flanks of the three major oceanic rift basins. The low-velocity layer in the upper asthenosphere, at a depth range of 120 to 200 km, is assumed to represent the decoupling between the lithosphere and the underlying mantle. Along the TE-perturbed (TE-pert) path, a ubiquitous low-velocity zone (LVZ), about 1,000 km wide and 100 km thick, occurs in the asthenosphere. The existence of the TE-pert is a necessary prerequisite for the existence of a continuous global flow within the Earth. Ground-shaking scenarios were constructed using a scenario-based method for seismic hazard analysis (NDSHA), using realistic and duly validated synthetic time series, and generating a data bank of several thousand seismograms that account for source, propagation, and site effects. In accordance with basic self-organized criticality concepts, NDSHA permits the integration of available information provided by the most updated seismological, geological, geophysical, and geotechnical databases for the site of interest, as well as advanced physical modeling techniques, to provide a reliable and robust background for the development of a design basis for cultural heritage and civil infrastructures. Estimates of seismic hazard obtained using the NDSHA and standard probabilistic approaches are compared for the Italian territory, and a case study is discussed. In order to enable a reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered, resulting in a new, very efficient, analytical procedure for computing the broadband seismic wave-field in a 3-D anelastic Earth model.

  13. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied using the time difference method. It is found that the sound velocity increases with increasing bubble diameter and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering by the bubble walls is equivalently described as the effect of an additional length. This simple model reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that an increase of frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.
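
The additional-length idea can be sketched as follows (the value of the additional length is assumed purely for illustration; the paper derives it from the foam structure): if each bubble wall effectively lengthens the acoustic path, the effective speed sits below that of air and approaches it as the bubble diameter grows, qualitatively matching the trend reported above.

```python
# Sketch of the additional-length idea (parameter values are assumed, not
# from the paper): each bubble wall delays the wave as if the path through
# one bubble of diameter d were lengthened by a fixed delta, so the
# effective speed is v_air * d / (d + delta).
V_AIR = 343.0        # m/s, sound speed in air at ~20 C
DELTA = 2.0e-3       # m, hypothetical additional length per bubble wall

def foam_velocity(d_m):
    return V_AIR * d_m / (d_m + DELTA)

for d_mm in (2.0, 5.0, 12.5, 25.0):
    print(f"d = {d_mm:5.1f} mm  ->  v = {foam_velocity(d_mm * 1e-3):6.1f} m/s")
```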

  14. Proposed method for reconstructing velocity profiles using a multi-electrode electromagnetic flow meter

    International Nuclear Information System (INIS)

    Kollár, László E; Lucas, Gary P; Zhang, Zhichao

    2014-01-01

    An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT), and is implemented in Matlab. The method assumes the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable for single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile with the calculated potential distribution which is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. 
The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in
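
The DFT step of such a reconstruction can be sketched as follows (a hypothetical 16-electrode layout and synthetic harmonic amplitudes, not the paper's actual data): boundary potentials sampled at the electrodes are decomposed into angular harmonics, whose amplitudes would then feed the polynomial profile components.

```python
import numpy as np

# A hypothetical 16-electrode EMFM: boundary potentials sampled at equally
# spaced angles. For illustration, assume the measured potential is a first
# harmonic (uniform-flow component) plus a smaller second harmonic
# contributed by a non-uniform profile component.
n_elec = 16
theta = 2 * np.pi * np.arange(n_elec) / n_elec
a1, a2 = 1.00, 0.15            # harmonic amplitudes (arbitrary units)
potential = a1 * np.cos(theta) + a2 * np.cos(2 * theta)

# DFT of the electrode potentials; for a pure cosine harmonic of order k,
# the rfft coefficient at index k has magnitude (n_elec / 2) * amplitude.
coeffs = np.fft.rfft(potential)
recovered = 2 * np.abs(coeffs) / n_elec

print(f"recovered amplitudes: {recovered[1]:.3f}, {recovered[2]:.3f}")
```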

  15. Propagation of the Semidiurnal Internal Tide: Phase Velocity Versus Group Velocity

    Science.gov (United States)

    Zhao, Zhongxiang

    2017-12-01

    The superposition of two waves of slightly different wavelengths has long been used to illustrate the distinction between phase velocity and group velocity. The first-mode M2 and S2 internal tides exemplify such a two-wave model in the natural ocean. The M2 and S2 tidal frequencies are 1.932 and 2 cycles per day, respectively, and their superposition forms a spring-neap cycle in the semidiurnal band. The spring-neap cycle acts like a wave, with its frequency, wave number, and phase being the differences of those of the M2 and S2 internal tides. The spring-neap cycle and the energy of the semidiurnal internal tide propagate at the group velocity. Long-range propagation of M2 and S2 internal tides in the North Pacific is observed by satellite altimetry. Along a 3,400 km beam spanning 24°-54°N, the M2 and S2 travel times are 10.9 and 11.2 days, respectively. For comparison, it takes the spring-neap cycle 21.1 days to travel this distance. Spatial maps of the M2 phase velocity, the S2 phase velocity, and the group velocity are determined from phase gradients of the corresponding satellite-observed internal tide fields. The observed phase and group velocities agree with theoretical values estimated using the World Ocean Atlas 2013 annual-mean ocean stratification.
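
With the numbers quoted above, the two-wave relation v_g = Δω/Δk reproduces the much slower spring-neap propagation; a quick check (treating each phase speed as uniform along the beam, which accounts for the modest gap from the observed 21.1 days):

```python
# Phase speeds of the M2 and S2 internal tides from the observed travel
# times along the 3,400 km beam, and the group speed of their superposition
# (the spring-neap cycle) from finite differences:
#   v_g = (omega_S2 - omega_M2) / (k_S2 - k_M2),  with k = omega / v_phase.
distance_km = 3400.0
f_m2, f_s2 = 1.932, 2.0          # cycles per day
c_m2 = distance_km / 10.9        # km/day, from the M2 travel time
c_s2 = distance_km / 11.2        # km/day, from the S2 travel time

k_m2 = f_m2 / c_m2               # cycles per km
k_s2 = f_s2 / c_s2
v_group = (f_s2 - f_m2) / (k_s2 - k_m2)

travel_days = distance_km / v_group
print(f"group-speed travel time: {travel_days:.1f} days")
```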

  16. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. A software system improves during the testing phase, while it normally does not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some of the existing literature. Using different reliability concepts will lead to different reliability values and, further, to different reliability-based decisions. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated.
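
The distinction can be made concrete with a Goel-Okumoto NHPP sketch (the parameter values are illustrative, not from the paper): during testing the failure intensity keeps decreasing, whereas in operation the code is frozen at its release-time intensity, so the two reliability measures differ for the same mission length.

```python
import math

# Goel-Okumoto NHPP: mean value function m(t) = a * (1 - exp(-b t)).
a, b = 100.0, 0.05          # expected total faults, fault-detection rate
t_release, mission = 50.0, 10.0

def m(t):
    return a * (1 - math.exp(-b * t))

# Testing reliability: fault detection continues during the mission, so the
# expected number of failures over (t, t + x) is m(t + x) - m(t).
r_testing = math.exp(-(m(t_release + mission) - m(t_release)))

# Operational reliability: the code is frozen, so the failure intensity
# stays at lambda(t) = a * b * exp(-b t) for the whole mission.
lam = a * b * math.exp(-b * t_release)
r_operational = math.exp(-lam * mission)

print(f"testing: {r_testing:.4f}, operational: {r_operational:.4f}")
```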

  17. Effect of Low Co-flow Air Velocity on Hydrogen-air Non-premixed Turbulent Flame Model

    Directory of Open Access Journals (Sweden)

    Noor Mohsin Jasim

    2017-08-01

    Full Text Available The aim of this paper is to provide information concerning the effect of low co-flow velocity on the turbulent diffusion flame for a simple type of combustor; numerically simulated cases of a turbulent hydrogen-air diffusion flame are performed. The combustion model used in this investigation is based on chemical equilibrium and kinetics to simplify the complexity of the chemical mechanism. The effects of increased co-flowing air velocity on temperature, velocity components (axial and radial), and reactants have been investigated and examined numerically. Numerical results for temperature are compared with the experimental data, and the comparison shows good agreement. All numerical simulations have been performed using the commercial Computational Fluid Dynamics (CFD) code FLUENT. A comparison among the various co-flow air velocities and their effects on flame behavior and temperature fields is presented.

  18. Shaping the distribution of vertical velocities of antihydrogen in GBAR

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, G.; Lambrecht, A.; Reynaud, S. [CNRS, ENS, UPMC, Laboratoire Kastler-Brossel, Paris (France); Debu, P. [CEA-Saclay, Institut de Recherche sur les lois Fondamentales de l' Univers, Gif-sur-Yvette (France); Nesvizhevsky, V.V. [Institut Max von Laue-Paul Langevin, Grenoble (France); Voronin, A.Yu. [P.N. Lebedev Physical Institute, Moscow (Russian Federation)

    2014-01-15

    GBAR is a project aiming at measuring the free fall acceleration of gravity for antimatter, namely antihydrogen atoms (H). The precision of this timing experiment depends crucially on the dispersion of initial vertical velocities of the atoms as well as on the reliable control of their distribution. We propose to use a new method for shaping the distribution of the vertical velocities of H, which improves these factors simultaneously. The method is based on quantum reflection of elastically and specularly bouncing H with small initial vertical velocity on a bottom mirror disk, and absorption of atoms with large initial vertical velocities on a top rough disk. We estimate statistical and systematic uncertainties, and we show that the accuracy for measuring the free fall acceleration g of H could be pushed below 10⁻³ under realistic experimental conditions. (orig.)

  19. Shaping the distribution of vertical velocities of antihydrogen in GBAR

    CERN Document Server

    Dufour, G.; Lambrecht, A.; Nesvizhevsky, V.V.; Reynaud, S.; Voronin, A.Yu.

    2014-01-30

    GBAR is a project aiming at measuring the free fall acceleration of gravity for antimatter, namely antihydrogen atoms ($\overline{\mathrm{H}}$). Precision of this timing experiment depends crucially on the dispersion of initial vertical velocities of the atoms as well as on the reliable control of their distribution. We propose to use a new method for shaping the distribution of vertical velocities of $\overline{\mathrm{H}}$, which improves these factors simultaneously. The method is based on quantum reflection of elastically and specularly bouncing $\overline{\mathrm{H}}$ with small initial vertical velocity on a bottom mirror disk, and absorption of atoms with large initial vertical velocities on a top rough disk. We estimate statistical and systematic uncertainties, and show that the accuracy for measuring the free fall acceleration $\overline{g}$ of $\overline{\mathrm{H}}$ could be pushed below $10^{-3}$ under realistic experimental conditions.

  20. The solidification velocity of nickel and titanium alloys

    Science.gov (United States)

    Altgilbers, Alex Sho

    2002-09-01

    The solidification velocities of several Ni-Ti, Ni-Sn, Ni-Si, Ti-Al and Ti-Ni alloys were measured as a function of undercooling. From these results, a model for alloy solidification was developed that can be used to predict the solidification velocity as a function of undercooling more accurately. During this investigation, a phenomenon was observed in the solidification velocity that is a direct result of the addition of the various alloying elements to nickel and titanium: the additions resulted in an additional solidification velocity plateau at intermediate undercoolings. Past work has shown that a solidification velocity plateau at high undercoolings can be attributed to residual oxygen. It is shown that a logistic growth model is a more accurate model for predicting the solidification of alloys. Additionally, a numerical model is developed from a simple description of the effect of solute on the solidification velocity, which utilizes a Boltzmann logistic function to predict the plateaus that occur at intermediate undercoolings.
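
A Boltzmann-logistic velocity-undercooling curve of the kind described can be sketched as follows (all parameter values are illustrative, not fitted to the alloys above): superposing two logistic steps yields an intermediate plateau before the rise to the high-undercooling plateau.

```python
import math

# Sketch of a Boltzmann-logistic description of v(undercooling): two
# logistic steps produce an intermediate plateau followed by a second
# rise. All parameter values below are illustrative.
def boltzmann(dt, v_step, dt_mid, width):
    return v_step / (1 + math.exp((dt_mid - dt) / width))

def solidification_velocity(dt):
    # first rise to an intermediate plateau, second rise to the maximum
    return boltzmann(dt, 15.0, 80.0, 10.0) + boltzmann(dt, 45.0, 220.0, 20.0)

for dt in (50, 130, 180, 300):
    print(f"undercooling {dt:3d} K -> v = {solidification_velocity(dt):5.1f} m/s")
```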

  1. Tracking reliability for space cabin-borne equipment in development by Crow model.

    Science.gov (United States)

    Chen, J D; Jiao, S J; Sun, H L

    2001-12-01

    Objective. To study and track the reliability growth of manned spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, composed of the Crow model and a test data conversion (TDC) method, was used. Result. The estimated and predicted values of the reliability growth conformed to expectations. Conclusion. The method can dynamically estimate and predict the reliability of the equipment by making full use of various test information gathered in the course of its development. It offers not only a possibility of tracking the equipment's reliability growth, but also a reference for quality control in the design and development process of manned spaceflight cabin-borne equipment.
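
The Crow (AMSAA) model tracks reliability growth as a power-law NHPP; a minimal sketch with hypothetical development-test failure times (the paper's TDC step is not reproduced here):

```python
import math

# Crow (AMSAA) reliability-growth model: cumulative failures follow
# N(t) = lam * t**beta, with beta < 1 indicating reliability growth.
# MLEs for a time-terminated test at T with failure times t_i are
#   beta = n / sum(ln(T / t_i)),   lam = n / T**beta.
failure_times = [4.2, 9.9, 22.5, 61.5, 117.0, 198.7, 310.0, 452.0]
T = 500.0
n = len(failure_times)

beta = n / sum(math.log(T / t) for t in failure_times)
lam = n / T ** beta

# Instantaneous (current) MTBF at the end of the test:
mtbf = 1.0 / (lam * beta * T ** (beta - 1))
print(f"beta = {beta:.2f}, current MTBF = {mtbf:.1f} h")
```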

  2. Modeling of humidity-related reliability in enclosures with electronics

    DEFF Research Database (Denmark)

    Hygum, Morten Arnfeldt; Popok, Vladimir

    2015-01-01

    Reliability of electronics that operate outdoors is strongly affected by environmental factors such as temperature and humidity. Fluctuations of these parameters can lead to water condensation inside enclosures. Therefore, modelling of humidity distribution in a container with air and freely exposed

  3. Acoustic Velocity and Attenuation in Magnetorheological fluids based on an effective density fluid model

    Directory of Open Access Journals (Sweden)

    Shen Min

    2016-01-01

    Full Text Available Magnetorheological fluids (MRFs) represent a class of smart materials whose rheological properties change in response to a magnetic field, resulting in a drastic change of the acoustic impedance. This paper presents an acoustic propagation model that approximates a fluid-saturated porous medium as a fluid with a bulk modulus and an effective density (EDFM) to study acoustic propagation in MRF materials under a magnetic field. The effective density fluid model is derived from Biot's theory, with some minor changes applied to model both the fluid-like and solid-like states of the MRF material. The attenuation and velocity variation of the MRF are calculated numerically. The calculated results show that, for the MRF material, the attenuation and velocity predicted with this effective density fluid model are in close agreement with previous predictions by Biot's theory. We demonstrate that for MRF acoustic prediction the effective density fluid model is an accurate alternative to the full Biot theory and is much simpler to implement.

  4. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    Science.gov (United States)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many others. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as two primary drivers of market growth--that of providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. 
Results indicate that there are, at present, co-benefits for emissions reductions when customers

  5. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    Science.gov (United States)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face

  6. The challenge associated with the robust computation of meteor velocities from video and photographic records

    Science.gov (United States)

    Egal, A.; Gural, P. S.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2017-09-01

    reliably compute the velocity for very low convergence angles (∼ 1°). Despite the better accuracy of this method, the poor conditioning of the velocity propagation models used in the meteor community and currently employed by the multi-parameter fitting method prevent us from optimally computing the pre-atmospheric velocity. Specifically, the deceleration parameters are particularly difficult to determine. The quality of the data provided by the CABERNET network limits the error induced by this effect to achieve an accuracy of about 1% on the velocity computation. Such a precision would not be achievable with lower resolution camera networks and today's commonly used trajectory reduction algorithms. To improve the performance of the multi-parameter fitting method, a linearly independent deceleration formulation needs to be developed.

  7. Regional three-dimensional seismic velocity model of the crust and uppermost mantle of northern California

    Science.gov (United States)

    Thurber, C.; Zhang, H.; Brocher, T.; Langenheim, V.

    2009-01-01

    We present a three-dimensional (3D) tomographic model of the P wave velocity (Vp) structure of northern California. We employed a regional-scale double-difference tomography algorithm that incorporates a finite-difference travel time calculator and spatial smoothing constraints. Arrival times from earthquakes and travel times from controlled-source explosions, recorded at network and/or temporary stations, were inverted for Vp on a 3D grid with horizontal node spacing of 10 to 20 km and vertical node spacing of 3 to 8 km. Our model provides an unprecedented, comprehensive view of the regional-scale structure of northern California, putting many previously identified features into a broader regional context, improving the resolution of a number of them, and revealing new features, especially in the middle and lower crust, that have never before been reported. Examples of the former include the complex subducting Gorda slab, a steep, deeply penetrating fault beneath the Sacramento River Delta, crustal low-velocity zones beneath Geysers-Clear Lake and Long Valley, and the high-velocity ophiolite body underlying the Great Valley. Examples of the latter include mid-crustal low-velocity zones beneath Mount Shasta and north of Lake Tahoe. Copyright 2009 by the American Geophysical Union.

  8. 3-D Velocity Model of the Coachella Valley, Southern California Based on Explosive Shots from the Salton Seismic Imaging Project

    Science.gov (United States)

    Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2014-12-01

    We have analyzed explosive shot data from the 2011 Salton Seismic Imaging Project (SSIP) across a 2-D seismic array and 5 profiles in the Coachella Valley to produce a 3-D P-wave velocity model that will be used in calculations of strong ground shaking. Accurate maps of seismicity and active faults rely both on detailed geological field mapping and on a suitable velocity model to accurately locate earthquakes. Adjoint tomography of an older version of the SCEC 3-D velocity model shows that crustal heterogeneities strongly influence seismic wave propagation from moderate earthquakes (Tape et al., 2010). These authors improve the crustal model and subsequently simulate the details of ground motion at periods of 2 s and longer for hundreds of ray paths. Even with improvements such as the above, the current SCEC velocity model for the Salton Trough does not provide a match to the timing or waveforms of the horizontal S-wave motions, which Wei et al. (2013) interpret as caused by inaccuracies in the shallow velocity structure. They effectively demonstrate that the inclusion of shallow basin structure improves the fit in both travel times and waveforms. Our velocity model benefits from the inclusion of the known locations and times of a subset of 126 shots detonated over a 3-week period during the SSIP. This results in an improved velocity model, particularly in the shallow crust. In addition, one of the main challenges in developing 3-D velocity models is an uneven station-source distribution. To better overcome this challenge, we also include the first arrival times of the SSIP shots at the more widely spaced Southern California Seismic Network (SCSN) stations in our inversion, since the layout of the SSIP is complementary to the SCSN. References: Tape, C., et al., 2010, Seismic tomography of the Southern California crust based on spectral-element and adjoint methods: Geophysical Journal International, v. 180, no. 1, p. 433-462. 
Wei, S., et al., 2013, Complementary slip distributions

  9. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  10. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost incurred during operation. To address this, based on the assumption that vibration directly relates to operation reliability and the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of vibration as a function of operating point was studied, and it was concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. In this shape there is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and vibration also scales approximately with the square of the rotation speed in the absence of resonance phenomena. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedule produced by the traditional model and keeps the pump operating at low vibration, so that operation reliability increases and maintenance cost decreases.
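    The reliability term described in this record can be sketched numerically. The following Python fragment is an illustrative reading of the abstract's assumptions, not the paper's calibrated model: the coefficients, the exact bathtub form, and the vibration ceiling `v_max` are all hypothetical.

```python
def vibration_level(q_over_bep, n_over_rated):
    """Illustrative pump vibration model (hypothetical coefficients).

    Bathtub shape in flow: a flat minimum inside the 80-100% BEP sweet
    spot, rising quadratically toward shutoff and overload.  Vibration
    scales with the square of rotation speed (no resonance assumed).
    """
    base = 1.0
    if q_over_bep < 0.8:
        flow_term = base + 5.0 * (0.8 - q_over_bep) ** 2
    elif q_over_bep > 1.0:
        flow_term = base + 5.0 * (q_over_bep - 1.0) ** 2
    else:
        flow_term = base
    return flow_term * n_over_rated ** 2


def operation_reliability(q_over_bep, n_over_rated, v_max=10.0):
    """Operation reliability as the complement of normalized vibration."""
    v = min(vibration_level(q_over_bep, n_over_rated), v_max)
    return 1.0 - v / v_max
```

Plugging a term like `operation_reliability` into a scheduling objective would penalize operating points far from the BEP sweet spot or at unnecessarily high speeds, which is the mechanism the abstract describes.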

  11. Unipedal balance in healthy adults: effect of visual environments yielding decreased lateral velocity feedback.

    Science.gov (United States)

    Deyer, T W; Ashton-Miller, J A

    1999-09-01

    To test the (null) hypotheses that the reliability of unipedal balance is unaffected by the attenuation of visual velocity feedback and that, relative to baseline performance, deterioration of balance success rates from attenuated visual velocity feedback will not differ between groups of young men and older women, and the presence (or absence) of a vertical foreground object will not affect balance success rates. Single blind, single case study. University research laboratory. Two volunteer samples: 26 healthy young men (mean age, 20.0 yrs; SD, 1.6); 23 healthy older women (mean age, 64.9 yrs; SD, 7.8). Normalized success rates in unipedal balance task. Subjects were asked to transfer to and maintain unipedal stance for 5 seconds in a task near the limit of their balance capabilities. Subjects completed 64 trials: 54 trials of three experimental visual scenes in blocked randomized sequences of 18 trials and 10 trials in a normal visual environment. The experimental scenes included two that provided strong velocity/weak position feedback, one of which had a vertical foreground object (SVWP+) and one without (SVWP-), and one scene providing weak velocity/strong position (WVSP) feedback. Subjects' success rates in the experimental environments were normalized by the success rate in the normal environment in order to allow comparisons between subjects using a mixed model repeated measures analysis of variance. The normalized success rate was significantly greater in SVWP+ than in WVSP (p = .0001) and SVWP- (p = .013). Visual feedback significantly affected the normalized unipedal balance success rates (p = .001); neither the group effect nor the group × visual environment interaction was significant (p = .9362 and p = .5634, respectively). Normalized success rates did not differ significantly between the young men and older women in any visual environment. Near the limit of the young men's or older women's balance capability, the reliability of transfer to unipedal

  12. Condition Assessment of PC Tendon Duct Filling by Elastic Wave Velocity Mapping

    Directory of Open Access Journals (Sweden)

    Kit Fook Liu

    2014-01-01

    Full Text Available Imaging techniques are in high demand for modern nondestructive evaluation of large-scale concrete structures. The travel-time tomography (TTT) technique, which is based on the principle of mapping the change of propagation velocity of transient elastic waves in a measured object, has found increasing application for assessing in situ concrete structures. The primary aim of this technique is to detect defects that exist in a structure. The TTT technique can offer an effective means for assessing the tendon duct filling of prestressed concrete (PC) elements. This study is aimed at clarifying some of the issues pertaining to the reliability of the technique for this purpose, such as sensor arrangement, model, meshing, type of tendon sheath, thickness of sheath, and material type, as well as the scale of inhomogeneity. The work involved 2D simulations of wave motions, signal processing to extract travel times of waves, and tomography reconstruction computation for velocity mapping of defects in the tendon duct.

  13. The FFA dynamic stall model. The Beddoes-Leishman dynamic stall model modified for lead-lag oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden)

    1997-08-01

    For calculations of the dynamics of wind turbines, the inclusion of a dynamic stall model is necessary in order to obtain reliable results at high winds. For blade vibrations in the lead-lag motion, the velocity relative to the blade will vary in time. In the present paper, modifications to the Beddoes-Leishman model are presented in order to improve the model for calculations of cases with a varying relative velocity. Comparisons with measurements are also shown, and the influence of the modifications on the calculated aerodynamic damping is investigated. (au)

  14. Velocity Estimation of the Main Portal Vein with Transverse Oscillation

    DEFF Research Database (Denmark)

    Brandt, Andreas Hjelm; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann

    2015-01-01

    This study evaluates whether Transverse Oscillation (TO) can provide reliable and accurate peak velocity estimates of blood flow in the main portal vein. TO was evaluated against the recommended and most widely used technique for portal flow estimation, Spectral Doppler Ultrasound (SDU). The main portal...

  15. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate the reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits that separately include a “reliability improvement factor” and an “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. These lead to the unique characteristic of the BMUA that information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown

  16. Excavatability Assessment of Weathered Sedimentary Rock Mass Using Seismic Velocity Method

    International Nuclear Information System (INIS)

    Bin Mohamad, Edy Tonnizam; Noor, Muhazian Md; Isa, Mohamed Fauzi Bin Md.; Mazlan, Ain Naadia; Saad, Rosli

    2010-01-01

    The seismic refraction method is one of the most popular methods for assessing surface excavation. The main objective of the seismic data acquisition is to delineate the subsurface into velocity profiles, as different velocities can be correlated to identify different materials. The physical principle used for the determination of excavatability is that seismic waves travel faster through denser material than through less consolidated material. In general, a lower velocity indicates material that is soft, and a higher velocity indicates material that is more difficult to excavate. However, a few researchers have noted that the seismic velocity method alone does not correlate well with the excavatability of the material. In this study, the seismic velocity method was used in Nusajaya, Johor to assess its accuracy against the excavatability of the weathered sedimentary rock mass. A direct ripping run, monitoring the actual production of ripping, was employed at a later stage and compared to the ripper manufacturer's recommendation. This paper presents the findings of the seismic velocity tests in the weathered sedimentary area. The reliability of this method compared with the actual rippability trials is also presented.

  17. Excavatability Assessment of Weathered Sedimentary Rock Mass Using Seismic Velocity Method

    Science.gov (United States)

    Bin Mohamad, Edy Tonnizam; Saad, Rosli; Noor, Muhazian Md; Isa, Mohamed Fauzi Bin Md.; Mazlan, Ain Naadia

    2010-12-01

    The seismic refraction method is one of the most popular methods for assessing surface excavation. The main objective of the seismic data acquisition is to delineate the subsurface into velocity profiles, as different velocities can be correlated to identify different materials. The physical principle used for the determination of excavatability is that seismic waves travel faster through denser material than through less consolidated material. In general, a lower velocity indicates material that is soft, and a higher velocity indicates material that is more difficult to excavate. However, a few researchers have noted that the seismic velocity method alone does not correlate well with the excavatability of the material. In this study, the seismic velocity method was used in Nusajaya, Johor to assess its accuracy against the excavatability of the weathered sedimentary rock mass. A direct ripping run, monitoring the actual production of ripping, was employed at a later stage and compared to the ripper manufacturer's recommendation. This paper presents the findings of the seismic velocity tests in the weathered sedimentary area. The reliability of this method compared with the actual rippability trials is also presented.

  18. A wave propagation model of blood flow in large vessels using an approximate velocity profile function

    NARCIS (Netherlands)

    Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.

    2007-01-01

    Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological

  19. The Velocity Distribution of Isolated Radio Pulsars

    Science.gov (United States)

    Arzoumanian, Z.; Chernoff, D. F.; Cordes, J. M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We infer the velocity distribution of radio pulsars based on large-scale 0.4 GHz pulsar surveys. We do so by modelling evolution of the locations, velocities, spins, and radio luminosities of pulsars; calculating pulsed flux according to a beaming model and random orientation angles of spin and beam; applying selection effects of pulsar surveys; and comparing model distributions of measurable pulsar properties with survey data using a likelihood function. The surveys analyzed have well-defined characteristics and cover approx. 95% of the sky. We maximize the likelihood in a 6-dimensional space of observables P, Ṗ, DM, |b|, μ, F (period, period derivative, dispersion measure, Galactic latitude, proper motion, and flux density). The models we test are described by 12 parameters that characterize a population's birth rate, luminosity, shutoff of radio emission, birth locations, and birth velocities. We infer that the radio beam luminosity (i) is comparable to the energy flux of relativistic particles in models for spin-driven magnetospheres, signifying that radio emission losses reach nearly 100% for the oldest pulsars; and (ii) scales approximately as Ė^(1/2), which, in magnetosphere models, is proportional to the voltage drop available for acceleration of particles. We find that a two-component velocity distribution with characteristic velocities of 90 km/s and 500 km/s is greatly preferred to any one-component distribution; this preference is largely immune to variations in other population parameters, such as the luminosity or distance scale, or the assumed spin-down law. We explore some consequences of the preferred birth velocity distribution: (1) roughly 50% of pulsars in the solar neighborhood will escape the Galaxy, while approx. 15% have velocities greater than 1000 km/s; (2) observational bias against high velocity pulsars is relatively unimportant for surveys that reach high Galactic |z| distances, but is severe for
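    The preferred two-component birth velocity distribution can be explored with a quick Monte Carlo sketch. Equal component weights and isotropic 3D Gaussian components (one per characteristic velocity, giving Maxwellian speeds) are assumptions made here for illustration; the paper fits the weights along with the other population parameters.

```python
import math
import random


def sample_speed(sigma):
    """Speed of a 3D isotropic Gaussian velocity vector (Maxwellian)."""
    return math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))


def escape_fraction(sigmas=(90.0, 500.0), weights=(0.5, 0.5),
                    v_cut=1000.0, n=100_000, seed=1):
    """Fraction of sampled pulsar speeds exceeding v_cut (km/s)."""
    random.seed(seed)
    fast = 0
    for _ in range(n):
        sigma = random.choices(sigmas, weights)[0]  # pick a component
        if sample_speed(sigma) > v_cut:
            fast += 1
    return fast / n
```

With these (assumed) weights, essentially all of the high-speed tail comes from the 500 km/s component, consistent with the abstract's point that a single low-velocity component cannot reproduce the fastest pulsars.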

  20. Application of advanced one sided stress wave velocity measurement in concrete

    International Nuclear Information System (INIS)

    Lee, Joon Hyun; Song, Won Joon; Popovices, J. S.; Achenbach, J. D.

    1997-01-01

    It is of interest to reliably measure the velocity of stress waves in concrete. At present, reliable measurement is not possible for dispersive and attenuating materials such as concrete when access to only one surface of the structure is available, as in the case of pavement structures. In this paper, a new method for one-sided stress wave velocity determination in concrete is applied to investigate the effects of composition, age and moisture content. This method uses a controlled impact as the stress wave source and two sensitive receivers mounted on the same surface as the impact site. The novel aspect of the technique is the data collection system, which automatically determines the arrivals of the generated longitudinal and surface waves. A conventional ultrasonic through-transmission method is used for comparison with the results determined by the one-sided method.
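    Once the arrivals are picked, the two-receiver geometry reduces to a simple difference quotient. This sketch shows only that final step, with hypothetical distances and times; the record's contribution is the automated arrival detection itself, which is not reproduced here.

```python
def one_sided_velocity(d1, d2, t1, t2):
    """Wave velocity from two receivers on the same surface as the impact.

    d1, d2: distances (m) from the impact point to receivers 1 and 2
    t1, t2: arrival times (s) of the same wave type at each receiver
    Returns velocity in m/s as (d2 - d1) / (t2 - t1).
    """
    if t2 == t1:
        raise ValueError("arrival times must differ")
    return (d2 - d1) / (t2 - t1)
```

Using the arrival-time difference rather than absolute trigger-to-arrival time removes the unknown delay between impact and recording, which is why the same-surface layout works at all.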

  1. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    Science.gov (United States)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  2. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  3. Axial flow velocity patterns in a normal human pulmonary artery model: pulsatile in vitro studies.

    Science.gov (United States)

    Sung, H W; Yoganathan, A P

    1990-01-01

    It has been clinically observed that the flow velocity patterns in the pulmonary artery are directly modified by disease. The present study addresses the hypothesis that altered velocity patterns relate to the severity of various diseases in the pulmonary artery. This paper lays a foundation for that analysis by providing a detailed description of flow velocity patterns in the normal pulmonary artery, using flow visualization and laser Doppler anemometry techniques. The studies were conducted in an in vitro rigid model in a right heart pulse duplicator system. In the main pulmonary artery, a broad central flow field was observed throughout systole. The maximum axial velocity (150 cm s-1) was measured at peak systole. In the left pulmonary artery, the axial velocities were approximately evenly distributed in the perpendicular plane. However, in the bifurcation plane, they were slightly skewed toward the inner wall at peak systole and during the deceleration phase. In the right pulmonary artery, the axial velocity in the perpendicular plane had a very marked M-shaped profile at peak systole and during the deceleration phase, due to a pair of strong secondary flows. In the bifurcation plane, higher axial velocities were observed along the inner wall, while lower axial velocities were observed along the outer wall and in the center. Overall, relatively low levels of turbulence were observed in all the branches during systole. The maximum turbulence intensity measured was at the boundary of the broad central flow field in the main pulmonary artery at peak systole.

  4. P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model

    Directory of Open Access Journals (Sweden)

    D. Draebing

    2012-10-01

    Full Text Available P-wave refraction seismics is a key method in permafrost research, but its applicability to low-porosity rocks, which constitute alpine rock walls, has been denied in prior studies. These studies explain p-wave velocity changes in freezing rocks exclusively by the changing velocities of the pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but that freezing in confined space in low-porosity bedrock also alters the physical properties of the rock matrix. In the laboratory, we measured p-wave velocities of 22 decimetre-large low-porosity (< 10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (> 100 micro-fissures), from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, p-wave velocity increases by 11–166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11–200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted, and ice pressure will increase matrix velocity and decrease anisotropy, while the changing velocities of the pore infill are insignificant. Here, we present a modified Timur's two-phase equation implementing changes in matrix velocity dependent on lithology, and demonstrate the general applicability of refraction seismics to differentiate frozen and unfrozen low-porosity bedrock.
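    A minimal version of the time-average relation underlying the modified Timur equation can be written down directly. The pore-phase velocities below are typical literature values and the three-phase form is the standard Timur-style extension; the record's actual modification, a lithology-dependent matrix velocity increase on freezing, is noted in the docstring but not calibrated here.

```python
def time_average_velocity(phi, v_matrix, v_water=1500.0, v_ice=3500.0,
                          unfrozen_frac=1.0):
    """Three-phase time-average (Timur-style) p-wave velocity, in m/s.

    phi: porosity (0..1); unfrozen_frac: fraction of pore water unfrozen.
    Pore-fluid velocities are typical textbook values (assumptions).
    The study's modification would additionally raise v_matrix upon
    freezing, which dominates in low-porosity rock.
    """
    slowness = ((1.0 - phi) / v_matrix
                + phi * unfrozen_frac / v_water
                + phi * (1.0 - unfrozen_frac) / v_ice)
    return 1.0 / slowness
```

For phi < 0.1 the pore-infill terms contribute little slowness, which is exactly why the authors argue that the observed large velocity jumps must come from the matrix rather than the pore ice.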

  5. A novel model and behavior analysis for a swarm of multi-agent systems with finite velocity

    International Nuclear Information System (INIS)

    Wang Liang-Shun; Wu Zhi-Hai

    2014-01-01

    Inspired by the fact that in most existing swarm models of multi-agent systems the velocity of an agent can be infinite, which is not in accordance with the real applications, we propose a novel swarm model of multi-agent systems where the velocity of an agent is finite. The Lyapunov function method and LaSalle's invariance principle are employed to show that by using the proposed model all of the agents eventually enter into a bounded region around the swarm center and finally tend to a stationary state. Numerical simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
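    The defining feature of this swarm model, a finite velocity bound, can be illustrated by clamping each agent's commanded velocity before integration. The attraction law and gains below are illustrative stand-ins, not the paper's exact dynamics; only the saturation step reflects the finite-velocity constraint.

```python
import math


def clamp_speed(vx, vy, v_max):
    """Saturate a 2D velocity vector so its magnitude never exceeds v_max."""
    speed = math.hypot(vx, vy)
    if speed <= v_max or speed == 0.0:
        return vx, vy
    scale = v_max / speed
    return vx * scale, vy * scale


def step(agents, v_max=1.0, gain=0.5, dt=0.1):
    """One Euler step of a toy swarm: each agent moves toward the swarm
    center with finite (clamped) velocity.  agents: list of (x, y)."""
    cx = sum(x for x, _ in agents) / len(agents)
    cy = sum(y for _, y in agents) / len(agents)
    out = []
    for x, y in agents:
        vx, vy = clamp_speed(gain * (cx - x), gain * (cy - y), v_max)
        out.append((x + dt * vx, y + dt * vy))
    return out
```

Iterating `step` shrinks the swarm into a bounded region around its center, mirroring the convergence result the abstract obtains via Lyapunov arguments, while no agent ever exceeds `v_max`.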

  6. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  7. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  8. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safetly, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  9. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  10. Model of the seismic velocity distribution in the upper lithosphere of the Vrancea seismogenic zone and within the adjacent areas

    International Nuclear Information System (INIS)

    Raileanu, Victor; Bala, Andrei

    2002-01-01

    The task of this project is to produce a detailed seismic velocity model of the P waves in the crust and upper mantle crossed by the VRANCEA 2001 seismic line and to interpret it in structural terms. The velocity model aims to contribute to a new geodynamical model of the evolution of the Eastern Carpathians and to a better understanding of the causes of the Vrancea earthquakes. It is performed in cooperation with the University of Karlsruhe, Germany, and the University of Bucharest. The project will be completed in 5 working stages. Vrancea 2001 is the name of the seismic line recorded with about 780 seismic instruments deployed over more than 600 km, from the eastern part of Romania (east of Tulcea) through the Vrancea area to Aiud and south of Oradea. 10 big shots with charges from 300 kg to 1500 kg of dynamite were detonated along the seismic line. Field data quality ranges from good to very good, and the data provide information down to upper mantle levels. Processing of the data was performed in the first stage of the present project and consisted of merging all individual field records into seismograms for each shotpoint: almost 800 individual records for each of the 10 shots were merged into 10 seismograms with about 800 channels each. A seismogram of shotpoint S (25 km NE of Ramnicu Sarat) is given; the high energy generated by shotpoint S is clearly visible, and the Pn wave can be traced until the western end of the seismic line, about 25 km from the source. In the second stage of the project, an interpretation of the seismic data is achieved for the first 5 seismograms from the eastern half of the seismic line, from Tulcea to Ramnicu Sarat, using a forward modeling procedure. 5 one-dimensional (1D) velocity-depth function models are obtained; P-wave velocity-depth function models for shotpoints O to T are presented. Velocity-depth information extends down to 40 km for shot R and 80 km for shot S. It should be noted that there are unusually high velocities at the shallow levels for the Dobrogea area (O and P shots) and the

  11. Stochastic reliability and maintenance modeling essays in honor of Professor Shunji Osaki on his 70th birthday

    CERN Document Server

    Nakagawa, Toshio

    2013-01-01

    In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of, and ongoing research in, stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering, and communication engineering, distinguished researchers review and build on the contributions made over the last four decades by Professor Shunji Osaki. Fundamental yet significant research results are presented and discussed clearly, alongside new ideas and topics on stochastic reliability and maintenance modeling, to inspire future research. Across 15 chapters, readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and workers, managers and engineers engaged in computer, maintenance and management wo...

  12. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits an a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
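    The base ingredient of the SMLNFT class is the plain log-normal reliability function; the mixture models of the paper average this over a mixing distribution on the scale. A minimal sketch of that base case follows (it does not reproduce the paper's hierarchical MCMC machinery, and the parameter values in the usage note are arbitrary).

```python
import math


def lognormal_reliability(t, mu, sigma):
    """R(t) = P(T > t) for a log-normal failure time T with log-mean mu
    and log-standard-deviation sigma.

    Uses the survival function of the standard normal via erfc:
    R(t) = 0.5 * erfc(z / sqrt(2)), with z = (ln t - mu) / sigma.
    """
    z = (math.log(t) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

A scale mixture would replace `sigma` by a random scale and integrate `lognormal_reliability` against its (e.g. MaxEnt-constrained) distribution, producing the heavier-tailed members of the class.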

  13. Homogenization and implementation of a 3D regional velocity model in Mexico for its application in moment tensor inversion of intermediate-magnitude earthquakes

    Science.gov (United States)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Caló, Marco

    2017-04-01

    Moment tensor inversions for intermediate and small earthquakes (M < 4.5) are challenging, as these events principally excite relatively short-period seismic waves that interact strongly with local heterogeneities. Incorporating detailed regional 3D velocity models permits obtaining realistic synthetic seismograms and recovering the seismic source parameters of these smaller events. Two 3D regional velocity models have recently been developed for Mexico using surface waves and seismic noise tomography (Spica et al., 2016; Gaite et al., 2015), which could be used to model the waveforms of intermediate-magnitude earthquakes in this region. Such models are parameterized as layered velocity profiles, and for some of the profiles the velocity difference between two layers is considerable. The "jump" in velocities between two layers is inconvenient for some methods and algorithms that calculate synthetic waveforms, in particular for the method that we are using, the spectral element method (SPECFEM3D GLOBE, Komatitsch and Tromp, 2000), when the mesh does not follow the layer boundaries. In order to make the velocity models more easily implemented in SPECFEM3D GLOBE it is necessary to apply a homogenization algorithm (Capdeville et al., 2015) such that the (now anisotropic) layer velocities vary smoothly with depth. In this work, we apply a homogenization algorithm to the regional velocity models of Mexico in order to implement them in SPECFEM3D GLOBE, calculate synthetic waveforms for intermediate-magnitude earthquakes in Mexico, and invert them for the seismic moment tensor.
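The homogenization step can be caricatured in a few lines: replace a piecewise-constant layered profile with a smoothly varying one. A running mean, as below, is only a crude stand-in for the Capdeville et al. homogenization (which also produces effective anisotropy), but it shows how the velocity "jumps" disappear; the grid, layer depths, and velocities are made-up values:

```python
import numpy as np

def smooth_layered_profile(depths, layer_tops, layer_vels, window_km=20.0):
    """Evaluate a layered (piecewise-constant) velocity profile on a depth grid
    and smooth it with a running mean so velocities vary continuously with depth.
    Crude illustration only: true homogenization also yields effective anisotropy."""
    # piecewise-constant profile: velocity of the layer containing each depth
    idx = np.searchsorted(layer_tops, depths, side="right") - 1
    v = np.asarray(layer_vels)[idx]
    # running-mean smoothing over a fixed depth window
    dz = depths[1] - depths[0]
    n = max(1, int(window_km / dz))
    kernel = np.ones(n) / n
    # note: 'same' mode tapers toward the boundaries; interior values are meaningful
    return np.convolve(v, kernel, mode="same")

depths = np.linspace(0.0, 100.0, 201)   # km
layer_tops = [0.0, 20.0, 40.0]          # km, assumed layer interfaces
layer_vels = [3.5, 4.5, 6.0]            # km/s, assumed layer velocities
v_smooth = smooth_layered_profile(depths, layer_tops, layer_vels)
```

After smoothing, the largest depth-to-depth velocity increment is far smaller than the original 1 km/s interface jumps, which is what mesh-independent spectral-element runs need.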

  14. Synchronous Surface Pressure and Velocity Measurements of standard model in hypersonic flow

    Directory of Open Access Journals (Sweden)

    Zhijun Sun

    2018-01-01

    Full Text Available Experiments in the hypersonic wind tunnel of NUAA (NHW) present synchronous measurements of the bow shock wave and surface pressure of a standard blunt rotary model (AGARD HB-2), carried out in order to measure the Mach-5 flow above a blunt body by PIV (Particle Image Velocimetry) as well as the unsteady pressure around the rotary body. Titanium dioxide (Al2O3) nanoparticles were seeded into the flow by a tailor-made container. With a meticulously designed optical path, the laser was guided into the vacuum experimental section. The transient pressure around the model was obtained using fast-responding pressure-sensitive paint (PSP) sprayed on the model. All the experimental facilities were controlled by a Series Pulse Generator to ensure that the data were time-correlated. The PIV measurements of velocities in front of the detached bow shock agreed very well with the calculated value, with less than 3% difference compared to Pitot-pressure recordings. The velocity gradient contour was in accordance with the detached bow shock shown in the schlieren images. The PSP results presented good agreement with the reference data from previous studies. Our work involving studies of synchronous shock-wave and pressure measurements proved to be encouraging.

  15. Numerical simulating and experimental study on the woven carbon fiber-reinforced composite laminates under low-velocity impact

    Science.gov (United States)

    Liu, Hanyang; Tang, Zhanwen; Pan, Lingying; Zhao, Weidong; Sun, Baogang; Jiang, Wenge

    2016-05-01

    Impact damage has been identified as a critical form of defect that constantly threatens the reliability of composite structures, such as those used in aerospace structures and systems. Low-energy impacts can introduce barely visible damage and cause degradation of structural stiffness; furthermore, the flaws caused by low-velocity impact are dangerous because they can give rise to further extended delaminations. In order to improve the reliability and load-carrying capacity of composite laminates under low-velocity impact, this paper discusses numerical simulations and experimental studies of woven carbon fiber-reinforced composite laminates under low-velocity impact with an impact energy of 16.7 J. The low-velocity impact experiment was carried out with a drop-weight system, chosen for its inertia effect. A numerical progressive damage model was provided, in which damage to the fibers, matrix and interlaminar regions was considered via a VUMAT subroutine in ABAQUS, to determine the damage modes. The Hashin failure criteria were improved to cover the failure modes of fiber failure in the warp/weft directions and delaminations. The results of the Finite Element Analysis (FEA) were compared with the experimental results of nondestructive examination, including ultrasonic C-scans, cross-section stereomicroscopy and contact force-time history curves. It was found that the response of laminates under low-velocity impact could be divided into stages with different damage. Before the laminates reached maximum deformation, matrix cracking, fiber breakage and delaminations were simulated as the impactor dropped. During the release and rebound period, matrix cracking and delamination areas kept increasing in the laminates because of the stress release of the laminates. Finally, the simulation results showed good agreement with the experimental results.

  16. Decay constants of heavy mesons in the relativistic potential model with velocity dependent corrections

    International Nuclear Information System (INIS)

    Avaliani, I.S.; Sisakyan, A.N.; Slepchenko, L.A.

    1992-01-01

    In the relativistic model with a velocity-dependent potential, the masses and leptonic decay constants of heavy pseudoscalar and vector mesons are computed. The possibility of using this potential is discussed. 11 refs.; 4 tabs

  17. Modeling Atmospheric Turbulence via Rapid Distortion Theory: Spectral Tensor of Velocity and Buoyancy

    DEFF Research Database (Denmark)

    Chougule, Abhijit S.; Mann, Jakob; Kelly, Mark C.

    2017-01-01

    A spectral tensor model is presented for turbulent fluctuations of wind velocity components and temperature, assuming uniform vertical gradients in mean temperature and mean wind speed. The model is built upon rapid distortion theory (RDT) following studies by Mann and by Hanazaki and Hunt, using the eddy lifetime parameterization of Mann to make the model stationary. The buoyant spectral tensor model is driven via five parameters: the viscous dissipation rate epsilon, length scale of energy-containing eddies L, a turbulence anisotropy parameter Gamma, gradient Richardson number (Ri) representing...

  18. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

    In this paper, a general form of the bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; it depends on the failure probability, 1-R. This representation captures the memory characteristics of the second failure cause. Man-machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one as a consequence of inappropriate human habits in system operation; it is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); thus the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used to validate the proposed model. Satisfactory results are found from the comparisons.
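The additive structure of h(R) described above can be sketched as follows; the functional form of each term is an assumption chosen for illustration (the paper fits its own forms), but the four contributions combine into a bathtub shape:

```python
def hazard(R, c_rand=0.02, c_dmg=0.5, c_mmi=0.1, c_adapt=0.3, m=4.0):
    """Illustrative additive hazard rate expressed in terms of reliability R:

    h(R) = c_rand                (random failures: constant, Poisson)
         + c_dmg   * (1 - R)     (cumulative damage: grows with failure prob.)
         + c_mmi   * R * (1 - R) (man-machine interference: assumed form in R and 1-R)
         + c_adapt * R**m        (adaptation / early life: a power of reliability)

    The individual term shapes are assumptions for illustration,
    not the expressions fitted in the paper.
    """
    return c_rand + c_dmg * (1 - R) + c_mmi * R * (1 - R) + c_adapt * R**m

# early life (R near 1): adaptation dominates; late life (R near 0): damage dominates;
# in between the hazard dips, giving the bathtub shape
h_new, h_mid, h_old = hazard(0.999), hazard(0.6), hazard(0.05)
```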

  19. Effects of Adaptation on Discrimination of Whisker Deflection Velocity and Angular Direction in a Model of the Barrel Cortex

    Directory of Open Access Journals (Sweden)

    Mainak J. Patel

    2018-06-01

    Full Text Available Two important stimulus features represented within the rodent barrel cortex are the velocity and angular direction of whisker deflection. Each cortical barrel receives information from thalamocortical (TC) cells that relay information from a single whisker, and TC input is decoded by barrel regular-spiking (RS) cells through a feedforward inhibitory architecture (with inhibition delivered by cortical fast-spiking or FS cells). TC cells encode deflection velocity through population synchrony, while deflection direction is encoded through the distribution of spike counts across the TC population. Barrel RS cells encode both deflection direction and velocity with spike rate, and are divided into functional domains by direction preference. Following repetitive whisker stimulation, system adaptation causes a weakening of synaptic inputs to RS cells and diminishes RS cell spike responses, though evidence suggests that stimulus discrimination may improve following adaptation. In this work, I construct a model of the TC, FS, and RS cells comprising a single barrel system—the model incorporates realistic synaptic connectivity and dynamics and simulates both the angular direction (through the spatial pattern of TC activation) and the velocity (through synchrony of the TC population spikes) of a deflection of the primary whisker, and I use the model to examine direction and velocity selectivity of barrel RS cells before and after adaptation. I find that velocity and direction selectivity of individual RS cells (measured over multiple trials) sharpens following adaptation, but stimulus discrimination using a simple linear classifier applied to the RS population response during a single trial (a more biologically meaningful measure than single-cell discrimination over multiple trials) exhibits strikingly different behavior—velocity discrimination is similar both before and after adaptation, while direction classification improves substantially following adaptation. This is the
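The single-trial population readout mentioned above can be illustrated with a toy decoder: synthetic cosine-tuned spike counts (an assumed stand-in for the model's RS population, not the paper's simulation) classified by nearest centroid, which is a simple linear classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
n_dirs, n_cells, n_trials = 8, 40, 50

# assumed tuning: each cell prefers one of the directions, with a cosine-shaped
# mean spike count (illustrative values, not the barrel model's actual rates)
prefs = rng.uniform(0, 2 * np.pi, n_cells)
dirs = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
mean_rate = 5.0 + 4.0 * np.cos(dirs[:, None] - prefs[None, :])   # (dirs, cells)

# single-trial population responses are Poisson counts around the tuning curve
train = rng.poisson(mean_rate[:, None, :], size=(n_dirs, n_trials, n_cells))
test = rng.poisson(mean_rate[:, None, :], size=(n_dirs, n_trials, n_cells))

# nearest-centroid (linear) readout of direction from one trial's population response
centroids = train.mean(axis=1)                                   # (dirs, cells)
dists = ((test[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1)
pred = dists.argmin(axis=2)                                      # (dirs, trials)
accuracy = (pred == np.arange(n_dirs)[:, None]).mean()           # vs. chance 1/8
```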

  20. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
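The ARA class mentioned above can be sketched for the ARA1 case: in the Doyen-Gaudoin formulation, a repair at time t_last rewinds the effective age used in the Power Law Process intensity to t - theta * t_last, so theta = 0 is minimal repair (as bad as old) and theta = 1 is perfect repair (as good as new). The parameter values below are illustrative, not the paper's estimates:

```python
def plp_intensity(t, beta, eta):
    """Power Law Process (NHPP) baseline intensity:
    lambda0(t) = (beta / eta) * (t / eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def ara1_intensity(t, failure_times, theta, beta, eta):
    """ARA1 imperfect-repair intensity (Doyen-Gaudoin form, assumed here):
    the last repair rewinds the effective age to t - theta * t_last."""
    t_last = max([ft for ft in failure_times if ft <= t], default=0.0)
    return plp_intensity(t - theta * t_last, beta, eta)

# illustrative failure history and parameters (beta > 1 means a degrading system)
failures = [100.0, 250.0, 420.0]
lam_abao = ara1_intensity(430.0, failures, theta=0.0, beta=2.2, eta=150.0)  # minimal repair
lam_agan = ara1_intensity(430.0, failures, theta=1.0, beta=2.2, eta=150.0)  # perfect repair
```

A perfect repair at t = 420 leaves only 10 time units of effective age, so its intensity is far below the minimal-repair case.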

  1. Reliability model for helicopter main gearbox lubrication system using influence diagrams

    International Nuclear Information System (INIS)

    Rashid, H.S.J.; Place, C.S.; Mba, D.; Keong, R.L.C.; Healey, A.; Kleine-Beek, W.; Romano, M.

    2015-01-01

    The loss of oil from a helicopter main gearbox (MGB) leads to increased friction between components, a rise in component surface temperatures, and subsequent mechanical failure of gearbox components. A number of significant helicopter accidents have been caused by such loss of lubrication. This paper presents a model to assess the reliability of helicopter MGB lubricating systems. Safety risk modeling was conducted for MGB oil system related accidents in order to analyse the key failure mechanisms and contributory factors. Thus, the dominant failure modes for lubrication systems and the key contributing components were identified. The Influence Diagram (ID) approach was then employed to investigate reliability issues of the MGB lubrication systems at the level of primary causal factors, thus systematically investigating a complex context of events, conditions, and influences that are direct triggers of helicopter MGB lubrication system failures. The interrelationships between MGB lubrication system failure types were thus identified, and the influence of each of these factors on the overall MGB lubrication system reliability was assessed. This paper highlights parts of the HELMGOP project, sponsored by the European Aviation Safety Agency to improve helicopter main gearbox reliability. - Highlights: • We investigated methods to optimize helicopter MGB oil system run-dry capability. • Used Influence Diagram to assess design and maintenance factors of MGB oil system. • Factors influencing overall MGB lubrication system reliability were identified. • This globally influences current and future helicopter MGB designs

  2. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    Science.gov (United States)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

    The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additives of up to 6% of the mass of fuel. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase. Dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the composition of the mixture and the operating modes are presented. The article also shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  3. Study of the velocity distribution influence upon the pressure pulsations in draft tube model of hydro-turbine

    Science.gov (United States)

    Sonin, V.; Ustimenko, A.; Kuibin, P.; Litvinov, I.; Shtork, S.

    2016-11-01

    One of the mechanisms generating powerful pressure pulsations in the circuit of a turbine is the precessing vortex core formed behind the runner at operating points with partial or forced loads, when the flow has significant residual swirl. To study periodic pressure pulsations behind the runner, the authors of this paper use approaches of experimental modeling and methods of computational fluid dynamics. The influence of velocity distributions at the outlet of the hydro turbine runner on pressure pulsations was studied based on an analysis of the existing and possible velocity distributions in hydraulic turbines and a selection of distributions over an extended range. Preliminary numerical calculations have shown that the velocity distribution can be modeled without reproducing the entire geometry of the circuit, using a combination of two blade cascades for the rotor and stator. Experimental verification of the numerical results was carried out in an air bench, using 3D printing to fabricate the blade cascades and the draft tube geometry of the hydraulic turbine. Measurements of the velocity field at the inlet to the draft tube cone and registration of pressure pulsations due to the precessing vortex core have allowed building correlations between the character of the velocity distribution and the amplitude-frequency characteristics of the pulsations.

  4. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics
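The spectral decomposition mentioned above can be reproduced numerically for any model kernel: discretize R(y, y') on a grid and take the eigendecomposition of the quadrature-weighted matrix (the discrete analogue of proper orthogonal decomposition). The exponential kernel below is an assumed placeholder, not the paper's channel-flow expression:

```python
import numpy as np

# Discretize a model two-point correlation kernel R(y, y') = exp(-|y - y'| / L)
# on a grid and compute eigenvalues/eigenfunctions of the integral operator.
n, L = 128, 0.2
y = np.linspace(0.0, 1.0, n)
dy = y[1] - y[0]
K = np.exp(-np.abs(y[:, None] - y[None, :]) / L)

# weight by dy so eigenvalues approximate those of the continuous operator;
# K is symmetric, so eigh applies
evals, evecs = np.linalg.eigh(K * dy)
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort by decreasing eigenvalue
```

A valid correlation kernel must yield non-negative eigenvalues, and their sum equals the trace (here the integrated variance), which is a quick consistency check on any proposed model kernel.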

  5. Measurement of Two-Dimensional Bubble Velocity by Using a Tri-Fiber-Optical Probe

    International Nuclear Information System (INIS)

    Yang Ruichang; Zheng Rongchuan; Zhou Fanling; Liu Ruolei

    2009-01-01

    In this study, an advanced measuring system with a tri-single-fiber-optical probe has been developed to measure two-dimensional vapor/gas bubble velocity. The use of beam-splitting devices instead of beam-splitting lenses simplifies the optical system, so the system becomes more compact and economical, and easier to adjust. To accompany the triple-optical probe for measuring two-dimensional bubble velocity, a data processing method has been developed, including the processing of bubble signals, the cancellation of unrelated signals, and the determination of bubble velocity with the cross-correlation technique. Using the developed two-dimensional bubble velocity measuring method, the rising velocity of air bubbles in a gravitational field was measured. The measured bubble velocities were compared with the available empirical correlation; the deviation was in the range of ±30%. The bubble diameter obtained by data processing is in good accordance with that observed with a synchroscope and a camera. This shows that the method developed here is reliable.
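The cross-correlation step described above can be sketched as follows: the lag that maximizes the correlation between the two probe signals gives the transit time, and the velocity is the probe spacing divided by that transit time. The signal shape, sampling rate, and spacing below are made-up values:

```python
import numpy as np

def bubble_velocity(sig_a, sig_b, dt, probe_spacing):
    """Estimate bubble velocity from two probe signals by cross-correlation:
    the lag that maximizes the correlation is the transit time between probes."""
    sig_a = sig_a - sig_a.mean()
    sig_b = sig_b - sig_b.mean()
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)     # samples by which b trails a
    transit_time = lag * dt
    return probe_spacing / transit_time

# synthetic check: a Gaussian pulse passes probe A, then probe B 5 samples later
dt, spacing = 1e-4, 2e-3                         # 100 us sampling, 2 mm probe spacing
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 80) / 4.0) ** 2)
sig_a = pulse
sig_b = np.roll(pulse, 5)                        # arrives 5 samples later
v = bubble_velocity(sig_a, sig_b, dt, spacing)   # spacing / (5 * dt) = 4 m/s
```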

  6. Reliability and Concurrent Validity of the Narrow Path Walking Test in Persons With Multiple Sclerosis.

    Science.gov (United States)

    Rosenblum, Uri; Melzer, Itshak

    2017-01-01

    About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. The study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC2,1) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from the standard error of measurement (SEM) and smallest real difference (SRD). Concurrent validity of the NPWT with the Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. Intraclass correlation coefficients (ICCs) for most NPWT parameters during ST and DT ranged from 0.46-0.94 and 0.55-0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93 for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86 for ST and DT, respectively). The SEM% was high for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and low for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations for Number of Step Errors and Trial Velocity were found with the FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters. Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http
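The reliability statistics used in this record can be computed as in the following sketch: the ICC(2,1) formula follows Shrout and Fleiss, SEM = SD * sqrt(1 - ICC), and SRD = 1.96 * sqrt(2) * SEM. The data here are synthetic, and using the pooled SD of all scores for the SEM is one common convention among several:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` is an (n subjects x k sessions/raters) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_srd(scores, icc):
    """SEM = SD * sqrt(1 - ICC); SRD (a.k.a. MDC95) = 1.96 * sqrt(2) * SEM."""
    sd = np.asarray(scores, dtype=float).std(ddof=1)   # pooled SD of all scores
    sem = sd * np.sqrt(1.0 - icc)
    return sem, 1.96 * np.sqrt(2.0) * sem

# synthetic test-retest data: 30 subjects, 2 sessions, small measurement error
rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, size=30)
sessions = np.column_stack([true_score + rng.normal(0, 2, 30) for _ in range(2)])
icc = icc_2_1(sessions)
sem, srd = sem_srd(sessions, icc)
```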

  7. Structural reliability analysis under evidence theory using the active learning kriging model

    Science.gov (United States)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
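Stripped of the kriging surrogate, the interval Monte Carlo idea used above reduces to bounding the failure probability over an epistemic interval. The sketch below brute-forces this for an assumed performance function that is monotone in the interval variable; the paper's contribution is avoiding exactly this brute force by learning only the sign of g:

```python
import numpy as np

def g(x, d):
    """Performance function: failure when g < 0. x is aleatory (random);
    d is epistemic, known only to lie in an interval (assumed example)."""
    return d - x

def interval_mc_bounds(d_lo, d_hi, n=100_000, seed=0):
    """Interval Monte Carlo: for each random sample, check failure at the
    epistemic extremes. Samples that fail for *every* d in the interval give
    the lower bound on Pf; samples that fail for *some* d give the upper bound.
    Valid here because g is monotone in d, so the interval endpoints suffice."""
    x = np.random.default_rng(seed).normal(3.0, 1.0, n)   # assumed aleatory model
    fail_all = g(x, d_hi) < 0       # fails even for the most favorable d
    fail_some = g(x, d_lo) < 0      # fails for the least favorable d
    return fail_all.mean(), fail_some.mean()

pf_lo, pf_hi = interval_mc_bounds(4.0, 5.0)   # bounds on the failure probability
```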

  8. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model involves an even worse combinatorial explosion of node states than the calculation of WSNs on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our construction of OBDD_Multicast avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.
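To see why multicast reliability explodes combinatorially, here is the brute-force (non-OBDD) baseline: enumerate all 2^|E| edge states and check that the source reaches every sink. The diamond network is an assumed toy example, not from the paper:

```python
from itertools import product

def multicast_reliability(edges, source, sinks, p):
    """Exact multicast reliability of a small network by state enumeration:
    the probability that `source` can reach *all* `sinks` when each undirected
    edge independently works with probability p. Exponential in |edges| --
    exactly the explosion that motivates the OBDD-based approach."""
    rel = 0.0
    for state in product([True, False], repeat=len(edges)):
        prob = 1.0
        alive = []
        for works, e in zip(state, edges):
            prob *= p if works else (1 - p)
            if works:
                alive.append(e)
        # BFS from source over the working edges
        reached, frontier = {source}, [source]
        while frontier:
            u = frontier.pop()
            for a, b in alive:
                for v, w in ((a, b), (b, a)):
                    if v == u and w not in reached:
                        reached.add(w)
                        frontier.append(w)
        if set(sinks) <= reached:
            rel += prob
    return rel

# 4-node diamond: s connects to a and b, both connect to t; multicast to {a, t}
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
R = multicast_reliability(edges, "s", ["a", "t"], 0.9)
```

By hand: if edge s-a works (0.9), t must be reached via a-t or s-b-t, giving 0.9 * 0.981; otherwise all of s-b, b-t, t-a must work, giving 0.1 * 0.729; the total is 0.9558.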

  9. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  10. Reliability and continuous regeneration model

    Directory of Open Access Journals (Sweden)

    Anna Pavlisková

    2006-06-01

    Full Text Available The failure-free functioning of an object is very important for service. This leads to interest in determining the object's reliability and failure intensity. The reliability of an element is defined by the theory of probability. The element durability T is a continuous random variable with the probability density f. The failure intensity λ(t) is a very important reliability characteristic of the element. Often it is an increasing function, which corresponds to the ageing of the element. We had at our disposal data about belt conveyor failures recorded during a period of 90 months. The given data set follows the normal distribution. By using mathematical analysis and mathematical statistics, we found the failure intensity function λ(t). The function λ(t) increases almost linearly.
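The failure intensity named above is λ(t) = f(t)/R(t), where R(t) = P(T > t); for a normal lifetime distribution it can be evaluated directly, and it is indeed increasing (asymptotically close to linear, consistent with the finding above). The mean and standard deviation below are illustrative, not the belt-conveyor estimates:

```python
import math

def normal_hazard(t, mu, sigma):
    """Failure intensity lambda(t) = f(t) / R(t) for a normal lifetime model."""
    z = (t - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
    rel = 0.5 * math.erfc(z / math.sqrt(2))   # survival function R(t) = P(T > t)
    return pdf / rel

# assumed lifetime distribution, e.g. in months
mu, sigma = 60.0, 10.0
lams = [normal_hazard(t, mu, sigma) for t in range(40, 90, 5)]
```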

  11. Hindrance Velocity Model for Phase Segregation in Suspensions of Poly-dispersed Randomly Oriented Spheroids

    Science.gov (United States)

    Faroughi, S. A.; Huber, C.

    2015-12-01

    Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets and suspensions of spherical solid particles in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle, to account for both return flow and Smoluchowski effects, and a correction on the mixture rheology to account for nonlocal interactions between particles. The model is then extended to mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particle sizes. Our model is tested against new and published experimental data over a wide range of particle volume fractions and viscosity ratios between the constituents of dispersions. We find excellent agreement between our model and the experiments. We also show two significant applications of our model: (1) We demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers. (2) We provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils.
Our model offers a greatly improved agreement with
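A minimal sketch of hindered settling, using a Richardson-Zaki-type correction (1 - phi)^n as a stand-in for the paper's combined return-flow, Smoluchowski, and rheology corrections; the melt and crystal properties are illustrative values:

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity of a single sphere in creeping flow (Stokes):
    U0 = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

def hindered_velocity(d, rho_p, rho_f, mu, phi, n=5.1):
    """Hindered settling: U = U0 * (1 - phi)**n. The Richardson-Zaki exponent
    n ~ 5.1 for creeping flow is used here as an assumed stand-in for the
    paper's hindrance function, not its actual expression."""
    return stokes_velocity(d, rho_p, rho_f, mu) * (1.0 - phi) ** n

# 1 mm crystal in a basaltic melt (illustrative property values: densities in
# kg/m^3, melt viscosity in Pa.s)
u0 = stokes_velocity(1e-3, 3300.0, 2700.0, 100.0)          # dilute limit
u40 = hindered_velocity(1e-3, 3300.0, 2700.0, 100.0, 0.4)  # 40 vol% crystals
```

At 40 vol% crystals this hindrance factor alone slows settling by more than a factor of ten, which is the sense in which hindered settling can lengthen mineral residence times by an order of magnitude.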

  12. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with a Markovian environment. We set up the associated reliability framework by considering the structure to have failed once the degradation process reaches a critical threshold. A closed-form solution of the reliability function is obtained thanks to Markov renewal theory. Then, we build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)
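The reliability notion used above, R(t) = P(the degradation process stays below the critical threshold up to time t), can also be estimated by simulation when no closed form is at hand. The sketch below uses a two-state Markov environment that switches the degradation rate; all rates, probabilities, and the threshold are assumed values, not the paper's model:

```python
import random

def simulate_reliability(threshold, t_max, dt=0.1, n_paths=2000, seed=42):
    """Monte Carlo estimate of R(t) = P(degradation < threshold up to t) for a
    degradation process whose growth rate switches with a two-state Markov
    environment (slow/fast rates are illustrative values)."""
    rng = random.Random(seed)
    rates = {0: 0.5, 1: 2.0}    # degradation rate in each environment state
    switch_p = 0.05             # per-step probability of an environment switch
    survived = 0
    for _ in range(n_paths):
        x, state, t = 0.0, 0, 0.0
        failed = False
        while t < t_max:
            if rng.random() < switch_p:
                state = 1 - state
            x += rates[state] * dt      # degradation accumulates monotonically
            t += dt
            if x >= threshold:          # first passage of the critical threshold
                failed = True
                break
        survived += not failed
    return survived / n_paths

R10 = simulate_reliability(threshold=15.0, t_max=10.0)
R20 = simulate_reliability(threshold=15.0, t_max=20.0)   # reliability decreases in t
```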

  13. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, where corrosion rate is represented as a Poisson square wave process. The resulting model represents inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing problem physics
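The Poisson square wave idea can be sketched directly: the corrosion rate holds a value over an exponentially distributed dwell time, then jumps to a fresh random value, and the depth is the time integral of the rate, hence continuous growth. The parameters below are illustrative, and the time to corrosion initiation is treated as a fixed constant rather than random:

```python
import random

def corrosion_depth(t_end, rate_mean=0.1, pulse_rate=0.5, t_init=2.0, seed=0):
    """One corrosion-depth history under a Poisson square wave rate model:
    after an initiation time, the rate jumps to a new exponentially distributed
    value at Poisson epochs (all parameter values are illustrative)."""
    rng = random.Random(seed)
    t, depth = t_init, 0.0
    rate = rng.expovariate(1.0 / rate_mean)           # initial corrosion rate
    while t < t_end:
        dwell = rng.expovariate(pulse_rate)           # time to the next rate jump
        step = min(dwell, t_end - t)
        depth += rate * step                          # growth is continuous
        t += step
        rate = rng.expovariate(1.0 / rate_mean)       # fresh Poisson-pulse rate
    return depth

# before initiation there is no corrosion; afterwards mean depth grows
# roughly linearly at rate_mean per unit time
depths = [corrosion_depth(30.0, seed=s) for s in range(500)]
mean_depth = sum(depths) / len(depths)
```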

  14. Testing the reliability of ice-cream cone model

    Science.gov (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui

    2015-04-01

    The properties of Coronal Mass Ejections (CMEs) are important not only for the physics itself but also for space-weather prediction. Several models (such as the cone model, the GCS model, and so on) have been proposed to remove the projection effects from the properties observed by spacecraft. According to SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo Coronal Mass Ejections) within the 24th solar cycle up to July 2012, using the ice-cream cone model. Considering that a method obtaining 3D parameters from multi-satellite, multi-angle CME observations has higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those given by the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.

  15. A review of the progress with statistical models of passive component reliability

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, Bengt O. Y. [Sigma-Phase Inc., Vail (United States)]

    2017-03-15

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  16. A Review of the Progress with Statistical Models of Passive Component Reliability

    Directory of Open Access Journals (Sweden)

    Bengt O.Y. Lydell

    2017-03-01

    Full Text Available During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards the establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well, producing piping reliability analysis frameworks that combine the full body of service experience data, fracture mechanics analysis insights, and expert elicitation results into an integrated and risk-informed approach to the estimation of piping reliability parameters, with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion-resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  17. A review of the progress with statistical models of passive component reliability

    International Nuclear Information System (INIS)

    Lydell, Bengt O. Y.

    2017-01-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards the establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well, producing piping reliability analysis frameworks that combine the full body of service experience data, fracture mechanics analysis insights, and expert elicitation results into an integrated and risk-informed approach to the estimation of piping reliability parameters, with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion-resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  18. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry, and imperfect maintenance conforms better to the actual failure process than the assumption of perfect repair. Taking the dynamic preventive maintenance cost into account, a preventive maintenance model was constructed using an age reduction factor. The model takes minimization of the repair cost rate as its objective and uses the smallest allowed reliability as the replacement condition. Equipment life was assumed to follow a two-parameter Weibull distribution, one of the most commonly adopted distributions for fitting cumulative failure data. Finally, an example verifies the rationality and benefits of the model.
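A rough sketch of the ingredients described above, not the paper's exact formulation: a two-parameter Weibull reliability function, a replacement age triggered by the smallest allowed reliability, and an age reduction factor for imperfect maintenance. All numeric values are hypothetical.

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability: R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def pm_interval(r_min, beta, eta):
    """Operating age at which reliability falls to the allowed minimum r_min,
    i.e. the solution of R(t) = r_min for t."""
    return eta * (-math.log(r_min)) ** (1.0 / beta)

def effective_age_after_pm(age, b):
    """Imperfect maintenance via an age reduction factor b in [0, 1]:
    b = 0 restores the unit to as-good-as-new, b = 1 leaves it as-bad-as-old."""
    return b * age

# Hypothetical unit: wear-out shape beta = 2, scale eta = 1000 h,
# preventive action triggered before reliability drops below 0.9.
t_pm = pm_interval(0.9, beta=2.0, eta=1000.0)
age_after = effective_age_after_pm(t_pm, 0.3)  # PM removes 70% of accrued age
```

In the full model the repair cost rate over a replacement cycle would be minimized over the PM schedule; the functions above only illustrate the reliability-threshold and age-reduction mechanics.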

  19. Minimum 1D P wave velocity model for the Cordillera Volcanica de Guanacaste, Costa Rica

    International Nuclear Information System (INIS)

    Araya, Maria C.; Linkimer, Lepolt; Taylor, Waldo

    2016-01-01

    A minimum 1D velocity model is derived for the Cordillera Volcanica de Guanacaste from 475 local earthquakes registered by the Observatorio Vulcanologico y Sismologico Arenal Miravalles (OSIVAM) between January 2006 and July 2014. The model consists of six layers from the surface down to 80 km depth, with velocities varying between 3.96 and 7.79 km/s. The station corrections vary between -0.28 and 0.45 and show a trend of positive values on the volcanic arc and negative values on the forearc, in agreement with the crustal thickness. The relocated earthquakes form three main groups of epicenters that could be associated with activity on inferred faults. The minimum 1D velocity model provides a simplified picture of the crustal structure and aims to improve the routine earthquake locations performed by OSIVAM. (author) [es

  20. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  1. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  2. Southern high-velocity stars

    International Nuclear Information System (INIS)

    Augensen, H.J.; Buscombe, W.

    1978-01-01

    Using the model of the Galaxy presented by Eggen, Lynden-Bell and Sandage (1962), plane galactic orbits have been calculated for 800 southern high-velocity stars which possess parallax, proper motion, and radial velocity data. The stars with trigonometric parallaxes were selected from Buscombe and Morris (1958), supplemented by more recent spectroscopic data. Photometric parallaxes from infrared color indices were used for bright red giants studied by Eggen (1970), and for red dwarfs for which Rodgers and Eggen (1974) determined radial velocities. A color-color diagram based on published values of (U-B) and (B-V) for most of these stars is shown. (Auth.)

  3. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which makes determining the zero-velocity interval play a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of an analysis of the characteristics of the pedestrian walk, the output of a single-axis angular rate gyro is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and the sliding-window Viterbi algorithm is then used to decode the gait. Walking data are collected from eight subjects walking along the same route at three different speeds, and the leave-one-subject-out cross-validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect the zero-velocity interval across different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% compared with the angular rate threshold method.
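The decoding step of such a left-right HMM can be sketched with a plain Viterbi pass over one angular-rate channel. The state labels, transition matrix, and Gaussian emission parameters below are illustrative placeholders, not the trained values from the paper.

```python
import math

STATES = 4  # e.g. stance, push-off, swing, heel-strike (illustrative labels)

# Left-right transitions: each state stays or advances; state 3 wraps to 0
# so the gait cycle can repeat.
A = [[0.8, 0.2, 0.0, 0.0],
     [0.0, 0.8, 0.2, 0.0],
     [0.0, 0.0, 0.8, 0.2],
     [0.2, 0.0, 0.0, 0.8]]

# Univariate Gaussian emission per state: (mean angular rate, std), invented.
EMIT = [(0.0, 0.3), (2.0, 0.5), (-2.0, 0.5), (0.5, 0.4)]

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi(obs, pi=(1.0, 0.0, 0.0, 0.0)):
    """Most likely state sequence for a 1-D observation sequence (log space)."""
    V = [[(math.log(pi[s]) if pi[s] > 0 else -math.inf) + log_gauss(obs[0], *EMIT[s])
          for s in range(STATES)]]
    back = []
    for x in obs[1:]:
        row, ptr = [], []
        for s in range(STATES):
            best_prev = max(
                range(STATES),
                key=lambda p: V[-1][p] + (math.log(A[p][s]) if A[p][s] > 0 else -math.inf))
            row.append(V[-1][best_prev]
                       + (math.log(A[best_prev][s]) if A[best_prev][s] > 0 else -math.inf)
                       + log_gauss(x, *EMIT[s]))
            ptr.append(best_prev)
        V.append(row)
        back.append(ptr)
    path = [max(range(STATES), key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

A near-zero angular rate keeps the decoder in the first (zero-velocity-like) state, and a jump in the signal moves it forward through the cycle; the real system would first fit the mixture and transition parameters with Baum-Welch.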

  4. Do Assimilated Drifter Velocities Improve Lagrangian Predictability in an Operational Ocean Model?

    Science.gov (United States)

    2015-05-01

    extended Kalman filter. Molcard et al. (2005) used a statistical method to correlate model and drifter velocities. Taillandier et al. (2006) describe the... temperature and salinity observations. Trajectory angular differences are also reduced. 1. Introduction The importance of Lagrangian forecasts was seen... Temperature, salinity, and sea surface height (SSH, measured along-track by satellite altimeters) observations are typically assimilated in

  5. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using digital characteristics such as expected value, entropy and hyper-entropy. A cloud model adapted to reliability evaluation is put forward for the surface-to-air missile weapon. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. A practical calculation shows that analyzing the reliability of the surface-to-air missile weapon in this way is more effective, and that the model expressed by cloud theory is more consistent with the human style of reasoning under uncertainty.
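The forward normal cloud generator that such evaluations typically rest on can be sketched as follows, with Ex, En and He denoting the expected value, entropy and hyper-entropy; the numeric grade below is invented for illustration.

```python
import math
import random

def cloud_drops(ex, en, he, n, seed=42):
    """Forward normal cloud generator: for each drop, sample a perturbed
    entropy En' ~ N(En, He), then a drop x ~ N(Ex, |En'|), and assign the
    certainty degree mu = exp(-(x - Ex)^2 / (2 * En'^2))."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_p = rng.gauss(en, he)
        x = rng.gauss(ex, abs(en_p))
        mu = math.exp(-((x - ex) ** 2) / (2.0 * en_p ** 2)) if en_p else 1.0
        drops.append((x, mu))
    return drops

# Hypothetical reliability grade centred on 0.8 with small dispersion.
drops = cloud_drops(ex=0.8, en=0.05, he=0.005, n=2000)
mean_x = sum(x for x, _ in drops) / len(drops)
```

The hyper-entropy He controls how much the drop spread itself fluctuates, which is how the model carries both fuzziness and randomness in one set of digital characteristics.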

  6. Modeling skin temperature to assess the effect of air velocity to mitigate heat stress among growing pigs

    DEFF Research Database (Denmark)

    Bjerg, Bjarne; Pedersen, Poul; Morsing, Svend

    2017-01-01

    It is generally accepted that increased air velocity can help to mitigate heat stress in livestock housing; however, it is not fully clear how much it helps, and significant uncertainties exist when the air temperature approaches the animal body temperature. This study aims to develop a skin temperature model to generate data for determining the potential effect of air velocity in mitigating heat stress among growing pigs housed in a warm environment. The model calculates the skin temperature as a function of body temperature, air temperature and the resistances for heat transfer from the body...
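A minimal steady-state version of the resistance network the abstract describes treats the skin as a node between a tissue resistance (core to skin) and a surface resistance (skin to air, which falls as air velocity rises). The resistance values below are invented for illustration, not taken from the study.

```python
def skin_temperature(t_body, t_air, r_tissue, r_air):
    """Heat balance at the skin node:
    (t_body - t_skin)/r_tissue = (t_skin - t_air)/r_air, solved for t_skin.
    Temperatures in degC, resistances in K.m^2/W."""
    return (t_body / r_tissue + t_air / r_air) / (1.0 / r_tissue + 1.0 / r_air)

# Higher air velocity -> lower r_air -> skin temperature pulled toward air.
t_still = skin_temperature(39.0, 25.0, r_tissue=0.10, r_air=0.10)
t_windy = skin_temperature(39.0, 25.0, r_tissue=0.10, r_air=0.02)
```

With equal resistances the skin sits midway between core and air temperature; shrinking the surface resistance drags it toward the air temperature, which is why the benefit of air velocity vanishes as air temperature approaches body temperature.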

  7. Background velocity inversion by phase along reflection wave paths

    KAUST Repository

    Yu, Han; Guo, Bowen; Schuster, Gerard T.

    2014-01-01

    A background velocity model containing the correct low-wavenumber information is desired for both the quality of the migration image and the success of waveform inversion. We propose to invert for the low-wavenumber part of the velocity model by minimizing the phase difference between predicted and observed reflections. The velocity update is exclusively along the reflection wavepaths and, unlike conventional FWI, not along the reflection ellipses. This allows for reconstructing the smoothly varying parts of the background velocity model. Tests with synthetic data show both the benefits and limitations of this method.

  8. Background velocity inversion by phase along reflection wave paths

    KAUST Repository

    Yu, Han

    2014-08-05

    A background velocity model containing the correct low-wavenumber information is desired for both the quality of the migration image and the success of waveform inversion. We propose to invert for the low-wavenumber part of the velocity model by minimizing the phase difference between predicted and observed reflections. The velocity update is exclusively along the reflection wavepaths and, unlike conventional FWI, not along the reflection ellipses. This allows for reconstructing the smoothly varying parts of the background velocity model. Tests with synthetic data show both the benefits and limitations of this method.

  9. A GIS-based Computational Tool for Multidimensional Flow Velocity by Acoustic Doppler Current Profilers

    International Nuclear Information System (INIS)

    Kim, D; Winkler, M; Muste, M

    2015-01-01

    Acoustic Doppler Current Profilers (ADCPs) provide efficient and reliable flow measurements compared to other tools for characterizing riverine environments. In addition to the originally targeted discharge measurements, ADCPs are increasingly utilized to assess river flow characteristics. The newly developed VMS (Velocity Mapping Software) aims at providing an efficient process for quality assurance, mapping velocity vectors for visualization, and facilitating comparison with physical and numerical model results. VMS was designed to provide efficient and smooth work flows for processing groups of transects. The software allows the user to select a group of files and subsequently to conduct statistical and graphical quality assurance on the files as a group or individually, as appropriate. VMS also enables spatial averaging in the horizontal and vertical planes for ADCP data in a single transect or multiple transects over the same or consecutive cross sections. The analysis results are displayed in numerical and graphical formats. (paper)
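The spatial-averaging step can be pictured as binning velocity samples by along-channel distance and averaging the components per bin. This is a generic sketch, not VMS's actual implementation, and the sample tuples are invented.

```python
def bin_average(samples, bin_size_m):
    """Average east/north velocity components within distance bins.
    samples: iterable of (distance_m, v_east, v_north) tuples.
    Returns {bin_index: (mean_v_east, mean_v_north)}."""
    bins = {}
    for dist, ve, vn in samples:
        bins.setdefault(int(dist // bin_size_m), []).append((ve, vn))
    return {k: (sum(v[0] for v in vals) / len(vals),
                sum(v[1] for v in vals) / len(vals))
            for k, vals in sorted(bins.items())}

averaged = bin_average([(0.5, 1.0, 0.0), (0.9, 3.0, 0.0), (1.5, 2.0, 2.0)], 1.0)
```

Averaging per component before recombining preserves direction information, which is lost if speeds are averaged directly.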

  10. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
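The reliability criterion reduces to a simple per-cell computation: take the coefficient of variation of ΔCFS across the model ensemble and keep only cells with CV ≤ 0.5 (mean at least twice the standard deviation). A minimal stand-in for the 2500-member ensemble, with two invented cells:

```python
import statistics

def reliable_delta_cfs(cell_ensembles, cv_max=0.5):
    """cell_ensembles: per grid cell, the list of dCFS values computed from
    the candidate source models. Returns the ensemble mean where
    |std/mean| <= cv_max, else None (unreliable cell)."""
    out = []
    for values in cell_ensembles:
        mean = sum(values) / len(values)
        std = statistics.pstdev(values)
        if mean != 0.0 and abs(std / mean) <= cv_max:
            out.append(mean)
        else:
            out.append(None)
    return out

# Two toy cells: a stable positive lobe, and a sign-flipping near-fault cell.
cells = reliable_delta_cfs([[1.0, 1.1, 0.9], [0.1, -0.1, 0.05]])
```

Cells whose ensemble straddles zero fail the criterion automatically, matching the paper's observation that the unreliable regions sit between positive and negative lobes.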

  11. Reliability of Single-Leg Balance and Landing Tests in Rugby Union; Prospect of Using Postural Control to Monitor Fatigue.

    Science.gov (United States)

    Troester, Jordan C; Jasmin, Jason G; Duffield, Rob

    2018-06-01

    The present study examined the inter-trial (within test) and inter-test (between test) reliability of single-leg balance and single-leg landing measures performed on a force plate in professional rugby union players using commercially available software (SpartaMARS, Menlo Park, USA). Twenty-four players undertook test-retest measures on two occasions (7 days apart) on the first training day of two respective pre-season weeks following 48h rest and similar weekly training loads. Two 20s single-leg balance trials were performed on a force plate with eyes closed. Three single-leg landing trials were performed by jumping off two feet and landing on one foot in the middle of a force plate 1m from the starting position. Single-leg balance results demonstrated acceptable inter-trial reliability (ICC = 0.60-0.81, CV = 11-13%) for sway velocity, anterior-posterior sway velocity, and mediolateral sway velocity variables. Acceptable inter-test reliability (ICC = 0.61-0.89, CV = 7-13%) was evident for all variables except mediolateral sway velocity on the dominant leg (ICC = 0.41, CV = 15%). Single-leg landing results only demonstrated acceptable inter-trial reliability for force-based measures of relative peak landing force and impulse (ICC = 0.54-0.72, CV = 9-15%). Inter-test results indicate improved reliability through the averaging of three trials, with force-based measures again demonstrating acceptable reliability (ICC = 0.58-0.71, CV = 7-14%). Of the variables investigated here, total sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing performance, respectively. These measures should be considered for monitoring potential changes in postural control in professional rugby union.

  12. Reliability of Single-Leg Balance and Landing Tests in Rugby Union; Prospect of Using Postural Control to Monitor Fatigue

    Directory of Open Access Journals (Sweden)

    Jordan C. Troester, Jason G. Jasmin, Rob Duffield

    2018-06-01

    Full Text Available The present study examined the inter-trial (within test) and inter-test (between test) reliability of single-leg balance and single-leg landing measures performed on a force plate in professional rugby union players using commercially available software (SpartaMARS, Menlo Park, USA). Twenty-four players undertook test-retest measures on two occasions (7 days apart) on the first training day of two respective pre-season weeks following 48h rest and similar weekly training loads. Two 20s single-leg balance trials were performed on a force plate with eyes closed. Three single-leg landing trials were performed by jumping off two feet and landing on one foot in the middle of a force plate 1m from the starting position. Single-leg balance results demonstrated acceptable inter-trial reliability (ICC = 0.60-0.81, CV = 11-13%) for sway velocity, anterior-posterior sway velocity, and mediolateral sway velocity variables. Acceptable inter-test reliability (ICC = 0.61-0.89, CV = 7-13%) was evident for all variables except mediolateral sway velocity on the dominant leg (ICC = 0.41, CV = 15%). Single-leg landing results only demonstrated acceptable inter-trial reliability for force-based measures of relative peak landing force and impulse (ICC = 0.54-0.72, CV = 9-15%). Inter-test results indicate improved reliability through the averaging of three trials, with force-based measures again demonstrating acceptable reliability (ICC = 0.58-0.71, CV = 7-14%). Of the variables investigated here, total sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing performance, respectively. These measures should be considered for monitoring potential changes in postural control in professional rugby union.
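The CV figures quoted in these reliability studies are within-subject coefficients of variation; a minimal computation with made-up trial values:

```python
def cv_percent(trials):
    """Within-subject coefficient of variation: sample SD of one subject's
    repeated trials divided by their mean, expressed as a percentage."""
    n = len(trials)
    mean = sum(trials) / n
    sd = (sum((t - mean) ** 2 for t in trials) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

# Three hypothetical single-leg landing trials of relative peak force (N/kg).
cv = cv_percent([22.0, 24.0, 23.0])
```

A threshold such as CV ≤ 10-15% (as used above) would then be applied per variable, usually alongside an ICC computed across subjects.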

  13. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of the PETROBRAS 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by the PETROBRAS Gas and Power Department and Det Norske Veritas, with the objective of evaluating the security of supply of the 2010 gas network design conceived to connect the Brazilian Northeast and Southeast regions. To provide best-in-class analysis, state-of-the-art software was used to quantify the availability and efficiency of the overall network and its individual components.

  14. Ultrasonic transverse velocity calibration of standard blocks for use in non-destructive testing

    International Nuclear Information System (INIS)

    Silva, C E R; Braz, D S; Maggi, L E; Felix, R P B Costa

    2015-01-01

    Standard blocks are employed in the verification of the equipment used in ultrasound non-destructive testing. To assure the metrological reliability of the whole measurement process, it is necessary to calibrate or certify these standard blocks. In this work, the transverse wave velocity and main dimensions were assessed according to the ISO standard specifications. For the transverse wave velocity measurement, a 5 MHz transverse wave transducer, a waveform generator, an oscilloscope and a computer with a program developed in LabVIEW TM were used. Concerning the transverse wave velocity calibration, only two of the four standard blocks tested were in accordance with the standard.
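The measurement itself reduces to a pulse-echo time-of-flight calculation; a sketch with invented numbers (a 25 mm block and a transit time typical of shear waves in steel):

```python
def transverse_velocity(thickness_mm, echo_interval_us):
    """Pulse-echo estimate of transverse (shear) wave velocity: the pulse
    travels twice the block thickness between successive back-wall echoes.
    Returns m/s."""
    return 2.0 * (thickness_mm * 1e-3) / (echo_interval_us * 1e-6)

v = transverse_velocity(25.0, 15.4)  # ~3247 m/s, plausible for steel
```

Calibration then compares this measured velocity, and the block's dimensions, against the tolerances in the applicable ISO specification.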

  15. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method proves more efficient because it requires only a small number of sample points, as the comparison results show.
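A compact PSO of the kind used to tune Kriging correlation parameters can be sketched as follows; it is demonstrated here on a simple quadratic rather than a Kriging likelihood, and the coefficients w, c1, c2 are common textbook choices, not the paper's.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimizer: inertia + cognitive + social
    velocity update, positions clipped to the search bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso_minimize(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                            [(-5.0, 5.0), (-5.0, 5.0)])
```

In the hybrid scheme, f would instead be the Kriging model's fitting criterion over its hyperparameters, and the fitted surrogate would then drive the adaptive importance sampling.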

  16. A PRACTICAL APPROACH TO THE GROUND OSCILLATION VELOCITY MEASUREMENT METHOD

    Directory of Open Access Journals (Sweden)

    Siniša Stanković

    2017-01-01

    Full Text Available The use of an explosive's energy during blasting includes undesired effects on the environment. The seismic influence of a blast, the major undesired effect, is assessed by many national standards, recommendations and calculations in which the main parameter is the ground oscillation velocity at the measurement location. There are a few approaches and methods for calculating the expected ground oscillation velocity from the charge weight per delay and the distance from the blast to the point of interest. These methods and formulas do not provide satisfactory results, so the values measured at various distances from the blast field differ more or less from the values given by prior calculation. Since blasting works are executed in diverse geological conditions, the aim of this research is the development of a practical and reliable approach that yields a separate model for each construction site where blasting works have been or will be executed. The approach is based on a larger number of measuring points placed in a line from the blast field at predetermined distances. This new approach has been compared with other generally used methods and formulas using measurements taken during the research along with measurements from several previously executed projects. The results confirmed that the suggested model gives more accurate values.
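Site-specific attenuation laws of this kind are usually of the scaled-distance form v = K·(D/√Q)^(−b), fitted per site by log-log least squares. A sketch of the fit; the constants are illustrative, not from the paper.

```python
import math

def fit_ppv_law(shots):
    """Least-squares fit of log v = log K - b*log(SD), SD = D/sqrt(Q).
    shots: list of (distance_m, charge_kg_per_delay, ppv_mm_s). Returns (K, b)."""
    xs = [math.log(d / math.sqrt(q)) for d, q, _ in shots]
    ys = [math.log(v) for _, _, v in shots]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope
    return math.exp(my + b * mx), b

def predict_ppv(K, b, distance_m, charge_kg):
    """Peak particle velocity (mm/s) at a given distance and charge per delay."""
    return K * (distance_m / math.sqrt(charge_kg)) ** (-b)

# Round-trip check on synthetic measurements generated from known constants.
shots = [(d, q, predict_ppv(1140.0, 1.6, d, q))
         for d, q in [(100.0, 25.0), (200.0, 25.0), (300.0, 100.0)]]
K, b = fit_ppv_law(shots)
```

Fitting K and b separately for each site, from measuring points in a line at predetermined distances, is exactly what makes the resulting model site-specific.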

  17. 2.5D S-wave velocity model of the TESZ area in northern Poland from receiver function analysis

    Science.gov (United States)

    Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek

    2016-04-01

    A receiver function (RF) locally provides the signature of sharp seismic discontinuities and information about the shear wave (S-wave) velocity distribution beneath the seismic station. The data recorded by the "13 BB Star" broadband seismic stations (Grad et al., 2015) and by a few PASSEQ broadband seismic stations (Wilde-Piórko et al., 2008) are analysed to investigate the crustal and upper mantle structure in the Trans-European Suture Zone (TESZ) in northern Poland. The TESZ is one of the most prominent suture zones in Europe, separating the young Palaeozoic platform from the much older Precambrian East European craton. A compilation of over thirty deep seismic refraction and wide-angle reflection profiles, vertical seismic profiling in over one hundred thousand boreholes, and magnetic, gravity, magnetotelluric and thermal methods allowed for the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Grad et al. 2016). The receiver function methods, in turn, give an opportunity to create an S-wave velocity model. A modified ray-tracing method (Langston, 1977) is used to calculate the response of a structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. The 3D P-wave velocity model is interpolated to a 2.5D P-wave velocity model beneath each seismic station, and synthetic back-azimuthal sections of receiver functions are calculated for different Vp/Vs ratios. Densities are calculated with the combined formulas of Berteussen (1977) and Gardner et al. (1974). Next, the synthetic back-azimuthal RF sections are compared with the observed ones for the "13 BB Star" and PASSEQ seismic stations to find the best 2.5D S-wave models down to 60 km depth. The National Science Centre, Poland, provided financial support for this work through NCN grant DEC-2011/02/A/ST10/00284.

  18. Kinematic Modeling of Normal Voluntary Mandibular Opening and Closing Velocity-Initial Study.

    Science.gov (United States)

    Gawriołek, Krzysztof; Gawriołek, Maria; Komosa, Marek; Piotrowski, Paweł R; Azer, Shereen S

    2015-06-01

    Determination and quantification of voluntary mandibular movement velocity has not been a thoroughly studied parameter of masticatory movement. This study attempted to objectively define the kinematics of mandibular movement based on numerical (digital) analysis of the relations and interactions of velocity diagram records in healthy female individuals. Using a computerized mandibular scanner (K7 Evaluation Software), 72 diagrams of voluntary mandibular velocity movements (36 for opening, 36 for closing) were recorded for women with clinically normal motor and functional activities of the masticatory system. Multiple measurements were analyzed focusing on the curve for maximum velocity records. For each movement, the loop of temporary velocities was determined. The diagram was then entered into AutoCad calculation software where movement analysis was performed. The real maximum velocity values on opening (Vmax), closing (V0), and average velocity values (Vav) as well as movement accelerations (a) were recorded. Additionally, functional (A1-A2) and geometric (P1-P4) analyses of the loop's constituent phases were performed, and the relations between the obtained areas were defined. Velocity means and correlation coefficient values for the various velocity phases were calculated. The Wilcoxon test produced the following maximum and average velocity results: Vmax = 394 ± 102 and Vav = 222 ± 61 mm/s for opening, and Vmax = 409 ± 94 and Vav = 225 ± 55 mm/s for closing. Both mandibular movement range and velocity change showed significant variability, with the highest velocity achieved in the P2 phase. Voluntary mandibular velocity varies significantly between healthy individuals. Maximum velocity is obtained when incisal separation is between 12.8 and 13.5 mm. An improved understanding of the patterns of normal mandibular movements may provide an invaluable diagnostic aid for pathological changes within the masticatory system. © 2014 by the American College of Prosthodontists.
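The temporary-velocity loop is built from a sampled jaw-displacement trace; a central-difference sketch with invented sample values (the real system records via the mandibular scanner):

```python
def velocity_trace(displacement_mm, dt_s):
    """Central-difference estimate of instantaneous velocity (mm/s) from a
    uniformly sampled displacement trace (mm)."""
    return [(displacement_mm[i + 1] - displacement_mm[i - 1]) / (2.0 * dt_s)
            for i in range(1, len(displacement_mm) - 1)]

# A toy opening trace sampled at 100 Hz: velocity peaks mid-movement,
# consistent with maximum velocity occurring at intermediate separation.
trace = [0.0, 1.0, 3.0, 6.0, 9.0, 11.0, 12.0]
v = velocity_trace(trace, 0.01)
v_max = max(v)
```

From such a trace the maximum, average, and phase-wise velocities (and accelerations, by differentiating once more) can all be derived.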

  19. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; owing to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with the current limitations in the models, data, and computer implementation.

  20. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. Following previous research findings, the probability of flow breakdown is represented as a function of flow rate, and the breakdown duration is characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
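A hedged sketch of the two ingredients: a breakdown probability that rises with flow rate (here a logistic form with invented coefficients) and a duration drawn from a Weibull hazard model by inverse-CDF sampling. Neither functional form nor parameter is taken from the paper.

```python
import math
import random

def breakdown_probability(flow_vph, q50=1900.0, steepness=0.01):
    """Illustrative logistic form: probability of flow breakdown in an
    observation interval as a function of flow rate (veh/h/lane);
    q50 is the rate at which breakdown odds reach 50%."""
    return 1.0 / (1.0 + math.exp(-steepness * (flow_vph - q50)))

def sample_breakdown_duration(rng, scale_min=10.0, shape=1.2):
    """Weibull-hazard duration model sampled by inverting the CDF:
    T = scale * (-ln(1 - U))**(1/shape), in minutes."""
    u = rng.random()
    return scale_min * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(7)
durations = [sample_breakdown_duration(rng) for _ in range(1000)]
```

In the mesoscopic simulation, a Bernoulli draw against the breakdown probability at each interval would trigger a breakdown whose length comes from the duration model, and the resulting travel-time distribution yields the reliability measures.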

  1. Multiple Model Adaptive Attitude Control of LEO Satellite with Angular Velocity Constraints

    Science.gov (United States)

    Shahrooei, Abolfazl; Kazemi, Mohammad Hosein

    2018-04-01

    In this paper, the multiple model adaptive control is utilized to improve the transient response of attitude control system for a rigid spacecraft. An adaptive output feedback control law is proposed for attitude control under angular velocity constraints and its almost global asymptotic stability is proved. The multiple model adaptive control approach is employed to counteract large uncertainty in parameter space of the inertia matrix. The nonlinear dynamics of a low earth orbit satellite is simulated and the proposed control algorithm is implemented. The reported results show the effectiveness of the suggested scheme.

  2. Automatic creation of Markov models for reliability assessment of safety instrumented systems

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2008-01-01

    After the release of new international functional safety standards such as IEC 61508, there is greater concern for the safety and availability of safety instrumented systems. Markov analysis is a powerful and flexible technique for assessing the reliability measures of safety instrumented systems, but creating Markov models manually is error-prone and time-consuming. This paper presents a new technique to automatically create Markov models for reliability assessment of safety instrumented systems. Many safety-related factors, such as failure modes, self-diagnostics, restorations, common-cause failures and voting, are included in the Markov models. A framework is first generated based on voting, failure modes and self-diagnostics. Then, repairs and common-cause failures are incorporated into the framework to build a complete Markov model. The Markov models can subsequently be simplified by state merging. Examples given in this paper show how explosively the size of a Markov model grows as the system becomes only slightly more complicated, as well as the benefit of automatically creating Markov models.
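To make the Markov-analysis idea concrete, here is a minimal hand-built continuous-time Markov chain for a 1oo2 (one-out-of-two) voted architecture — the kind of model the paper generates automatically, but without diagnostics or common-cause factors. The failure and repair rates are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # per-channel failure rate (1/h), illustrative
mu = 0.125   # repair rate (1/h), i.e. 8 h mean time to repair

# States of the 1oo2 architecture:
# 0 = both channels OK, 1 = one channel failed, 2 = both failed (system down)
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,          mu,   -mu],
])  # generator matrix: rows sum to zero

p0 = np.array([1.0, 0.0, 0.0])   # start with both channels healthy
t = 8760.0                        # one year of operation, in hours
p_t = p0 @ expm(Q * t)            # transient state distribution at time t
availability = 1.0 - p_t[2]       # probability the system is not down
```

Adding self-diagnostics, voting variants, and common-cause failures multiplies the state count rapidly, which is exactly the explosion in model size the paper's automatic generation and state merging address.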

  3. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  4. Neutron stars velocities and magnetic fields

    Science.gov (United States)

    Paret, Daryel Manreza; Martinez, A. Perez; Ayala, Alejandro; Piccinelli, G.; Sanchez, A.

    2018-01-01

    We study a model that explains neutron star velocities by the anisotropic emission of neutrinos. Strong magnetic fields present in neutron stars are the source of the anisotropy in the system. To compute the velocity of the neutron star, we model its core as composed of strange quark matter and analyze the properties of a magnetized quark gas at finite temperature and density. Specifically, we have obtained the electron polarization and the specific heat of magnetized fermions as functions of the temperature, chemical potential and magnetic field, which allows us to study the velocity of the neutron star as a function of these parameters.

  5. Comparative study of the iron cores in human liver ferritin, its pharmaceutical models and ferritin in chicken liver and spleen tissues using Moessbauer spectroscopy with a high velocity resolution

    Energy Technology Data Exchange (ETDEWEB)

    Alenkina, I.V.; Semionkin, V.A. [Faculty of Physical Techniques and Devices for Quality Control, Ural Federal University, Ekaterinburg (Russian Federation); Faculty of Experimental Physics, Ural Federal University, Ekaterinburg (Russian Federation); Oshtrakh, M.I. [Faculty of Physical Techniques and Devices for Quality Control, Ural Federal University, Ekaterinburg (Russian Federation); Klepova, Yu.V.; Sadovnikov, N.V. [Faculty of Physiology and Biotechnology, Ural State Agricultural Academy, Ekaterinburg, (Russian Federation); Dubiel, S.M. [Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow (Poland)

    2011-07-01

    Full text: Application of Moessbauer spectroscopy with a high velocity resolution (4096 channels) to the study of iron-containing biological species is of great interest. Improving the velocity resolution makes it possible to reveal small variations in the electronic structure of iron and to obtain hyperfine parameters with smaller instrumental (systematic) errors than measurements performed with 512 channels or fewer. It also allows a more reliable fitting of complex Moessbauer spectra. In the present study, Moessbauer spectroscopy with high velocity resolution was used for a comparative analysis of ferritin and its pharmaceutically important models, as well as of the iron storage proteins in chicken liver and spleen. Ferritin, an iron storage protein, consists of a nanosized polynuclear iron core formed by ferrihydrite surrounded by a protein shell. Iron-polysaccharide complexes contain β-FeOOH iron cores coated with various polysaccharides. The Moessbauer spectra of the ferritin and the commercial products Imferon, Maltofer and Ferrum Lek, as well as those of the chicken liver and spleen tissues, were measured with high velocity resolution at 295 and 90 K. They were fitted using two models: (1) a homogeneous iron core (an approximation using one quadrupole doublet), and (2) a heterogeneous iron core (an approximation using several quadrupole doublets). Model (1) can be used as a first-approximation fit to visualize small variations in the hyperfine parameters. Using this model, differences in the Moessbauer hyperfine parameters were obtained in both the 295 and 90 K spectra. However, this model was considered a rough approximation because the measured spectra had non-Lorentzian line shapes. Therefore, the spectra of the ferritin, Imferon, Maltofer and Ferrum Lek, as well as those of the liver and spleen tissues, were fitted again using model (2), in which a different number of the quadrupole doublets was

  6. Modeling human reliability analysis using MIDAS

    International Nuclear Information System (INIS)

    Boring, R. L.

    2006-01-01

    This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory has teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the way for cost-effective, timely, and valid simulated control room operators for studying current and next-generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. It also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across the static, dynamic, and initiator conditions common to control room scenarios. The paper concludes with a discussion of the relationship between the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)
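For readers unfamiliar with SPAR-H, its quantification step is simple enough to sketch: a nominal human error probability (HEP) is scaled by eight performance shaping factor (PSF) multipliers, with an adjustment formula applied when several negative PSFs would otherwise push the result past 1.0. The function below is a sketch of that scheme as the author understands it, not code from MIDAS; the example multiplier values are invented.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """SPAR-H style human error probability.

    nominal_hep: 0.01 for diagnosis tasks or 0.001 for action tasks.
    psf_multipliers: the eight PSF multipliers (1.0 = nominal).
    When three or more PSFs are negative (multiplier > 1), SPAR-H applies
    an adjustment factor so the result stays below 1.0."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return (nominal_hep * composite) / (nominal_hep * (composite - 1) + 1)
    return min(nominal_hep * composite, 1.0)

# Hypothetical diagnosis task: high stress (x2), poor ergonomics (x10),
# barely adequate time (x10); the remaining five PSFs nominal (x1).
hep = spar_h_hep(0.01, [2, 10, 10, 1, 1, 1, 1, 1])
```

A dynamic framework such as the MIDAS coupling described above would re-evaluate these multipliers as the simulated scenario evolves, rather than assigning them once per task.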

  7. Three-dimensional models of P wave velocity and P-to-S velocity ratio in the southern central Andes by simultaneous inversion of local earthquake data

    Science.gov (United States)

    Graeber, Frank M.; Asch, Günter

    1999-09-01

    The PISCO'94 (Proyecto de Investigación Sismológica de la Cordillera Occidental, 1994) seismological network of 31 digital broadband and short-period three-component seismometers was deployed in northern Chile between the Coastal Cordillera and the Western Cordillera. More than 5300 local seismic events were observed in a 100-day period. A subset of high-quality P and S arrival time data was used to invert simultaneously for hypocenters and velocity structure. Additional data from two other networks in the region could be included. The velocity models show a number of prominent anomalies, outlining an extremely thickened crust (about 70 km) beneath the forearc region, an anomalous crustal structure beneath the recent magmatic arc (Western Cordillera) characterized by very low velocities, and a high-velocity slab. A region of increased Vp/Vs ratio has been found directly above the Wadati-Benioff zone, which might be caused by hydration processes. A zone of lower-than-average velocities and a high Vp/Vs ratio might correspond to the asthenospheric wedge. The upper edge of the Wadati-Benioff zone is sharply defined by intermediate-depth hypocenters, while evidence for a double seismic zone can hardly be seen. Crustal events between the Precordillera and the Western Cordillera have been observed for the first time and are mainly located in the vicinity of the Salar de Atacama, down to depths of about 40 km.
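The full simultaneous inversion is beyond a short sketch, but the Vp/Vs ratio that the study maps can be illustrated with the classic Wadati construction: for a common origin time, the S−P delay grows linearly with the P arrival time, and the slope is Vp/Vs − 1. The arrivals below are synthetic, generated for a chosen ratio of 1.78.

```python
import numpy as np

def wadati_vp_vs(tp, ts):
    """Estimate Vp/Vs from P and S arrival times at several stations.

    Classic Wadati diagram: ts - tp is linear in tp with slope Vp/Vs - 1.
    Assumes a common origin time and a uniform Vp/Vs along the ray paths."""
    slope, _intercept = np.polyfit(tp, ts - tp, 1)
    return slope + 1.0

# Synthetic arrivals for an event with origin time t0 = 2.0 s and Vp/Vs = 1.78
t0, vp_vs = 2.0, 1.78
tp = t0 + np.array([3.1, 4.7, 6.2, 8.9, 11.4])   # P arrivals at 5 stations
ts = t0 + vp_vs * (tp - t0)                       # exact S arrivals
est = wadati_vp_vs(tp, ts)
```

Tomographic studies such as this one go further by inverting for spatially varying Vp and Vp/Vs, but the Wadati slope remains the intuition behind the ratio.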

  8. Comparison of shear wave velocity measurements assessed with two different ultrasound systems in an ex-vivo tendon strain phantom.

    Science.gov (United States)

    Rosskopf, Andrea B; Bachmann, Elias; Snedeker, Jess G; Pfirrmann, Christian W A; Buck, Florian M

    2016-11-01

    The purpose of this study is to compare the reliability of SW velocity measurements of two different ultrasound systems and their correlation with the tangent traction modulus in a non-static tendon strain model. A bovine tendon was fixed in a custom-made stretching device. Force was applied increasing from 0 up to 18 Newton. During each strain state the tangent traction modulus was determined by the stretcher device, and SW velocity (m/s) measurements using a Siemens S3000 and a Supersonic Aixplorer US machine were done for shear modulus (kPa) calculation. A strong significant positive correlation was found between SW velocity assessed by the two ultrasound systems and the tangent traction modulus (r = 0.827-0.954, p Aixplorer 0.25 ± 0.3 m/s (p = 0.034). Mean difference of SW velocity between the two US-systems was 0.37 ± 0.3 m/s (p = 0.012). In conclusion, SW velocities are highly dependent on mechanical forces in the tendon tissue, but for controlled mechanical loads appear to yield reproducible and comparable measurements using different US systems.
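The abstract's conversion from shear wave velocity (m/s) to shear modulus (kPa) is the standard elastography relation G = ρc². The helper below is a sketch of that relation under the usual assumptions (locally homogeneous, isotropic, purely elastic medium, soft-tissue density of about 1000 kg/m³); the example velocities are illustrative, not the study's measurements.

```python
def shear_modulus_kpa(sw_velocity_m_s, density_kg_m3=1000.0):
    """Shear modulus G = rho * c^2, reported in kPa as elastography
    systems do. Density ~1000 kg/m^3 is a common soft-tissue assumption;
    tendon is somewhat denser, so treat the output as approximate."""
    return density_kg_m3 * sw_velocity_m_s ** 2 / 1000.0

# Because G scales with the square of velocity, a fixed inter-system
# velocity offset matters more at higher stiffness:
g_low = shear_modulus_kpa(3.0)
g_high = shear_modulus_kpa(3.0 + 0.37)   # offset of the order reported above
```

This quadratic scaling is one reason inter-system comparisons are usually reported in velocity rather than modulus.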

  9. Quantification of Wave Model Uncertainties Used for Probabilistic Reliability Assessments of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2015-01-01

    Wave models used for site assessments are subjected to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, this paper presents how the quantified uncertainties can be implemented in probabilistic reliability assessments.

  10. Determination of Wave Model Uncertainties used for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, it is shown how the estimated uncertainties can be implemented in probabilistic reliability assessments.

  11. Cosmological streaming velocities and large-scale density maxima

    International Nuclear Information System (INIS)

    Peacock, J.A.; Lumsden, S.L.; Heavens, A.F.

    1987-01-01

    The statistical testing of models for galaxy formation against the observed peculiar velocities on 10-100 Mpc scales is considered. If it is assumed that observers are likely to be sited near maxima in the primordial field of density perturbations, then the observed filtered velocity field will be biased to low values by comparison with a point selected at random. This helps to explain how the peculiar velocities (relative to the microwave background) of the local supercluster and the Rubin-Ford shell can be so similar in magnitude. Using this assumption to predict peculiar velocities on two scales, we test models with large-scale damping (i.e. adiabatic perturbations). Allowed models have a damping length close to the Rubin-Ford scale and are mildly non-linear. Both purely baryonic universes and universes dominated by massive neutrinos can account for the observed velocities, provided 0.1 ≤ Ω ≤ 1. (author)

  12. Numerical modeling of probe velocity effects for electromagnetic NDE methods

    Science.gov (United States)

    Shin, Y. K.; Lord, W.

    The present discussion of magnetic flux leakage (MFL) inspection introduces the behavior of motion-induced currents. The results obtained indicate that velocity effects exist even at low probe speeds for magnetic materials, compelling the inclusion of velocity effects in MFL testing of oil pipelines, where the excitation level and pig speed are much higher than those used in the present work. Probe velocity effect studies should influence probe design, defining suitable probe speed limits and establishing training guidelines for defect-characterization schemes.

  13. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: Application of a stratified model

    Science.gov (United States)

    Lee, Kang Il

    2012-08-01

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.

  14. Migration velocity analysis using pre-stack wave fields

    KAUST Repository

    Alkhalifah, Tariq Ali; Wu, Zedong

    2016-01-01

    Using both image and data domains to perform velocity inversion can help us resolve the long- and short-wavelength components of the velocity model, usually in that order. This translates to integrating migration velocity analysis into full waveform inversion.

  15. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  16. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well-known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment.

  17. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    El-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of some statistical distributions used in reliability in order to identify the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
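The distribution-selection step described above can be sketched with standard tools: fit candidate distributions to times-to-failure and compare goodness of fit. This is a generic illustration on synthetic data (not the TAC data set, and not the Relex workflow), using the one-sample Kolmogorov-Smirnov statistic as the ranking criterion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic times-to-failure (hours); a real analysis would use field data.
ttf = rng.weibull(1.8, size=500) * 1000.0

# Fit a 2-parameter Weibull (location fixed at 0) and an exponential.
wb_shape, _, wb_scale = stats.weibull_min.fit(ttf, floc=0)
_, exp_scale = stats.expon.fit(ttf, floc=0)

# Rank the fits by the one-sample Kolmogorov-Smirnov statistic (lower = better).
ks_wb = stats.kstest(ttf, 'weibull_min', args=(wb_shape, 0, wb_scale)).statistic
ks_exp = stats.kstest(ttf, 'expon', args=(0, exp_scale)).statistic
best = 'Weibull' if ks_wb < ks_exp else 'exponential'
```

With a Weibull shape parameter well away from 1, the exponential fit (which forces a constant hazard) is visibly worse, mirroring the paper's finding that different parameters can favor different distributions.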

  18. An overview of erosion corrosion models and reliability assessment for corrosion defects in piping system

    International Nuclear Information System (INIS)

    Srividya, A.; Suresh, H.N.; Verma, A.K.; Gopika, V.; Santosh

    2006-01-01

    Piping systems are part of the passive structural elements in power plants. The analysis of piping systems and their quantification in terms of failure probability is of utmost importance. Piping systems may fail due to various degradation mechanisms such as thermal fatigue, erosion-corrosion, stress corrosion cracking and vibration fatigue. Examination of previous results shows that erosion-corrosion is the most prevalent mechanism and that wall thinning is a time-dependent phenomenon. The paper consolidates the work done by various investigators on estimating the erosion-corrosion rate and predicting reliability. A comparison of various erosion-corrosion models is made. Reliability prediction based on the remaining strength of pipelines corroded by wall thinning is also attempted. Variables in the limit state functions are modelled using normal distributions, and reliability assessment is carried out using some of the existing failure pressure models. A steady-state corrosion rate is assumed to estimate the corrosion defect, and the First Order Reliability Method (FORM) is used to find the probability of failure associated with corrosion defects over time using the software for Component Reliability evaluation (COMREL). (author)
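For the special case the abstract describes (normally distributed variables in the limit state), FORM reduces to a closed form that is easy to show. For a linear limit state g = R − S with independent normal resistance R (e.g. remaining failure pressure of the thinned pipe) and load S (operating pressure), the reliability index is β = (μ_R − μ_S)/√(σ_R² + σ_S²) and P_f = Φ(−β). The numbers below are hypothetical, not from the paper, and a general FORM run (nonlinear limit state, non-normal variables, as in COMREL) requires an iterative search instead.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """FORM for the linear limit state g = R - S with independent normals.

    Returns (reliability index beta, failure probability Pf).
    For this special case FORM is exact."""
    beta = (mu_r - mu_s) / sqrt(sigma_r ** 2 + sigma_s ** 2)
    return beta, norm_cdf(-beta)

# Hypothetical failure pressure of a thinning pipe vs. operating pressure (MPa).
beta, pf = form_linear(mu_r=12.0, sigma_r=1.5, mu_s=7.0, sigma_s=0.8)
```

Re-evaluating μ_R as a decreasing function of time (via the assumed steady-state corrosion rate) yields the time-dependent failure probability curve the paper reports.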

  19. Modeling of liquid ceramic precursor droplets in a high velocity oxy-fuel flame jet

    International Nuclear Information System (INIS)

    Basu, Saptarshi; Cetegen, Baki M.

    2008-01-01

    Production of coatings by high velocity oxy-fuel (HVOF) flame jet processing of liquid precursor droplets can be an attractive alternative to plasma processing. This article concerns modeling of the thermophysical processes in liquid ceramic precursor droplets injected into an HVOF flame jet. The model consists of several sub-models that include aerodynamic droplet break-up, heat and mass transfer within individual droplets exposed to the HVOF environment, and precipitation of ceramic precursors. A parametric study is presented for the initial droplet size, the concentration of the dissolved salts, and the external temperature and velocity field of the HVOF jet, to explore processing conditions and injection parameters that lead to different precipitate morphologies. It is found that the high velocity of the jet induces shear break-up into droplets several μm in diameter. This leads to better entrainment and rapid heat-up in the HVOF jet. Upon processing, small droplets (<5 μm) are predicted to undergo volumetric precipitation and form solid particles prior to impact at the deposit location. Droplets larger than 5 μm are predicted to form hollow or precursor-containing shells similar to those processed in a DC arc plasma. However, it is found that the lower temperature of the HVOF jet compared to plasma results in slower vaporization and solute mass diffusion inside the droplet, leading to comparatively thicker shells. These shell-type morphologies may further experience internal pressurization, possibly resulting in shattering and secondary atomization of the trapped liquid. The consequences of these different particle states on the coating microstructure are also discussed in this article.

  20. Modeling Energy & Reliability of a CNT based WSN on an HPC Setup

    Directory of Open Access Journals (Sweden)

    Rohit Pathak

    2010-07-01

    We have analyzed the effect of innovations in nanotechnology on wireless sensor networks (WSNs) and have modeled carbon nanotube (CNT) based sensor nodes from a device perspective. A WSN model has been programmed in Simulink-MATLAB and a library has been developed. Integration of CNTs in WSNs for various modules such as sensors, microprocessors and batteries has been shown. The average energy consumption of the system has also been formulated and its reliability analyzed holistically. A proposition has been put forward on the changes needed in the existing sensor node structure to improve its efficiency and to facilitate, as well as enhance, the assimilation of CNT-based devices in a WSN. Finally, we comment on the challenges that exist in this technology and describe the important factors that need to be considered for calculating reliability. This research will help in the practical implementation of CNT-based devices and the analysis of their key effects on the WSN environment. The work has been executed with the Simulink and Distributed Computing toolboxes of MATLAB. The proposal has been compared to recent developments and past experimental results reported in this field. This attempt to derive the energy consumption and reliability implications will help in the development of real devices using CNTs, which is a major hurdle in bringing the success from the lab to the commercial market. Recent research on CNTs has been used to build an energy-efficient model, which will also lead to the development of CAD tools. The library for reliability and energy consumption includes analysis of the various parts of a WSN system constructed from CNTs. Nano-routing in a CNT system is also implemented with its dependencies. Finally, the computations were executed on an HPC setup and the model showed remarkable speedup.

  1. Finite element modelling of aluminum alloy 2024-T3 under transverse impact loading

    Science.gov (United States)

    Abdullah, Ahmad Sufian; Kuntjoro, Wahyu; Yamin, A. F. M.

    2017-12-01

    The fiber metal laminate GLARE is a new aerospace material with great potential for wide use in future lightweight aircraft. It consists of aluminum alloy 2024-T3 and glass-fiber reinforced laminate. In order to produce a reliable finite element model of the impact response or crashworthiness of a structure made of GLARE, one can initially model and validate the finite element models of the impact response of its constituents separately. The objective of this study was to develop a reliable finite element model of aluminum alloy 2024-T3 under low velocity transverse impact loading using the commercial software ABAQUS. Johnson-Cook plasticity and damage models were used to predict the alloy's material properties and impact behavior. The results of the finite element analysis were compared to an experiment with similar material and impact conditions. The results showed good correlation in terms of impact forces, deformation and failure progression, leading to the conclusion that the finite element model of 2024-T3 aluminum alloy under low velocity transverse impact conditions, using the Johnson-Cook plasticity and damage models, was reliable.
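The Johnson-Cook plasticity model mentioned above gives the flow stress as σ = (A + Bεᵖⁿ)(1 + C ln ε̇*)(1 − T*ᵐ), with a strain-hardening, a strain-rate, and a thermal-softening term. The sketch below evaluates that formula; the default parameters are of the order reported in the literature for AA2024-T3 but are illustrative here, not the calibration used in the paper.

```python
from math import log

def johnson_cook_stress(eps_p, eps_rate, T, A=369e6, B=684e6, n=0.73,
                        C=0.0083, m=1.7, eps_rate0=1.0,
                        T_room=293.0, T_melt=775.0):
    """Johnson-Cook flow stress in Pa:
    sigma = (A + B*eps_p^n) * (1 + C*ln(rate*)) * (1 - T*^m)

    eps_p: equivalent plastic strain; eps_rate: plastic strain rate (1/s);
    T: temperature (K). Parameter values are illustrative placeholders."""
    rate_star = max(eps_rate / eps_rate0, 1.0)   # clamp below reference rate
    t_star = min(max((T - T_room) / (T_melt - T_room), 0.0), 1.0)
    return (A + B * eps_p ** n) * (1.0 + C * log(rate_star)) * (1.0 - t_star ** m)

# Quasi-static vs. impact-rate flow stress at 5% plastic strain, room temperature:
s_static = johnson_cook_stress(0.05, 1.0, 293.0)
s_impact = johnson_cook_stress(0.05, 1e3, 293.0)
```

The companion Johnson-Cook damage model uses an analogous product form for the failure strain; in ABAQUS both are entered as material-card constants rather than user code.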

  2. The Milky Way's Circular Velocity Curve and Its Constraint on the Galactic Mass with RR Lyrae Stars

    Energy Technology Data Exchange (ETDEWEB)

    Ablimit, Iminhaji; Zhao, Gang, E-mail: iminhaji@nao.cas.cn, E-mail: gzhao@nao.cas.cn [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2017-09-01

    We present a sample of 1148 ab-type RR Lyrae (RRLab) variables identified from Catalina Surveys Data Release 1, combined with SDSS DR8 and LAMOST DR4 spectral data. We first use a large sample of 860 Galactic halo RRLab stars and derive the circular velocity distributions for the stellar halo. With the precise distances and carefully determined radial velocities (the center-of-mass radial velocities), and by considering the pulsation of the RRLab stars in our sample, we can obtain a reliable and comparable stellar halo circular velocity curve. We follow two different prescriptions for the velocity anisotropy parameter β in the Jeans equation to study the circular velocity curve and mass profile. Additionally, we test two different solar peculiar motions in our calculation. The best result, obtained with the adopted solar peculiar motion 1 of (U, V, W) = (11.1, 12, 7.2) km s^−1, is that the enclosed mass of the Milky Way within 50 kpc is (3.75 ± 1.33) × 10^11 M_⊙, based on β = 0 and a circular velocity of 180 ± 31.92 km s^−1 at 50 kpc. This result is consistent with dynamical model results, and it is also comparable to the results of previous similar works.

  3. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, using a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models, allowing reliability studies to go further than previous work. This study uses the new models to solve a redundancy allocation problem (RAP). The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distributions are used for components. • A reliability model for nonrepairable systems is developed using a Markov chain. • The system is composed of heterogeneous components. • The model provides the exact value of standby system reliability, not an approximation. • A redundancy allocation problem is used to show the usefulness of the model.
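The defining property of a phase-type time-to-failure is that its survival function is R(t) = α exp(Tt) 1, where α is the initial distribution over transient phases and T the sub-generator of the absorbing CTMC. The snippet below evaluates this for an Erlang-2 distribution, the simplest nontrivial phase-type; it is a minimal illustration of the machinery, not the component models used in the paper.

```python
import numpy as np
from scipy.linalg import expm

def ph_reliability(alpha, T, t):
    """Survival function of a phase-type time to failure:
    R(t) = alpha @ expm(T*t) @ 1, with sub-generator T over the
    transient phases (absorption = failure)."""
    return float(alpha @ expm(T * t) @ np.ones(len(alpha)))

# Erlang-2 with rate 0.1/h written as a phase-type distribution:
# start in phase 1, move to phase 2, then absorb (fail).
alpha = np.array([1.0, 0.0])
T = np.array([[-0.1,  0.1],
              [ 0.0, -0.1]])

r_10h = ph_reliability(alpha, T, 10.0)   # survival probability at t = 10 h
```

For a whole system, the paper's structured CTMC plays the role of (α, T) at the system level, which is why the standby-redundancy reliability comes out exact rather than approximated.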

  4. Model case IRS-RWE for the determination of reliability data in practical operation

    Energy Technology Data Exchange (ETDEWEB)

    Hoemke, P; Krause, H

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The first part of the paper deals with the accuracy requirements for the input data of such analyses, and the second part with the prototype reliability data collection 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results.

  5. A one-dimensional model to describe flow localization in viscoplastic slender bars subjected to super critical impact velocities

    Science.gov (United States)

    Vaz-Romero, A.; Rodríguez-Martínez, J. A.

    2018-01-01

    In this paper we investigate flow localization in viscoplastic slender bars subjected to dynamic tension. We explore loading rates above the critical impact velocity: the wave initiated at the impacted end by the applied velocity is the trigger for the localization of plastic deformation. The problem has been addressed using two kinds of numerical simulations: (1) one-dimensional finite difference calculations and (2) axisymmetric finite element computations. The latter calculations have been used to validate the capacity of the finite difference model to describe plastic flow localization at high impact velocities. The finite difference model, notable for its simplicity, provides insight into the role played by the strain rate and temperature sensitivities of the material in the process of dynamic flow localization. Specifically, we have shown that viscosity can stabilize the material behavior to the point of preventing the appearance of the critical impact velocity. This is a key outcome of our investigation, which, to the best of the authors' knowledge, has not been previously reported in the literature.

  6. Balance velocities of the Greenland ice sheet

    DEFF Research Database (Denmark)

    Joughin, I.; Fahnestock, M.; Ekholm, Simon

    1997-01-01

    We present a map of balance velocities for the Greenland ice sheet. The resolution of the underlying DEM, which was derived primarily from radar altimetry data, yields far greater detail than earlier balance velocity estimates for Greenland. The velocity contours reveal in striking detail... the balance map is useful for ice-sheet modelling, mass balance studies, and field planning.

  7. Do downscaled general circulation models reliably simulate historical climatic conditions?

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.

  8. A simulation model for reliability evaluation of Space Station power systems

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kumar, Mudit; Wagner, H.

    1988-01-01

    A detailed simulation model for the hybrid Space Station power system is presented which allows photovoltaic and solar dynamic power sources to be mixed in varying proportions. The model considers the dependence of reliability and storage characteristics during the sun and eclipse periods, and makes it possible to model the charging and discharging of the energy storage modules in a relatively accurate manner on a continuous basis.

  9. Waveform inversion of lateral velocity variation from wavefield source location perturbation

    KAUST Repository

    Choi, Yun Seok

    2013-09-22

    It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to horizontal distance, combined with well log data can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm that obtains the lateral velocity variation by inverting the wavefield variation associated with a lateral perturbation of the shot location. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving the lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method to a simple dome model to highlight its potential.

  10. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
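
    The competing-risks setup described above (soft failure by gradual degradation, hard failure by shocks, nonfatal shocks adding immediate damage) can be sketched by Monte Carlo. Everything below is hypothetical and heavily simplified relative to the paper's generalized mixed shock model: linear baseline degradation, exponential shock sizes, no rate acceleration or threshold shifts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical, not from the paper)
H = 20.0         # soft-failure threshold on cumulative degradation
D = 5.0          # hard-failure threshold on individual shock magnitude
rate = 0.5       # Poisson arrival rate of shocks (per unit time)
beta = 0.1       # baseline degradation rate per unit time
t_mission = 50.0

def survives(rng):
    """One system history: False on hard or soft failure before t_mission."""
    t, x = 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / rate)       # time to next shock
        if t + dt > t_mission:
            x += beta * (t_mission - t)        # degrade to end of mission
            return x < H
        t += dt
        x += beta * dt
        w = rng.exponential(1.0)               # shock magnitude
        if w > D:                              # fatal shock -> hard failure
            return False
        x += 0.5 * w                           # nonfatal shock adds damage
        if x >= H:                             # degradation threshold -> soft failure
            return False

n = 20_000
rel = sum(survives(rng) for _ in range(n)) / n
print(f"Estimated mission reliability: {rel:.3f}")
```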

  11. Global Plate Velocities from the Global Positioning System

    Science.gov (United States)

    Larson, Kristine M.; Freymueller, Jeffrey T.; Philipsen, Steven

    1997-01-01

    We have analyzed 204 days of Global Positioning System (GPS) data from the global GPS network spanning January 1991 through March 1996. On the basis of these GPS coordinate solutions, we have estimated velocities for 38 sites, mostly located on the interiors of the Africa, Antarctica, Australia, Eurasia, Nazca, North America, Pacific, and South America plates. The uncertainties of the horizontal velocity components range from 1.2 to 5.0 mm/yr. With the exception of sites on the Pacific and Nazca plates, the GPS velocities agree with absolute plate model predictions within 95% confidence. For most of the sites in North America, Antarctica, and Eurasia, the agreement is better than 2 mm/yr. We find no persuasive evidence for significant vertical motions (less than 3 standard deviations), except at four sites. Three of these four were sites constrained to geodetic reference frame velocities. The GPS velocities were then used to estimate angular velocities for eight tectonic plates. Absolute angular velocities derived from the GPS data agree with the no net rotation (NNR) NUVEL-1A model within 95% confidence except for the Pacific plate. Our pole of rotation for the Pacific plate lies 11.5 deg west of the NNR NUVEL-1A pole, with an angular speed 10% faster. Our relative angular velocities agree with NUVEL-1A except for some involving the Pacific plate. While our Pacific-North America angular velocity differs significantly from NUVEL-1A, our model and NUVEL-1A predict very small differences in relative motion along the Pacific-North America plate boundary itself. Our Pacific-Australia and Pacific-Eurasia angular velocities are significantly faster than NUVEL-1A, predicting more rapid convergence at these two plate boundaries. Along the East Pacific Rise, our Pacific-Nazca angular velocity agrees in both rate and azimuth with NUVEL-1A.
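
    Site velocities predicted from a plate angular velocity (Euler) vector follow v = ω × r. A sketch of that computation on a spherical Earth; the pole values are only loosely reminiscent of an NNR NUVEL-1A-style Pacific pole and are illustrative, not the paper's estimates:

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius, meters

def site_velocity(pole_lat, pole_lon, omega_deg_per_myr, site_lat, site_lon):
    """Plate velocity (m/yr, Earth-centered Cartesian) at a site: v = omega x r."""
    def unit(lat, lon):
        lat, lon = np.radians([lat, lon])
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
    omega = np.radians(omega_deg_per_myr) / 1e6 * unit(pole_lat, pole_lon)  # rad/yr
    r = R_EARTH * unit(site_lat, site_lon)
    return np.cross(omega, r)

# Hypothetical Pacific-like pole (illustrative values), site near Hawaii
v = site_velocity(-63.0, 107.0, 0.64, 19.8, -155.5)
speed_mm_yr = np.linalg.norm(v) * 1000.0
print(f"Predicted speed: {speed_mm_yr:.1f} mm/yr")
```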

  12. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  13. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this can be achieved by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of reliability characteristics of waste management facilities containing intermediate storage facilities. The application of the model is demonstrated by an example. (orig.) [de
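
    The idea that intermediate storage raises effective reliability can be illustrated with a far simpler model than the report's Markov-process formulation: a single facility with exponential failure and repair, backed by a buffer that bridges outages shorter than its holdup time. All rates and the buffer size are hypothetical:

```python
import numpy as np

# Hypothetical rates: facility fails at rate lam, is repaired at rate mu (per hour);
# an intermediate store can bridge outages of up to t_buf hours.
lam, mu, t_buf = 0.01, 0.2, 24.0

availability = mu / (lam + mu)                 # steady-state uptime fraction, no buffer
p_outage_covered = 1.0 - np.exp(-mu * t_buf)   # repair completes before buffer empties
effective_availability = availability + (1.0 - availability) * p_outage_covered
print(f"bare: {availability:.4f} -> with buffer: {effective_availability:.4f}")
```

    Even a modest buffer covers most outages here because typical repair times (1/mu = 5 h) are short compared with the 24-hour holdup.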

  14. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  15. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic, since at infinite testing time the test effort should also be infinite. Hence, in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights, namely the set that describes both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.
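
    An NHPP mean value function driven by a test-effort function (TEF) takes the form m(t) = a(1 - exp(-b W(t))), where W(t) is cumulative test effort. The log-power TEF below is a hypothetical unbounded form chosen only to illustrate the "infinite test effort" idea; it is not the paper's exact function or fitted parameters:

```python
import numpy as np

def tef(t, alpha=10.0, beta=1.5):
    """Hypothetical log-power test-effort function; unbounded as t -> infinity."""
    return alpha * np.log1p(t) ** beta

def mean_failures(t, a=100.0, b=0.05):
    """NHPP mean value function driven by test effort: m(t) = a*(1 - exp(-b*W(t)))."""
    return a * (1.0 - np.exp(-b * tef(t)))

# Expected cumulative failures approach the total fault content a as effort grows
for t in (10, 100, 1000):
    print(t, round(mean_failures(t), 2))
```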

  16. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  17. Marine traffic model based on cellular automaton: Considering the change of the ship's velocity under the influence of the weather and sea

    Science.gov (United States)

    Qi, Le; Zheng, Zhongyi; Gang, Longhui

    2017-10-01

    It was found that the ships' velocity change, which is impacted by the weather and sea, e.g., wind, sea wave, sea current, tide, etc., is significant and must be considered in the marine traffic model. Therefore, a new marine traffic model based on cellular automaton (CA) was proposed in this paper. The characteristics of the ship's velocity change are taken into account in the model. First, the acceleration of a ship was divided into two components: regular component and random component. Second, the mathematical functions and statistical distribution parameters of the two components were confirmed by spectral analysis, curve fitting and auto-correlation analysis methods. Third, by combining the two components, the acceleration was regenerated in the update rules for ships' movement. To test the performance of the model, the ship traffic flows in the Dover Strait, the Changshan Channel and the Qiongzhou Strait were studied and simulated. The results show that the characteristics of ships' velocities in the simulations are consistent with the measured data by Automatic Identification System (AIS). Although the characteristics of the traffic flow in different areas are different, the velocities of ships can be simulated correctly. It proves that the velocities of ships under the influence of weather and sea can be simulated successfully using the proposed model.
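
    The update rule described above, with acceleration split into a regular and a random component, can be sketched for a single ship; the sinusoidal regular term (a stand-in for periodic weather/sea forcing) and all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def regular_accel(t, amp=0.05, period=120.0):
    """Regular acceleration component, modeled here as periodic sea/weather forcing."""
    return amp * np.sin(2.0 * np.pi * t / period)

v, x = 5.0, 0.0              # speed and position (arbitrary units)
v_min, v_max = 0.0, 10.0     # physical speed limits of the ship
for t in range(600):
    a = regular_accel(t) + rng.normal(0.0, 0.02)  # regular + random component
    v = float(np.clip(v + a, v_min, v_max))       # CA-style speed update per time step
    x += v                                        # advance along the lane
print(f"final speed {v:.2f}, distance {x:.1f}")
```

    In a full CA model each cell holds at most one ship and the position update would also check the cells ahead for other vessels.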

  18. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has ...

  19. Creation and Reliability Analysis of Vehicle Dynamic Weighing Model

    Directory of Open Access Journals (Sweden)

    Zhi-Ling XU

    2014-08-01

    In this paper, the portable axle-load meter of a dynamic weighing system is modeled using ADAMS, and the weighing process is simulated while controlling a single variable, yielding simulation weighing data under different speeds and weights. Simultaneously, a portable weighing system with the same parameters is used to obtain actual measurements. Comparative analysis of the simulated and measured results under the same conditions shows that, at 30 km/h or less, they differ by no more than 5 %. This not only verifies the reliability of the dynamic weighing model, but also makes it possible to improve the efficiency of algorithm studies by using dynamic weighing model simulation.

  20. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems

    International Nuclear Information System (INIS)

    Tien, Iris; Der Kiureghian, Armen

    2016-01-01

    Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems. - Highlights: • Novel algorithms developed for Bayesian network modeling of infrastructure systems. • Algorithm presented to compress information in conditional probability tables. • Updating algorithm presented to perform inference on compressed matrices. • Algorithms applied to example systems to investigate their performance. • Orders of magnitude savings in memory storage requirement demonstrated.

  1. Analyses of Current And Wave Forces on Velocity Caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Buhrkall, Jeppe; Eskesen, Mark C. D.

    2015-01-01

    Velocity caps are often used in connection with, for instance, offshore intakes of sea water for use as cooling water for power plants or as feed for desalination plants; they can also be used for river intakes. The velocity cap is placed on top of a vertical pipe. ... This paper investigates the current and wave forces on the velocity cap and the vertical cylinder. Morison's force model was used in the analyses of the force time series extracted from the CFD model. Further, the distribution of the inlet velocities around the velocity cap was also analyzed in detail...

  2. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution using 16 sets of failure time data collected in real software projects

  3. A reliability model of a warm standby configuration with two identical sets of units

    International Nuclear Information System (INIS)

    Huang, Wei; Loman, James; Song, Thomas

    2015-01-01

    This article presents a new reliability model, and the development of its analytical solution, for a warm standby redundant configuration with units that are originally operated in active mode and then, upon turn-on of the originally standby units, are put into warm standby mode. These units can be used later if a standby-turned-active unit fails. Numerical results for an example configuration are presented and discussed, with comparison to other warm standby configurations and to Monte Carlo simulation results obtained from BlockSim software. Results show that the Monte Carlo simulation model gives virtually identical reliability values when the simulation uses a high number of replications, confirming the developed model. - Highlights: • A new reliability model is developed for warm standby redundancy with two sets of identical units. • The units are subject to state changes from active to standby and back to active mode. • A closed-form analytical solution is developed under exponentially distributed lifetimes. • To validate the developed model, a Monte Carlo simulation of an exemplary configuration is performed
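
    For the simplest case, one active unit backed by one warm standby with exponential lifetimes, a closed-form reliability can be checked against Monte Carlo simulation in the same spirit as the paper's BlockSim validation. This is a textbook special case for illustration, not the paper's two-set configuration; all rates are hypothetical:

```python
import numpy as np

lam_a, lam_s, t = 0.02, 0.005, 50.0  # active rate, warm-standby rate, mission time

# Closed form (memoryless lifetimes): the system survives if the active unit lasts
# past t, or the standby is still alive at the switch-over and then lasts to t:
#   R(t) = e^{-lam_a t} * (1 + (lam_a/lam_s) * (1 - e^{-lam_s t}))
r_analytic = np.exp(-lam_a * t) * (1.0 + (lam_a / lam_s) * (1.0 - np.exp(-lam_s * t)))

rng = np.random.default_rng(7)
n = 200_000
t1 = rng.exponential(1.0 / lam_a, n)  # active unit lifetime
ts = rng.exponential(1.0 / lam_s, n)  # standby lifetime while in warm mode
t2 = rng.exponential(1.0 / lam_a, n)  # standby lifetime after promotion to active
ok = (t1 >= t) | ((ts >= t1) & (t1 + t2 >= t))
r_mc = ok.mean()
print(f"analytic {r_analytic:.4f}  monte-carlo {r_mc:.4f}")
```

    With a high number of replications the two estimates agree closely, mirroring the validation strategy reported in the record.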

  4. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

    Many highly reliable products usually have complex structure, with their reliability being evaluated by two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics would be always positive and strictly increasing. In such a case, the gamma process is usually considered as a degradation process due to its independent and non-negative increments properties. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. Inferential method for the corresponding model parameters is then developed. Finally, for an illustration of the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
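
    For a gamma-process degradation model with a single performance characteristic (the paper treats two dependent ones), reliability at time t reduces to P(X(t) < threshold), since gamma-process increments are independent and X(t) ~ Gamma(shape*t, scale). A Monte Carlo sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative gamma-process parameters: increments over disjoint intervals are
# independent Gamma(shape_rate*dt, scale) variables, so X(t) ~ Gamma(shape_rate*t, scale).
shape_rate, scale, H = 0.8, 0.5, 5.0  # H is the soft-failure threshold

def reliability(t, n=100_000):
    """P(degradation at time t is still below threshold H), by Monte Carlo."""
    x_t = rng.gamma(shape_rate * t, scale, size=n)
    return float(np.mean(x_t < H))

for t in (2.0, 5.0, 10.0, 15.0):
    print(t, round(reliability(t), 3))
```

    The monotone, non-negative increments make the gamma process a natural fit for wear-type degradation such as fatigue crack growth.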

  5. On New Cautious Structural Reliability Models in the Framework of imprecise Probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev V.; Kozine, Igor

    2010-01-01

    Uncertainty of parameters in engineering design has been modeled in different frameworks such as interval analysis, fuzzy set and possibility theories, random set theory and imprecise probability theory. The authors of this paper have for many years been developing new imprecise reliability models and generalizing conventional ones to imprecise probabilities. The theoretical setup employed for this purpose is imprecise statistical reasoning (Walley 1991), whose general framework is provided by upper and lower previsions (expectations). The appeal of this theory is its ability to capture both aleatory (stochastic) and epistemic uncertainty and the flexibility with which information can be represented. The previous research of the authors related to generalizing structural reliability models to imprecise statistical measures is summarized in Utkin & Kozine (2002) and Utkin (2004...

  6. Artificial Intelligence Estimation of Carotid-Femoral Pulse Wave Velocity using Carotid Waveform.

    Science.gov (United States)

    Tavallali, Peyman; Razavi, Marianne; Pahlevan, Niema M

    2018-01-17

    In this article, we offer an artificial intelligence method to estimate the carotid-femoral Pulse Wave Velocity (PWV) non-invasively from one uncalibrated carotid waveform measured by tonometry and a few routine clinical variables. Since the signal processing inputs to this machine learning algorithm are sensor agnostic, the presented method can accompany any medical instrument that provides a calibrated or uncalibrated carotid pressure waveform. Our results show that, for an unseen hold-back test set population in the age range of 20 to 69, our model can estimate PWV with a Root-Mean-Square Error (RMSE) of 1.12 m/sec compared to the reference method. The results convey the fact that this model is a reliable surrogate of PWV. Our study also showed that estimated PWV was significantly associated with an increased risk of CVDs.

  7. Reliability and Repetition Effect of the Center of Pressure and Kinematics Parameters That Characterize Trunk Postural Control During Unstable Sitting Test.

    Science.gov (United States)

    Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J

    2017-03-01

    Although unstable-seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. Objective: to analyze the reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable sitting, and to explore the relationships between kinematic and COP parameters. Design: test-retest reliability study. Setting: biomechanics laboratory. Participants: 23 healthy male subjects, who volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through the standard error of measurement (SEM) and intraclass correlation coefficients (ICC(2,1)) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) averaging the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. The best-result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (0.71 < ICC < ...), kinematic parameters showed lower reliability (0.37 < ICC < ...) that improved using the average method (0.62 < ICC < ...), and COP parameters showed higher reliability than kinematic ones. Specifically, mean velocity of the COP showed the highest test-retest reliability, especially for the average and best methods. Although correlations between COP and mean joint angular velocity were high, the few relationships between COP and kinematic standard deviation suggest that different postural behaviors can lead to a similar balance performance during an unstable sitting protocol. Level of evidence: III.

  8. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes are different in terms of connecting the inverter to PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV integrated power system in order to assess the reliability and energy contribution of the solar system to meet overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and capacity level of a PV system considering the three topologies.

  9. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
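
    For a series system of the kind SRFYDO targets, the system works only if every component works, so component reliabilities multiply. A minimal illustration; the component names and values below are invented, not SRFYDO output:

```python
# Series system: reliability is the product of the component reliabilities.
# Hypothetical components and values for illustration only.
component_rel = {"igniter": 0.995, "motor": 0.990, "guidance": 0.980}

system_rel = 1.0
for name, r in component_rel.items():
    system_rel *= r  # one failed component fails the whole series system
print(f"system reliability = {system_rel:.4f}")
```

    Note that the series product is always below the weakest component, which is why uncertainty in the least reliable component tends to dominate system-level estimates.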

  10. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system.

    Science.gov (United States)

    Janson, Natalia B; Marsden, Christopher J

    2017-12-05

    It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide an additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type.

  11. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, relevant design variables are identified, and an optimal system reliability formulation is presented. An illustrative example is given....

  12. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  13. Research on cognitive reliability model for main control room considering human factors in nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Jianjun; Zhang Li; Wang Yiqun; Zhang Kun; Peng Yuyuan; Zhou Cheng

    2012-01-01

    To address the shortcomings of traditional cognitive factors and cognitive models, this paper presents a Bayesian network cognitive reliability model, taking the main control room as the reference background and human factors as the key points. The model mainly analyzes how cognitive reliability is affected by human factors, and for each cognitive node and the influence factors corresponding to it, a series of methods and function formulas to compute the node's cognitive reliability is proposed. The model and corresponding methods can be applied to the evaluation of the cognitive process of nuclear power plant operators and have significance for the prevention of safety accidents in nuclear power plants. (authors)

  14. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking

    DEFF Research Database (Denmark)

    Wágner, Dorottya Sarolta; Ramin, Elham; Szabo, Peter

    2015-01-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and the phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through...
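
    Hindered settling is commonly described by a Vesilind-type exponential velocity function of sludge concentration; the sketch below uses that generic form with illustrative parameter values, not the values estimated in the study. Filamentous bulking typically shows up as a smaller v0 and/or a larger settling parameter r_h:

```python
import numpy as np

def hindered_settling_velocity(X, v0=8.0, r_h=0.45):
    """Vesilind-type hindered settling velocity: v = v0 * exp(-r_h * X).
    v0 (m/h) and r_h (m3/kg) are illustrative, not the study's estimates."""
    return v0 * np.exp(-r_h * X)

# Settling velocity drops steeply as sludge concentration (kg/m3) rises
for X in (1.0, 3.0, 6.0):
    print(X, round(float(hindered_settling_velocity(X)), 2))
```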

  15. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  16. Relationships of the phase velocity with the micro architectural parameters in bovine trabecular bone in vitro: application of a stratified model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kang Il [Kangwon National University, Chuncheon (Korea, Republic of)

    2012-08-15

    The present study aims to provide insight into the relationships of the phase velocity with the micro architectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21 - 0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreement with the experimental measurements.

  17. Relationships of the phase velocity with the micro architectural parameters in bovine trabecular bone in vitro: application of a stratified model

    International Nuclear Information System (INIS)

    Lee, Kang Il

    2012-01-01

    The present study aims to provide insight into the relationships of the phase velocity with the micro architectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21 - 0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreement with the experimental measurements.
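    The fits above are ranked by the adjusted squared correlation coefficient, which penalizes R² for the number of predictors. A minimal sketch of that statistic for a univariate least-squares fit (the data points in the usage below are hypothetical, not the study's measurements):

```python
def adjusted_r2(x, y):
    """Fit y = a*x + b by least squares and return (R^2, adjusted R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    p = 1  # number of predictors in a univariate model
    return r2, 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

    For multivariate models, p grows with the number of predictors, so the adjusted value can fall even as raw R² rises.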

  18. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so under the same boundary conditions and assumptions the two methods express an equivalent logic of system reliability. Based on this, and drawing on the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. MFM are easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)

  19. Modelling of nuclear power plant control and instrumentation elements for automatic disturbance and reliability analysis

    International Nuclear Information System (INIS)

    Hollo, E.

    1985-08-01

    This Final Report summarizes the results of R/D work done within IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the 3-year period 01.08.1982 - 31.08.1985. Chapter 1 lists the main research objectives of the project. The main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available ''failure mode and effect analysis'' methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; set-up of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; definition of functional requirements for a reliability data bank (RDB) in WWER-440 units; and determination of the RDB input/output data structure and data manipulation services. The methods used are a priori failure mode and effect analysis, a combined fault tree/event tree modelling technique, structured computer programming, and the application of probability theory to the nuclear field

  20. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...

  1. Using the Weibull distribution reliability, modeling and inference

    CERN Document Server

    McCool, John I

    2012-01-01

    Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution
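    The two-parameter Weibull reliability and hazard functions at the core of such methods can be sketched as follows (the shape and scale values used in any example are arbitrary illustrative choices):

```python
import math

def weibull_reliability(t, beta, eta):
    """Survival function R(t) = exp(-(t/eta)**beta) of a two-parameter
    Weibull with shape beta and scale (characteristic life) eta."""
    return math.exp(-(t / eta) ** beta)

def weibull_hazard(t, beta, eta):
    """Hazard rate h(t) = (beta/eta)*(t/eta)**(beta-1): increasing for
    beta > 1 (wear-out), constant for beta = 1 (exponential case)."""
    return (beta / eta) * (t / eta) ** (beta - 1)
```

    By construction, R(eta) = exp(-1) ≈ 0.368 regardless of the shape parameter, which is why eta is called the characteristic life.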

  2. Ab initio calculation of the sound velocity of dense hydrogen: implications for models of Jupiter

    NARCIS (Netherlands)

    Alavi, A.; Parrinello, M.; Frenkel, D.

    1995-01-01

    First-principles molecular dynamics simulations were used to calculate the sound velocity of dense hydrogen, and the results were compared with extrapolations of experimental data that currently conflict with either astrophysical models or data obtained from recent global oscillation measurements of

  3. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
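    The fault-tree approach described here derives system reliability from component reliability by combining failure probabilities through AND (redundancy) and OR (series) gates. A minimal sketch under the usual independence assumption, with illustrative component failure probabilities (not values from the report):

```python
def and_gate(failure_probs):
    """AND gate: the event occurs only if all inputs fail (redundant parts)."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

def or_gate(failure_probs):
    """OR gate: the event occurs if any input fails (series parts)."""
    p = 1.0
    for q in failure_probs:
        p *= (1.0 - q)
    return 1.0 - p

# Illustrative system: a redundant pair (each failing with prob 0.1)
# in series with a single part failing with prob 0.05.
top = or_gate([and_gate([0.1, 0.1]), 0.05])
```

    Optimization against such a baseline model then amounts to asking which gate input most reduces the top-event probability per unit cost.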

  4. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    Science.gov (United States)

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    Abstract We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, who aimed to assess the reliability of soft tissue model based implant surgical guides, reported that accuracy was evaluated using software. 1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, the positive likelihood ratio (sensitivity/(1-specificity)) and the negative likelihood ratio ((1-sensitivity)/specificity), as well as the odds ratio, are among the measures used to evaluate the validity (accuracy) of a single test against a gold standard.2-4 It is not clear to which of the above-mentioned estimates the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is often assessed with statistical tests such as Pearson's r, least squares, and the paired t-test, all of which are common mistakes in reliability analysis.5 Briefly, for quantitative variables the intraclass correlation coefficient (ICC), and for qualitative variables weighted kappa, should be used, with the caution that kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analysis, appropriate tests should be

  5. Indentation of aluminium foam at low velocity

    Directory of Open Access Journals (Sweden)

    Shi Xiaopeng

    2015-01-01

    Full Text Available The indentation behaviour of aluminium foams at low velocity (10 m/s ∼ 30 m/s) was investigated both experimentally and by numerical simulation in this paper. A flat-ended indenter was used and the force-displacement history was recorded. A Split Hopkinson pressure bar was used to obtain the indentation velocity and forces in the dynamic experiments. Because of the low strength of the aluminium foam, a PMMA bar was used, and the experimental data were corrected using Bacon's method. The energy absorption characteristics varying with impact velocity were then obtained. It was found that the energy absorption ability of aluminium foam increases gradually in the quasi-static regime and shows a significant increase at a velocity of ∼10 m/s. Numerical simulation was also conducted to investigate this process. A 3D Voronoi model was used, and models with different relative densities were investigated as well as those with different failure strains. The indentation energy increases with both the relative density and the failure strain. The analysis of the FE model implies that the significant change in the energy absorption ability of aluminium foam in indentation at ∼10 m/s may be caused by the plastic wave effect.

  6. Measuring the Bed Load velocity in Laboratory flumes using ADCP and Digital Cameras

    Science.gov (United States)

    Conevski, Slaven; Guerrero, Massimo; Rennie, Colin; Bombardier, Josselin

    2017-04-01

    Measuring the transport rate and apparent velocity of bed load is notoriously hard, and no established technique yields continuous data. There are many empirical models based on estimates of the shear stress, but only a few involve direct measurement of the bed load velocity. The bottom tracking (BT) mode of an acoustic Doppler current profiler (ADCP) has often been used to estimate the apparent velocity of the bed load. The basic idea is to exploit the bias of the BT signal towards bed load movement and to calibrate this signal against traditional measuring techniques. Such measurements are scarce and seldom reliable, since they are not taken under controlled conditions. So far, no laboratory study under controlled conditions has confirmed the assumptions made in estimating the apparent bed load velocity, nor those made in calibrating the empirical equations. This study therefore explores several experiments under stationary conditions, in which the signal of the ADCP BT mode is recorded and compared to the bed load motion recorded by digital camera videography. The experiments were performed in the hydraulic laboratories of Ottawa and Bologna, using two different ADCPs and two different high-resolution cameras. In total, more than 30 experiments were performed with different sediment mixtures and hydraulic conditions. In general, a good match is documented between the apparent bed load velocity measured by the ADCP and that from videography. The slight deviations in individual experiments can be explained by gravel particle inhomogeneity, the difficulty of reproducing identical hydro-sedimentological conditions, and the randomness of the backscattering strength.

  7. Models for assessing the relative phase velocity in a two-phase flow. Status report

    International Nuclear Information System (INIS)

    Schaffrath, A.; Ringel, H.

    2000-06-01

    The knowledge of slip or drift flux in two-phase flow is necessary for several technical processes (e.g. two-phase pressure losses, heat and mass transfer in steam generators and condensers, dwell time in chemical reactors, moderation effectiveness of the two-phase coolant in BWRs). In the following, the most important models for two-phase flow with different phase velocities (e.g. slip or drift models, the analogy between pressure loss and steam quality, ε - ε models, and models for the calculation of the void distribution in quiescent fluids) are classified, described and worked up for a later comparison with our own experimental data. (orig.)

  8. Iterative reflectivity-constrained velocity estimation for seismic imaging

    Science.gov (United States)

    Masaya, Shogo; Verschuur, D. J. Eric

    2018-03-01

    This paper proposes a reflectivity constraint for velocity estimation to optimally solve the inverse problem in active seismic imaging. The constraint is based on the velocity model derived from the definitions of reflectivity and acoustic impedance. It requires neither prior information about the subsurface nor large extra computational costs, such as the calculation of so-called Hessian matrices. We incorporate this constraint into the Joint Migration Inversion algorithm, which simultaneously estimates both the reflectivity and the velocity model of the subsurface in an iterative process. Using so-called full wavefield modeling, the misfit between forward-modeled and measured data is minimized. Numerical and field data examples demonstrate the validity of the proposed algorithm even when accurate initial models and the low-frequency components of the observed seismic data are absent.
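    The definitions the constraint builds on are standard: acoustic impedance Z = ρc, and the normal-incidence reflection coefficient between two layers r = (Z₂ − Z₁)/(Z₂ + Z₁). A minimal sketch with illustrative layer properties (not values from the paper):

```python
def acoustic_impedance(density, velocity):
    """Acoustic impedance Z = rho * c."""
    return density * velocity

def reflectivity(z1, z2):
    """Normal-incidence reflection coefficient at the interface
    between an upper layer (z1) and a lower layer (z2)."""
    return (z2 - z1) / (z2 + z1)

# Illustrative two-layer example (SI units: kg/m^3 and m/s).
z_top = acoustic_impedance(2000.0, 1800.0)
z_bot = acoustic_impedance(2200.0, 2400.0)
r = reflectivity(z_top, z_bot)
```

    Inverting these relations ties the reflectivity image back to velocity, which is the coupling the constraint exploits.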

  9. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components, while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
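    A Monte Carlo check of a Weibull failure probability against its analytic CDF, in the spirit of the validation described above (the mission time and Weibull parameters are illustrative, not the paper's):

```python
import math
import random

def mc_failure_prob(t_mission, shape, scale, n=200_000, seed=1):
    """Estimate P(T < t_mission) for Weibull-distributed failure times
    by direct sampling. weibullvariate takes (scale, shape)."""
    rng = random.Random(seed)
    fails = sum(rng.weibullvariate(scale, shape) < t_mission for _ in range(n))
    return fails / n

def weibull_cdf(t, shape, scale):
    """Analytic failure probability F(t) = 1 - exp(-(t/scale)**shape)."""
    return 1.0 - math.exp(-(t / scale) ** shape)
```

    For a full semi-Markov system model the sampling loop would draw sojourn times per state rather than a single component lifetime, but the validation principle is the same.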

  10. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  11. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  12. Accurate calibration of the velocity-dependent one-scale model for domain walls

    International Nuclear Information System (INIS)

    Leite, A.M.M.; Martins, C.J.A.P.; Shellard, E.P.S.

    2013-01-01

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048³, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34±0.16 and k_w = 0.98±0.07, which are of higher precision than (but in agreement with) earlier estimates.

  13. Accurate calibration of the velocity-dependent one-scale model for domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Leite, A.M.M., E-mail: up080322016@alunos.fc.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Ecole Polytechnique, 91128 Palaiseau Cedex (France); Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Shellard, E.P.S., E-mail: E.P.S.Shellard@damtp.cam.ac.uk [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2013-01-08

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048³, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34±0.16 and k_w = 0.98±0.07, which are of higher precision than (but in agreement with) earlier estimates.

  14. Extremal inversion of lunar travel time data. [seismic velocity structure

    Science.gov (United States)

    Burkhard, N.; Jackson, D. D.

    1975-01-01

    The tau method of travel-time inversion, developed by Bessonova et al. (1974), is applied to lunar P-wave travel time data to find limits on the velocity structure of the moon. Tau is the singular solution to the Clairaut equation. Models with low-velocity zones, with low-velocity zones at differing depths, and without low-velocity zones were all found to be consistent with the data and within the determined limits. Models with and without a discontinuity at about 25-km depth have been found which agree with all travel time data to within two standard deviations. In other words, the existence of the discontinuity and its size and location have not been uniquely resolved. Models with low-velocity channels are also possible.

  15. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    International Nuclear Information System (INIS)

    Naimi, Yaghoob; Naimi, Mohammad

    2013-01-01

    We introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and spreader-stifler (SR) interactions have the same rate α, we define separate rates α (1) and α (2) for the SS and SR interactions, respectively. The effect of varying α (1) and α (2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)
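    A homogeneous mean-field sketch of a rumor model with distinct SS and SR rates, in the spirit of the generalization described above (the equations and all parameter values are illustrative; the paper's exact formulation may differ):

```python
def simulate(lmbda, alpha1, alpha2, k=6, s0=0.01, steps=20_000, dt=0.01):
    """Euler integration of ignorant (i), spreader (s), stifler (r)
    densities on a homogeneous network of mean degree k.
    lmbda: ignorant->spreader rate; alpha1: SS stifling; alpha2: SR stifling."""
    i, s, r = 1.0 - s0, s0, 0.0
    for _ in range(steps):
        di = -lmbda * k * i * s
        ds = lmbda * k * i * s - alpha1 * k * s * s - alpha2 * k * s * r
        dr = alpha1 * k * s * s + alpha2 * k * s * r
        i, s, r = i + di * dt, s + ds * dt, r + dr * dt
    return i, s, r
```

    Sweeping alpha1 and alpha2 separately reproduces the kind of final-stifler-density comparison the abstract describes.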

  16. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  17. Lithospheric structure of the westernmost Mediterranean inferred from finite frequency Rayleigh wave tomography S-velocity model.

    Science.gov (United States)

    Palomeras, Imma; Villasenor, Antonio; Thurner, Sally; Levander, Alan; Gallart, Josep; Harnafi, Mimoun

    2016-04-01

    The Iberian Peninsula and Morocco, separated by the Alboran Sea and the Algerian Basin, constitute the westernmost Mediterranean. From north to south this region consists of the Pyrenees, the result of interaction between the Iberian and Eurasian plates; the Iberian Massif, a region that has been undeformed since the end of the Paleozoic; the Central System and Iberian Chain, regions with intracontinental Oligocene-Miocene deformation; the Gibraltar Arc (Betics, Rif and Alboran terranes) and the Atlas Mountains, resulting from post-Oligocene subduction roll-back and Eurasian-Nubian plate convergence. In this study we analyze data from recent broad-band array deployments and permanent stations on the Iberian Peninsula and in Morocco (Spanish IberArray and Siberia arrays, the US PICASSO array, the University of Munster array, and the Spanish, Portuguese, and Moroccan National Networks) to characterize its lithospheric structure. The combined array of 350 stations has an average interstation spacing of ~60 km, comparable to USArray. We have calculated the Rayleigh waves phase velocities from ambient noise for short periods (4 s to 40 s) and teleseismic events for longer periods (20 s to 167 s). We inverted the phase velocities to obtain a shear velocity model for the lithosphere to ~200 km depth. The model shows differences in the crust for the different areas, where the highest shear velocities are mapped in the Iberian Massif crust. The crustal thickness is highly variable ranging from ~25 km beneath the eastern Betics to ~55km beneath the Gibraltar Strait, Internal Betics and Internal Rif. Beneath this region a unique arc shaped anomaly with high upper mantle velocities (>4.6 km/s) at shallow depths (volcanic fields in Iberia and Morocco, indicative of high temperatures at relatively shallow depths, and suggesting that the lithosphere has been removed beneath these areas

  18. Terminal velocities for a large sample of O stars, B supergiants, and Wolf-Rayet stars

    International Nuclear Information System (INIS)

    Prinja, R.K.; Barlow, M.J.; Howarth, I.D.

    1990-01-01

    It is argued that easily measured, reliable estimates of terminal velocities for early-type stars are provided by the central velocity asymptotically approached by narrow absorption features and by the violet limit of zero residual intensity in saturated P Cygni profiles. These estimators are used to determine terminal velocities, v(infinity), for 181 O stars, 70 early B supergiants, and 35 Wolf-Rayet stars. For OB stars, the values are typically 15-20 percent smaller than the extreme violet edge velocities, v(edge), while for WR stars v(infinity) = 0.76 v(edge) on average. New mass-loss rates for WR stars which are thermal radio emitters are given, taking into account the new terminal velocities and recent revisions to estimates of distances and to the mean nuclear mass per electron. The relationships between v(infinity), the surface escape velocities, and effective temperatures are examined. 67 refs

  19. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  20. Flocking and invariance of velocity angles.

    Science.gov (United States)

    Liu, Le; Huang, Lihong; Wu, Jianhong

    2016-04-01

    Motsch and Tadmor considered an extended Cucker-Smale model to investigate the flocking behavior of self-organized systems of interacting species. In this extended model, a cone of vision was introduced, so that outside the cone the influence of one agent on another is lost and the corresponding influence function takes the value zero. This makes it difficult to apply the Motsch-Tadmor and Cucker-Smale method to prove the flocking property of the system. Here, we examine the variation of the velocity angles between two arbitrary agents and obtain a monotonicity property for the maximum cone of velocity angles. This monotonicity permits us to utilize existing arguments to show the flocking property of the system under consideration, when the initial velocity angles satisfy some minor technical constraints.