WorldWideScience

Sample records for reliable velocity model

  1. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    Science.gov (United States)

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended for monitoring and prescribing the relative load in the Smith machine bench press exercise.
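    The comparison above — first-order vs. second-order polynomial fits of an individual load-velocity relationship — can be sketched in a few lines. All numbers below are illustrative, not data from the study.

```python
import numpy as np

# Hypothetical load-velocity data for one athlete: loads in %1RM,
# mean concentric velocity in m/s (values are illustrative only).
pct_1rm = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
velocity = np.array([1.30, 1.15, 1.00, 0.86, 0.72, 0.58, 0.44, 0.31, 0.17])

# First-order (linear) and second-order polynomial fits, as compared in the study.
lin = np.polyfit(pct_1rm, velocity, 1)
quad = np.polyfit(pct_1rm, velocity, 2)

def r_squared(coeffs, x, y):
    """Coefficient of determination for a polynomial fit."""
    residuals = y - np.polyval(coeffs, x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("linear R^2:", round(r_squared(lin, pct_1rm, velocity), 4))
print("quadratic R^2:", round(r_squared(quad, pct_1rm, velocity), 4))

# Velocity expected at 75 %1RM under the linear model:
print("v @ 75 %1RM:", round(float(np.polyval(lin, 75.0)), 3), "m/s")
```

    The quadratic fit can never have a lower R² than the linear one on the same data; the study's point is that the extra flexibility buys no between-session reliability.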

  2. Mean Velocity vs. Mean Propulsive Velocity vs. Peak Velocity: Which Variable Determines Bench Press Relative Load With Higher Reliability?

    Science.gov (United States)

    García-Ramos, Amador; Pestaña-Melero, Francisco L; Pérez-Castilla, Alejandro; Rojas, Francisco J; Gregory Haff, G

    2018-05-01

    García-Ramos, A, Pestaña-Melero, FL, Pérez-Castilla, A, Rojas, FJ, and Haff, GG. Mean velocity vs. mean propulsive velocity vs. peak velocity: which variable determines bench press relative load with higher reliability? J Strength Cond Res 32(5): 1273-1279, 2018-This study aimed to compare between 3 velocity variables (mean velocity [MV], mean propulsive velocity [MPV], and peak velocity [PV]): (a) the linearity of the load-velocity relationship, (b) the accuracy of general regression equations to predict relative load (%1RM), and (c) the between-session reliability of the velocity attained at each percentage of the 1-repetition maximum (%1RM). The full load-velocity relationship of 30 men was evaluated by means of linear regression models in the concentric-only and eccentric-concentric bench press throw (BPT) variants performed with a Smith machine. The 2 sessions of each BPT variant were performed within the same week separated by 48-72 hours. The main findings were as follows: (a) the MV showed the strongest linearity of the load-velocity relationship (median r = 0.989 for concentric-only BPT and 0.993 for eccentric-concentric BPT), followed by MPV (median r = 0.983 for concentric-only BPT and 0.980 for eccentric-concentric BPT), and finally PV (median r = 0.974 for concentric-only BPT and 0.969 for eccentric-concentric BPT); (b) the accuracy of the general regression equations to predict relative load (%1RM) from movement velocity was higher for MV (SEE = 3.80-4.76%1RM) than for MPV (SEE = 4.91-5.56%1RM) and PV (SEE = 5.36-5.77%1RM); and (c) the PV showed the lowest within-subjects coefficient of variation (3.50%-3.87%), followed by MV (4.05%-4.93%), and finally MPV (5.11%-6.03%). Taken together, these results suggest that the MV could be the most appropriate variable for monitoring the relative load (%1RM) in the BPT exercise performed in a Smith machine.

  3. Reliability of force-velocity relationships during deadlift high pull.

    Science.gov (United States)

    Lu, Wei; Boyas, Sébastien; Jubeau, Marc; Rahmani, Abderrahmane

    2017-11-13

    This study aimed to evaluate the within- and between-session reliability of force, velocity and power performances and to assess the force-velocity relationship during the deadlift high pull (DHP). Nine participants performed two identical sessions of DHP with loads ranging from 30 to 70% of body mass. The force was measured by a force plate under the participants' feet. The velocity of the 'body + lifted mass' system was calculated by integrating the acceleration, and the power was calculated as the product of force and velocity. The force-velocity relationships were obtained from linear regressions of both mean and peak values of force and velocity. The within- and between-session reliability was evaluated by using coefficients of variation (CV) and intraclass correlation coefficients (ICC). Results showed that the DHP force-velocity relationships were significantly linear (R² > 0.90), and mean and peak velocities showed good agreement between sessions. The DHP force-velocity relationships can therefore be considered reliable and utilised as a tool to characterise individuals' muscular profiles.
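    The measurement chain described here — acceleration from the force plate signal, velocity by integration, power as force times velocity — is easy to reproduce numerically. The force trace and constants below are made up for illustration.

```python
import numpy as np

# Minimal sketch (hypothetical numbers): reconstruct velocity and power of the
# 'body + lifted mass' system from a vertical ground reaction force trace.
fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1.0 / fs)  # 0.5 s concentric phase
mass = 80.0                      # body + lifted mass in kg
g = 9.81

# Illustrative force trace: a smooth push above body weight.
force = mass * g + 600.0 * np.sin(np.pi * t / 0.5)

# a(t) = (F(t) - m*g) / m ; v(t) by trapezoidal integration of a(t).
accel = (force - mass * g) / mass
velocity = np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) / 2) / fs))
power = force * velocity         # instantaneous power, W

print("peak velocity:", round(float(velocity.max()), 2), "m/s")
print("peak power:", round(float(power.max()), 1), "W")
```

    Mean and peak values of force and velocity extracted per load from traces like this are what feed the linear force-velocity regressions.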

  4. The Reliability of Individualized Load-Velocity Profiles.

    Science.gov (United States)

    Banyard, Harry G; Nosaka, K; Vernon, Alex D; Haff, G Gregory

    2017-11-15

    This study examined the reliability of peak velocity (PV), mean propulsive velocity (MPV), and mean velocity (MV) in the development of load-velocity profiles (LVP) in the full-depth free-weight back squat performed with maximal concentric effort. Eighteen resistance-trained men performed a baseline one-repetition maximum (1RM) back squat trial and three subsequent 1RM trials used for reliability analyses, with a 48-hour interval between trials. 1RM trials comprised lifts from six relative loads: 20, 40, 60, 80, 90, and 100% 1RM. Individualized LVPs for PV, MPV, or MV were derived from loads that were highly reliable based on the following criteria: intra-class correlation coefficient (ICC) >0.70, coefficient of variation (CV) ≤10%, and Cohen's d effect size (ES). No significant differences (p > 0.05) were observed between trials, movement velocities, or between linear regression and second-order polynomial fits. PV (20-100% 1RM), MPV (20-90% 1RM), and MV (20-90% 1RM) are reliable and can be utilized to develop LVPs using linear regression. Conceptually, LVPs can be used to monitor changes in movement velocity and employed as a method for adjusting sessional training loads according to daily readiness.
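    The CV criterion used to screen loads for inclusion in an individualized profile can be sketched as follows; the velocity matrix is hypothetical.

```python
import numpy as np

# Sketch of the reliability screen described above (illustrative numbers):
# velocities at one load across three repeated 1RM trials, one row per athlete.
trials = np.array([
    [0.92, 0.95, 0.90],
    [0.80, 0.78, 0.82],
    [1.01, 0.99, 1.04],
    [0.88, 0.91, 0.87],
])

# Within-participant CV: SD across trials / mean across trials, averaged over athletes.
cv = float(np.mean(trials.std(axis=1, ddof=1) / trials.mean(axis=1))) * 100

# The load would be kept in the profile only if CV <= 10% (one of the study's criteria).
print(f"within-participant CV: {cv:.1f}%  -> passes CV criterion: {cv <= 10.0}")
```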

  5. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements demand understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  6. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  7. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  8. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms for finding the optimum supply plan, using an economic criterion together with a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated on that basis. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account when planning supplies subject to the supplier's functional reliability is presented.
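    The core trade-off — minimize cost subject to a required probability of failure-free operation — can be illustrated with a toy enumeration (a real instance would be posed as the linear program the abstract mentions). All costs, reliabilities and the threshold below are hypothetical.

```python
from itertools import product

# Toy sketch: for each of three supply channels choose a cheap or a redundant
# (more reliable) option, minimising total cost while keeping the chain's
# non-failure probability above a required level. Channel reliabilities
# multiply because the channels form a series structure.
options = [  # (cost, non-failure probability) per channel, illustrative
    [(100, 0.95), (160, 0.995)],
    [(80, 0.90), (140, 0.99)],
    [(120, 0.97), (200, 0.999)],
]
required = 0.97

best = None
for choice in product(*[range(len(o)) for o in options]):
    cost = sum(options[i][c][0] for i, c in enumerate(choice))
    rel = 1.0
    for i, c in enumerate(choice):
        rel *= options[i][c][1]
    if rel >= required and (best is None or cost < best[0]):
        best = (cost, rel, choice)

print("cheapest feasible plan (cost, reliability, choices):", best)
```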

  9. [Measurements of blood velocities using duplex sonography in carotid artery stents: analysis of reliability in an in-vitro model and computational fluid dynamics (CFD)].

    Science.gov (United States)

    Schönwald, U G; Jorczyk, U; Kipfmüller, B

    2011-01-01

    Stents are commonly used for the treatment of occlusive artery diseases in carotid arteries. Today, there is a controversial discussion as to whether duplex sonography (DS) displays blood velocities (BV) that are too high in stented areas. The goal of this study was to evaluate the effect of stenting on DS with respect to BV in artificial carotid arteries. The results of computational fluid dynamics (CFD) were also used for the comparison. To analyze BV using DS, a phantom with a constant flow (70 cm/s) was created. Three different types of stents for carotid arteries were selected. The phantom fluid consisted of 67 % water and 33 % glycerol. All BV measurements were carried out on the last third of the stents. Furthermore, all test runs were simulated using CFD. All measurements were statistically analyzed. DS-derived BV values increased significantly after the placement of the Palmaz Genesis stent (77.6 ± 4.92 cm/sec, p = 0.03). A higher increase in BV values was registered when using the Precise RX stent (80.1 ± 2.01 cm/sec). CFD simulations showed similar results. Stents have a significant impact on BV, but no effect on DS. The main factor in the blood flow acceleration is the material thickness of the stents. Therefore, different stents need different velocity criteria. Furthermore, the results of computational fluid dynamics prove that CFD can be used to simulate BV in stented silicone tubes. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Efficient Algorithm for a k-out-of-N System Reliability Modeling-Case Study: Pitot Sensors System for Aircraft Velocity

    Directory of Open Access Journals (Sweden)

    Wajih Ezzeddine

    2017-08-01

    The k-out-of-N system is widely applied in industrial systems. This structure is a class of fault-tolerant systems for which both parallel and series systems are special cases. Because of the importance of determining industrial system reliability for production and maintenance management purposes, a number of techniques and methods have been developed to formulate and estimate its analytic expression. In this paper, an algorithm is put forward for a k-out-of-N system with identical components, incorporating information about the influence factors that affect the system's efficiency. The developed approach is applied to the case of the Pitot sensor system. The algorithm's application could, however, be generalized to any device that during a mission is subject to environmental and operational factors affecting its degradation process.
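    For identical, independent components the textbook k-out-of-N reliability is a binomial tail sum; the sensor count and component reliability below are illustrative, not the paper's values.

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n identical, independent components
    (each with survival probability p) are functioning."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative numbers: an aircraft needs agreement from at least 2 of its
# 3 Pitot sensors, each assumed to be 99% reliable over the mission.
r = k_out_of_n_reliability(2, 3, 0.99)
print(f"2-out-of-3 system reliability: {r:.6f}")
```

    Setting k = 1 recovers a parallel system and k = n a series system, which is the "special cases" remark in the abstract.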

  11. Test-retest reliability of barbell velocity during the free-weight bench-press exercise.

    Science.gov (United States)

    Stock, Matt S; Beck, Travis W; DeFreitas, Jason M; Dillon, Michael A

    2011-01-01

    The purpose of this study was to calculate test-retest reliability statistics for peak barbell velocity during the free-weight bench-press exercise for loads corresponding to 10-90% of the 1-repetition maximum (1RM). Twenty-one healthy, resistance-trained men (mean ± SD age = 23.5 ± 2.7 years; body mass = 90.5 ± 14.6 kg; 1RM bench press = 125.4 ± 18.4 kg) volunteered for this study. A minimum of 48 hours after a maximal strength testing and familiarization session, the subjects performed single repetitions of the free-weight bench-press exercise at each tenth percentile (10-90%) of the 1RM on 2 separate occasions. For each repetition, the subjects were instructed to press the barbell as rapidly as possible, and peak barbell velocity was measured with a Tendo Weightlifting Analyzer. The test-retest intraclass correlation coefficients (model 2,1) and corresponding standard errors of measurement (expressed as percentages of the mean barbell velocity values) were 0.717 (4.2%), 0.572 (5.0%), 0.805 (3.1%), 0.669 (4.7%), 0.790 (4.6%), 0.785 (4.8%), 0.811 (5.8%), 0.714 (10.3%), and 0.594 (12.6%) for the weights corresponding to 10-90% 1RM. There were no mean differences between the barbell velocity values from trials 1 and 2. These results indicated moderate to high test-retest reliability for barbell velocity from 10 to 70% 1RM but decreased consistency at 80 and 90% 1RM. When examining barbell velocity during the free-weight bench-press exercise, greater measurement error must be overcome at 80 and 90% 1RM to be confident that an observed change is meaningful.
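    The statistics reported here, ICC (model 2,1) and the SEM, can be computed from a subjects x trials matrix via a two-way ANOVA decomposition. The velocities below are hypothetical; the SEM form used (SD x sqrt(1 - ICC)) is one common convention.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: subjects x trials array (Shrout & Fleiss convention)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-trial means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical peak barbell velocities (m/s) at one load, trials 1 and 2.
v = np.array([[1.10, 1.12], [0.95, 0.93], [1.22, 1.25], [1.05, 1.02], [0.88, 0.90]])
icc = icc_2_1(v)
sem = float(v.std(ddof=1)) * float(np.sqrt(1 - icc))  # standard error of measurement
print(f"ICC(2,1) = {icc:.3f}, SEM = {sem:.3f} m/s")
```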

  12. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has…

  13. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  14. Reliability Overhaul Model

    Science.gov (United States)

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: a uniform variate U ~ U(0,1) is generated and passed through the conditional exponential inverse CDF. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method. Normal random variables are generated using a standard normal transformation together with the inverse transform method. An appendix lists the distributions supported by the model.
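    The inverse transform method named in this record is straightforward to demonstrate for the (unconditional) exponential and Weibull cases; parameter values are illustrative.

```python
import math
import random

# Sketch of the inverse transform method: map a uniform draw U ~ U(0,1)
# through the inverse CDF of the target distribution.
def exponential_inv(u: float, lam: float) -> float:
    """Inverse CDF of Exponential(lam): x = -ln(1-u)/lam."""
    return -math.log(1.0 - u) / lam

def weibull_inv(u: float, shape: float, scale: float) -> float:
    """Inverse CDF of Weibull(shape, scale): x = scale * (-ln(1-u))**(1/shape)."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

random.seed(42)
exp_sample = [exponential_inv(random.random(), lam=2.0) for _ in range(10_000)]
print("sample mean (expect ~0.5):", round(sum(exp_sample) / len(exp_sample), 3))
```

    With shape = 1 the Weibull inverse reduces to the exponential one, a quick sanity check on both formulas.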

  15. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    Science.gov (United States)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. 
These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications
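    The basic query a framework like UCVM standardizes — give me Vp, Vs and density at a point — can be mimicked with a toy gridded model. Everything below (grid, spacing, Vp/Vs ratio, Gardner-type density relation) is an assumption for illustration, not UCVM's actual interface or any real model's values.

```python
import numpy as np

# Toy gridded velocity model: material properties on a regular 3D grid,
# queried by nearest grid point. All values are made up.
nx, ny, nz = 4, 4, 5
vp = 5000.0 + 300.0 * np.arange(nz)[None, None, :] * np.ones((nx, ny, 1))  # m/s, increases with depth
vs = vp / 1.73                                 # crude constant Vp/Vs ratio (assumption)
rho = 1741.0 * (vp / 1000.0) ** 0.25           # Gardner-type Vp-density relation, kg/m^3 (assumption)
model = {"vp": vp, "vs": vs, "rho": rho}

def query(model, x, y, z, spacing=1000.0):
    """Nearest-grid-point lookup of material properties at a point (metres)."""
    i, j, k = (int(round(c / spacing)) for c in (x, y, z))
    return {name: float(grid[i, j, k]) for name, grid in model.items()}

print(query(model, 1200.0, 800.0, 3100.0))
```

    Real frameworks add the hard parts this sketch omits: geographic projections, model registration, interpolation, and fallback to a 1D background model outside a regional model's coverage.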

  16. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20 – 25 years of wind turbines useful life, Operation & Maintenance costs are typically estimated to be a quarter...... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...... to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and result in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when...

  17. Reliability of power and velocity variables collected during the traditional and ballistic bench press exercise.

    Science.gov (United States)

    García-Ramos, Amador; Haff, G Gregory; Padial, Paulino; Feriche, Belén

    2018-03-01

    This study aimed to examine the reliability of different power and velocity variables during the Smith machine bench press (BP) and bench press throw (BPT) exercises. Twenty-two healthy men conducted four testing sessions after a preliminary BP one-repetition maximum (1RM) test. In a counterbalanced order, participants performed two sessions of BP in one week and two sessions of BPT in another week. Mean propulsive power, peak power, mean propulsive velocity, and peak velocity at each tenth percentile (20-70% of 1RM) were recorded by a linear transducer. The within-participants coefficient of variation (CV) was higher for the load-power relationship compared to the load-velocity relationship in both the BP (5.3% vs. 4.1%; CV ratio = 1.29) and BPT (4.7% vs. 3.4%; CV ratio = 1.38). Mean propulsive variables showed lower reliability than peak variables in both the BP (5.4% vs. 4.0%, CV ratio = 1.35) and BPT (4.8% vs. 3.3%, CV ratio = 1.45). All variables were deemed reliable, with the peak velocity demonstrating the lowest within-participants CV. Based upon these findings, the peak velocity should be chosen for the accurate assessment of BP and BPT performance.

  18. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  19. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776
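    The record's keywords pair Rasch-model reliability with Cronbach's alpha; alpha itself is simple to compute from a persons x items score matrix. The 0/1 responses below are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(total score)).
    scores: persons x items array."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative dichotomous item responses (6 persons, 4 items).
x = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
print("alpha:", round(float(cronbach_alpha(x)), 3))
```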

  20. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP) and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis.
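    The NHPP framing can be illustrated with a generic mean value function. This is not MERF itself: the Goel-Okumoto form below, and its parameters, are a stand-in chosen only to show how an NHPP yields expected failure counts over intervals.

```python
import math

# Generic NHPP illustration (not the MERF/EARF model): Goel-Okumoto mean
# value function m(t) = a * (1 - exp(-b t)) = expected cumulative failures by t.
def expected_failures(t: float, a: float, b: float) -> float:
    return a * (1.0 - math.exp(-b * t))

def failures_in_interval(t1: float, t2: float, a: float, b: float) -> float:
    """In an NHPP, counts over disjoint intervals are independent Poisson
    variables with mean m(t2) - m(t1)."""
    return expected_failures(t2, a, b) - expected_failures(t1, a, b)

a, b = 120.0, 0.05   # hypothetical: 120 latent faults, detection rate 0.05 per week
print("expected failures by week 10:", round(expected_failures(10, a, b), 1))
print("expected failures, weeks 10-20:", round(failures_in_interval(10, 20, a, b), 1))
```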

  1. Reliability and Validity of the Load-Velocity Relationship to Predict the 1RM Back Squat.

    Science.gov (United States)

    Banyard, Harry G; Nosaka, Kazunori; Haff, G Gregory

    2017-07-01

    Banyard, HG, Nosaka, K, and Haff, GG. Reliability and validity of the load-velocity relationship to predict the 1RM back squat. J Strength Cond Res 31(7): 1897-1904, 2017-This study investigated the reliability and validity of the load-velocity relationship to predict the free-weight back squat one repetition maximum (1RM). Seventeen strength-trained males performed three 1RM assessments on 3 separate days. All repetitions were performed to full depth with maximal concentric effort. Predicted 1RMs were calculated by entering the mean concentric velocity of the 1RM (V1RM) into an individualized linear regression equation, which was derived from the load-velocity relationship of 3 (20, 40, 60% of 1RM), 4 (20, 40, 60, 80% of 1RM), or 5 (20, 40, 60, 80, 90% of 1RM) incremental warm-up sets. The actual 1RM (140.3 ± 27.2 kg) was very stable between 3 trials (ICC = 0.99; SEM = 2.9 kg; CV = 2.1%; ES = 0.11). Predicted 1RM from 5 warm-up sets up to and including 90% of 1RM was the most reliable (ICC = 0.92; SEM = 8.6 kg; CV = 5.7%; ES = -0.02) and valid (r = 0.93; SEE = 10.6 kg; CV = 7.4%; ES = 0.71) of the predicted 1RM methods. However, all predicted 1RMs were significantly different (p ≤ 0.05; ES = 0.71-1.04) from the actual 1RM. Individual variation for the actual 1RM was small between trials, ranging from -5.6 to 4.8%, compared with the most accurate predictive method up to 90% of 1RM, which was more variable (-5.5 to 27.8%). Importantly, the V1RM (0.24 ± 0.06 m·s⁻¹) was unreliable between trials (ICC = 0.42; SEM = 0.05 m·s⁻¹; CV = 22.5%; ES = 0.14). The load-velocity relationship for the full depth free-weight back squat showed moderate reliability and validity but could not accurately predict 1RM, which was stable between trials. Thus, the load-velocity relationship 1RM prediction method used in this study cannot accurately modify sessional training loads because of large V1RM variability.
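    The prediction scheme tested here — regress load on warm-up velocities, then evaluate the line at V1RM — can be sketched directly. The loads and velocities below are hypothetical; only V1RM = 0.24 m/s echoes the group mean reported in the abstract.

```python
import numpy as np

# Sketch of individualized 1RM prediction from the load-velocity relationship.
warmup_pct = np.array([20.0, 40.0, 60.0, 80.0, 90.0])   # %1RM of the 5 warm-up sets
actual_1rm = 140.0                                       # kg (hypothetical athlete)
loads = warmup_pct / 100.0 * actual_1rm                  # kg lifted in the warm-ups
velocities = np.array([1.25, 1.02, 0.80, 0.55, 0.42])    # m/s, hypothetical

# Individualized regression: load as a linear function of velocity.
slope, intercept = np.polyfit(velocities, loads, 1)
v1rm = 0.24                                              # m/s (group-mean V1RM above)
predicted_1rm = slope * v1rm + intercept

print(f"predicted 1RM: {predicted_1rm:.1f} kg (actual {actual_1rm:.1f} kg)")
```

    The study's caveat lives in `v1rm`: because V1RM itself varied widely between trials (CV = 22.5%), the extrapolated 1RM inherits that variability.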

  2. Validity and reliability of simple measurement device to assess the velocity of the barbell during squats.

    Science.gov (United States)

    Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian

    2017-12-06

    The velocity of a barbell can provide important insights into the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices compared to 3D motion capture measurements during squatting. Nine participants were assessed while performing 2 × 5 traditional squats with a weight of 70% of the 1 repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data were recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer-based system (Myotest) and a 3D motion capture system (Vicon) as the gold standard. Correlations between the simple measurement devices and 3D motion capture for the mean and the maximal velocity of the barbell, as well as the time to maximal velocity, were calculated. The correlations during traditional squats were significant and very high (r = 0.932, 0.990); agreement was lower during ballistic squats, where measurement was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.

  3. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions.

    Science.gov (United States)

    Randell, Aaron D; Cronin, John B; Keogh, Justin Wl; Gill, Nicholas D; Pedersen, Murray C

    2011-12-01

    Randell, AD, Cronin, JB, Keogh, JWL, Gill, ND, and Pedersen, MC. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions. J Strength Cond Res 25(12): 3514-3518, 2011-Advancements in the monitoring of kinematic and kinetic variables during resistance training have resulted in the ability to continuously monitor performance and provide feedback during training. If equipment and software can provide reliable instantaneous feedback related to the variable of interest during training, it is thought that this may result in goal-oriented movement tasks that increase the likelihood of transference to on-field performance or at the very least improve the mechanical variable of interest. The purpose of this study was to determine the reliability of performance velocity for jump squats under feedback and nonfeedback conditions over 3 consecutive training sessions. Twenty subjects were randomly allocated to a feedback or nonfeedback group, and each group performed a total of 3 "jump squat" training sessions with the velocity of each repetition measured using a linear position transducer. There was less change in mean velocities between sessions 1-2 and sessions 2-3 (0.07 and 0.02 vs. 0.13 and -0.04 m·s⁻¹), less random variation (TE = 0.06 and 0.06 vs. 0.10 and 0.07 m·s⁻¹) and greater consistency (intraclass correlation coefficient = 0.83 and 0.87 vs. 0.53 and 0.74) between sessions for the feedback condition as compared to the nonfeedback condition. It was concluded that there is approximately a 50-50 probability that the provision of feedback was beneficial to the performance in the squat jump over multiple sessions. It is suggested that this has the potential for increasing transference to on-field performance or at the very least improving the mechanical variable of interest.

  4. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model (OVDM) for a car-following theory based on the full velocity difference model (FVDM). The linear stability condition of the new model is obtained by using linear stability theory. The unrealistically high deceleration does not appear in the OVDM. Numerical simulation of traffic dynamics shows that the new model can avoid the disadvantage of the negative velocity that occurs at a small sensitivity coefficient λ in the full velocity difference model by adjusting the coefficient of the optimal velocity difference, which shows that collisions can disappear in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow have been explored. → The starting and braking processes were carried out through simulation. → The effects of the optimal velocity difference can avoid the disadvantage of negative velocity.
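    The base model this Letter extends can be simulated in a few lines. The update below is a full-velocity-difference-type rule (not the Letter's OVDM) with a standard tanh optimal velocity function; all parameter values are illustrative.

```python
import math

# Follower acceleration = kappa*(V(headway) - v) + lam*(leader speed - v),
# with optimal velocity V(h) = 0.5*vmax*(tanh(h - hc) + tanh(hc)).
vmax, hc = 2.0, 4.0      # desired max speed (m/s) and safe-headway scale (m)
kappa, lam = 0.41, 0.5   # OV sensitivity and velocity-difference gain
dt = 0.1                 # Euler time step (s)

def V(h: float) -> float:
    return 0.5 * vmax * (math.tanh(h - hc) + math.tanh(hc))

# Scenario: the leader brakes to a stop; the follower starts 10 m behind
# at the same speed and must settle without colliding.
x_lead, v_lead = 10.0, 1.5
x_fol, v_fol = 0.0, 1.5
min_gap = x_lead - x_fol
for _ in range(600):                      # 60 s of simulation
    a = kappa * (V(x_lead - x_fol) - v_fol) + lam * (v_lead - v_fol)
    v_lead = max(0.0, v_lead - 0.3 * dt)  # leader decelerates, then stands still
    x_lead += v_lead * dt
    v_fol = max(0.0, v_fol + a * dt)      # velocities clamped non-negative (no reversing)
    x_fol += v_fol * dt
    min_gap = min(min_gap, x_lead - x_fol)

print(f"final gap: {x_lead - x_fol:.2f} m, minimum gap: {min_gap:.2f} m")
```

    The negative-velocity artifact the Letter discusses is hidden here by the clamp on `v_fol`; removing it and shrinking `lam` reproduces the pathology the OVDM is designed to avoid.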

  5. Cardiac magnetic resonance: is phonocardiogram gating reliable in velocity-encoded phase contrast imaging?

    International Nuclear Information System (INIS)

    Nassenstein, Kai; Schlosser, Thomas; Orzada, Stephan; Ladd, Mark E.; Maderwald, Stefan; Haering, Lars; Czylwik, Andreas; Jensen, Christoph; Bruder, Oliver

    2012-01-01

    To assess the diagnostic accuracy of phonocardiogram (PCG) gated velocity-encoded phase contrast magnetic resonance imaging (MRI). Flow quantification above the aortic valve was performed in 68 patients by acquiring a retrospectively PCG- and a retrospectively ECG-gated velocity-encoded GE-sequence at 1.5 T. Peak velocity (PV), average velocity (AV), forward volume (FV), reverse volume (RV), net forward volume (NFV), as well as the regurgitant fraction (RF) were assessed for both datasets, as well as for the PCG-gated datasets after compensation for the PCG trigger delay. PCG-gated image acquisition was feasible in 64 patients, ECG-gated in all patients. PCG-gated flow quantification overestimated PV (Δ 3.8 ± 14.1 cm/s; P = 0.037) and underestimated FV (Δ -4.9 ± 15.7 ml; P = 0.015) and NFV (Δ -4.5 ± 16.5 ml; P = 0.033) compared with ECG-gated imaging. After compensation for the PCG trigger delay, differences were only observed for PV (Δ 3.8 ± 14.1 cm/s; P = 0.037). Wide limits of agreement between PCG- and ECG-gated flow quantification were observed for all variables (PV: -23.9 to 31.4 cm/s; AV: -4.5 to 3.9 cm/s; FV: -35.6 to 25.9 ml; RV: -8.0 to 7.2 ml; NFV: -36.8 to 27.8 ml; RF: -10.4 to 10.2 %). The present study demonstrates that PCG gating in its current form is not reliable enough for flow quantification based on velocity-encoded phase contrast gradient echo (GE) sequences. (orig.)
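    The "limits of agreement" quoted above come from a Bland-Altman analysis of paired measurements; a minimal sketch with made-up PCG- vs. ECG-gated peak velocities:

```python
import numpy as np

# Hypothetical paired peak-velocity readings (cm/s) from two gating methods.
pcg = np.array([152.0, 161.0, 148.0, 170.0, 155.0, 166.0, 143.0, 159.0])
ecg = np.array([149.0, 158.0, 150.0, 163.0, 152.0, 160.0, 145.0, 154.0])

diff = pcg - ecg
bias = float(diff.mean())                   # mean difference (systematic offset)
sd = float(diff.std(ddof=1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f} cm/s, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) cm/s")
```

    Wide limits, as reported in the study, mean individual PCG-gated readings can differ substantially from ECG-gated ones even when the mean bias is modest.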

  6. Reliability of Phase Velocity Measurements of Flexural Acoustic Waves in the Human Tibia In-Vivo.

    Science.gov (United States)

    Vogl, Florian; Schnüriger, Karin; Gerber, Hans; Taylor, William R

    2016-01-01

    Axial-transmission acoustics has been shown to be a promising technique for measuring individual bone properties and detecting bone pathologies. With the ultimate goal being the in-vivo application of such systems, quantification of the key aspects governing the reliability is crucial to bring this method towards clinical use. This work presents a systematic reliability study quantifying the sources and magnitudes of variability in in-vivo measurements using axial-transmission acoustics. 42 healthy subjects were measured by an experienced operator twice per week over a four-month period, resulting in over 150,000 wave measurements. In a complementary study to assess the influence of different operators performing the measurements, 10 novice operators were trained, and each measured 5 subjects on a single occasion, using the same measurement protocol as in the first part of the study. The estimated standard error for the measurement protocol used to collect the study data was ∼ 17 m/s (∼ 4% of the grand mean) and the index of dependability, as a measure of reliability, was Φ = 0.81. It was shown that the method is suitable for multi-operator use and that the reliability can be improved efficiently by additional measurements with device repositioning, while additional measurements without repositioning cannot improve the reliability substantially. Phase velocity values were found to be significantly higher in males than in females (p < 10⁻⁵) and an intra-class correlation coefficient of r = 0.70 was found between the legs of each subject. The high reliability of this non-invasive approach and its intrinsic sensitivity to mechanical properties opens perspectives for the rapid and inexpensive clinical assessment of bone pathologies, as well as for monitoring programmes without any radiation exposure for the patient.

  7. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...

  8. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying existing software reliability models and proposes a state-of-the-art software reliability model relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.
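    For illustration, one classic model of the kind such surveys cover is the Goel-Okumoto non-homogeneous Poisson process (not necessarily the model the report proposes); a minimal sketch with hypothetical parameters:

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected failures by time t,
    m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def go_intensity(t, a, b):
    """Failure intensity (failures per unit time) at time t: a*b*exp(-b*t)."""
    return a * b * math.exp(-b * t)

# Illustrative parameters (not from the report): a = 120 total expected
# faults, b = 0.05 per-fault detection rate.
a, b = 120.0, 0.05
```

    The intensity decays as faults are found and fixed, which is the qualitative behaviour expected of reliability-growth models for critical software.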

  10. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

    This paper presents a methodology for the reliability evaluation of the Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both options are described, along with the methodology for calculating the reliability indices.

  11. Reliability and continuous regeneration model

    Directory of Open Access Journals (Sweden)

    Anna Pavlisková

    2006-06-01

    Full Text Available The failure-free operation of an object is very important in service, which motivates interest in determining the object's reliability and failure intensity. The reliability of an element is defined by probability theory. The element durability T is a continuous random variable with probability density f. The failure intensity λ(t) is a very important reliability characteristic of the element. Often it is an increasing function, which corresponds to ageing of the element. We had at our disposal data on belt conveyor failures recorded over a period of 90 months. The given data set behaves according to the normal distribution. By using mathematical analysis and mathematical statistics, we found the failure intensity function λ(t). The function λ(t) increases almost linearly.
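    The failure intensity used above is the hazard rate λ(t) = f(t) / (1 - F(t)); a sketch for a normally distributed lifetime (with illustrative parameters, not the conveyor data) shows the increasing, nearly linear behaviour the abstract describes:

```python
import math

def normal_pdf(t, mu, sigma):
    """Probability density of the normal distribution."""
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(t, mu, sigma):
    """Cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def failure_intensity(t, mu, sigma):
    """Hazard rate lambda(t) = f(t) / (1 - F(t)) for a normal lifetime."""
    return normal_pdf(t, mu, sigma) / (1.0 - normal_cdf(t, mu, sigma))

# Illustrative parameters: mean life 45 months, standard deviation 15 months.
rates = [failure_intensity(t, 45.0, 15.0) for t in (30.0, 45.0, 60.0, 75.0)]
```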

  12. Modelling low velocity impact induced damage in composite laminates

    Science.gov (United States)

    Shi, Yu; Soutis, Constantinos

    2017-12-01

    The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour-intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential, and recent effort by the composites research community is reviewed in this work. The FE-predicted damage initiation and propagation can be validated by Non Destructive Techniques (NDT), which gives confidence in the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve the performance of components and structures used in aircraft construction.

  13. A new car-following model considering velocity anticipation

    International Nuclear Information System (INIS)

    Jun-Fang, Tian; Bin, Jia; Xin-Gang, Li; Zi-You, Gao

    2010-01-01

    The full velocity difference model proposed by Jiang et al. [2001 Phys. Rev. E 64 017101] has been improved by introducing velocity anticipation, meaning that the follower estimates the future velocity of the leader. The stability condition of the new model is obtained by using linear stability theory. Theoretical results show that the stability region increases as the anticipation time interval increases. The mKdV equation is derived to describe the kink–antikink soliton wave and to obtain the coexisting stability line. The delay time of car motion and the kinematic wave speed at jam density are obtained in this model. Numerical simulations show that when the anticipation time interval is increased sufficiently, the new model can avoid accidents in urgent braking cases. Traffic jams can also be suppressed by considering the anticipation velocity. All results demonstrate that this model is an improvement on the full velocity difference model. (general)

  14. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

    This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, the reliability architecture of the double beam bridge crane system is proposed, and the reliability mathematical model is constructed.

  15. The Limit Deposit Velocity model, a new approach

    Directory of Open Access Journals (Sweden)

    Miedema Sape A.

    2015-12-01

    Full Text Available In slurry transport of settling slurries in Newtonian fluids, it is often stated that one should apply a line speed above a critical velocity, because below this critical velocity there is the danger of plugging the line. There are many definitions and names for this critical velocity. It is referred to as the velocity at which a bed starts sliding, or the velocity above which there is no stationary bed or sliding bed. Others use the velocity at which the hydraulic gradient is at a minimum, because of the minimum energy consumption. Most models in the literature are one-term, one-equation models, based on the idea that the critical velocity can be explained that way.
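    As an example of the one-term, one-equation models the abstract refers to, the Durand-type relation for the limit deposit velocity is often cited; a sketch (the factor F_L and all parameter values here are illustrative assumptions):

```python
import math

def durand_critical_velocity(F_L, D, S_s, g=9.81):
    """Durand-type limit deposit velocity: V_c = F_L * sqrt(2*g*D*(S_s - 1)).

    F_L: empirical factor (roughly 1.0-1.5 for sands), D: pipe diameter [m],
    S_s: relative density of the solids (e.g. 2.65 for sand in water).
    """
    return F_L * math.sqrt(2.0 * g * D * (S_s - 1.0))

# Illustrative case: F_L = 1.3, 0.15 m pipe, sand in water.
v_c = durand_critical_velocity(F_L=1.3, D=0.15, S_s=2.65)  # roughly 2.9 m/s
```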

  16. Development of vortex model with realistic axial velocity distribution

    International Nuclear Information System (INIS)

    Ito, Kei; Ezure, Toshiki; Ohshima, Hiroyuki

    2014-01-01

    A vortex is considered one of the significant phenomena that may cause gas entrainment (GE) and/or vortex cavitation in sodium-cooled fast reactors. In our past studies, the vortex was assumed to be approximated by the well-known Burgers vortex model. However, the Burgers vortex model makes the simple but unrealistic assumption that the axial velocity component is horizontally constant, whereas the real free-surface vortex has an axial velocity distribution with a large radial gradient near the vortex center. In this study, a new vortex model with a realistic axial velocity distribution is proposed. This model is derived from the steady axisymmetric Navier-Stokes equation, as is the Burgers vortex model, but it considers a realistic radial distribution of the axial velocity, which is defined to be zero at the vortex center and to approach zero asymptotically at infinity. As verification, the new vortex model is applied to the evaluation of a simple vortex experiment, and shows good agreement with the experimental data in terms of the circumferential velocity distribution and the free-surface shape. In addition, it is confirmed that the Burgers vortex model fails to calculate an accurate velocity distribution under the assumption of uniform axial velocity. However, the calculation accuracy of the Burgers vortex model can be brought close to that of the new vortex model by considering the effective axial velocity, calculated as the average value only in the vicinity of the vortex center. (author)
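    For reference, the circumferential velocity of the Burgers vortex mentioned above has a closed form; a sketch with illustrative circulation and core-radius values (the new model's axial-velocity modification is not reproduced here):

```python
import math

def burgers_vtheta(r, gamma, r0):
    """Circumferential velocity of a Burgers vortex:
    v_theta(r) = Gamma/(2*pi*r) * (1 - exp(-(r/r0)^2)),
    where r0 = sqrt(2*nu/a) is the viscous core radius."""
    if r == 0.0:
        return 0.0  # limit at the axis
    return gamma / (2.0 * math.pi * r) * (1.0 - math.exp(-(r / r0) ** 2))

# Illustrative values: circulation 0.05 m^2/s, core radius 0.02 m.
profile = [burgers_vtheta(r, 0.05, 0.02) for r in (0.0, 0.01, 0.02, 0.05, 0.2)]
```

    The profile rises from zero at the axis, peaks near the core radius, and decays like a free vortex farther out.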

  17. Handwriting Velocity Modeling by Artificial Neural Networks

    OpenAIRE

    Mohamed Aymen Slim; Afef Abdelkrim; Mohamed Benrejeb

    2014-01-01

    The handwriting is a physical demonstration of a complex cognitive process learnt by man since his childhood. People with disabilities or suffering from various neurological diseases face many difficulties resulting from problems located at the level of the muscle stimuli (EMG) or the signals from the brain (EEG), which arise at the stage of writing. The handwriting velocity of the same writer or of different writers varies according to different criteria: age, attitude, mood, wr...

  18. An Extended Optimal Velocity Model with Consideration of Honk Effect

    International Nuclear Information System (INIS)

    Tang Tieqiao; Li Chuanyao; Huang Haijun; Shang Huayan

    2010-01-01

    Based on the OV (optimal velocity) model, we present in this paper an extended OV model that takes the honk effect into consideration. The analytical and numerical results illustrate that the honk effect can improve the velocity and flow of uniform flow, but that the increments depend on the density. (interdisciplinary physics and related areas of science and technology)

  19. A classical model explaining the OPERA velocity paradox

    CERN Document Server

    Broda, Boguslaw

    2011-01-01

    In the context of the paradoxical results of the OPERA Collaboration, we propose a classical mechanics model yielding a statistically measured beam velocity higher than the velocity of the particles constituting the beam. The ingredients of our model necessary to obtain this curious result are a non-constant fraction function and the method of maximum-likelihood estimation.

  20. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20–25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter...... and uncertainties are quantified. Further, estimation of the annual failure probability for structural components, taking into account possible faults in electrical or mechanical systems, is considered. For a representative structural failure mode, a probabilistic model is developed that incorporates grid loss failures...

  1. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in the reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate of the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS.
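    A minimal sketch of the Weibull reliability and hazard functions, showing that the hazard is increasing (IFR) for shape parameter beta > 1; the parameter values are hypothetical, not EBS estimates:

```python
import math

def weibull_reliability(t, eta, beta):
    """Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, eta, beta):
    """Weibull hazard h(t) = (beta/eta)*(t/eta)^(beta-1); increasing when beta > 1."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# Hypothetical parameters: scale 1000 years, shape 2.5 (an IFR case).
eta, beta = 1000.0, 2.5
r300 = weibull_reliability(300.0, eta, beta)
```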

  3. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

    The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated...... and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed as well as other recent models....

  4. Uncertainty assessment of 3D instantaneous velocity model from stack velocities

    Science.gov (United States)

    Emanuele Maesano, Francesco; D'Ambrogi, Chiara

    2015-04-01

    3D modelling is a powerful tool that is experiencing increasing applications in data analysis and dissemination. At the same time, quantitative uncertainty evaluation is strongly requested in many aspects of the geological sciences and by the stakeholders. In many cases the starting point for 3D model building is the interpretation of seismic profiles, which provide indirect information about the geology of the subsurface in the time domain. The most problematic step in 3D model construction is the conversion of the horizons and faults interpreted in the time domain to the depth domain. In this step the dominant variable that could lead to significantly different results is the velocity. Knowledge of the subsurface velocities comes mainly from punctual data (sonic logs) that are often sparsely distributed across the areas covered by the seismic interpretation. The extrapolation of velocity information to widely extended horizons is thus a critical step in obtaining a 3D model in depth that can be used for predictive purposes. In the EU-funded GeoMol Project, the availability of a dense network of seismic lines (confidentially provided by ENI S.p.A.) in the Central Po Plain is paired with the presence of 136 well logs, but few of them have sonic logs, and in some portions of the area the wells are very widely spaced. The depth conversion of the 3D model in the time domain has been performed by testing different strategies for the use and the interpolation of velocity data. The final model has been obtained using a 4-layer-cake 3D instantaneous velocity model that considers both the initial velocity (v0) at every reference horizon and the gradient of velocity variation with depth (k). Using this method it is possible to honour both the geological constraint given by the geometries of the horizons and the geo-statistical approach to the interpolation of velocities and gradients.
Here we present an experiment based on the use of a set of pseudo-wells obtained from the
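    The v0-k instantaneous-velocity depth conversion described above integrates dz/dt = v0 + k*z over one-way time; a sketch with illustrative layer parameters (not GeoMol values):

```python
import math

def depth_from_twt(twt, v0, k):
    """Depth below the reference horizon for instantaneous velocity
    v(z) = v0 + k*z. Integrating dz/dt = v0 + k*z over one-way time
    t = twt/2 gives z = (v0/k) * (exp(k*t) - 1)."""
    t = twt / 2.0
    if k == 0.0:
        return v0 * t  # constant-velocity limit
    return (v0 / k) * (math.exp(k * t) - 1.0)

# Illustrative layer: v0 = 1800 m/s, k = 0.3 1/s, two-way time 2.0 s.
z = depth_from_twt(2.0, 1800.0, 0.3)  # about 2100 m
```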

  5. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling the system performance of space-based detection systems, it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons, such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust their fuel supplies. Typically, failure is divided into two categories: engineering mistakes and technology surprise. This document reports on a method of simulating space vehicle reliability in the DIORAMA framework.

  6. Validity and Reliability of a Wearable Inertial Sensor to Measure Velocity and Power in the Back Squat and Bench Press.

    Science.gov (United States)

    Orange, Samuel T; Metcalfe, James W; Liefeith, Andreas; Marshall, Phil; Madden, Leigh A; Fewster, Connor R; Vince, Rebecca V

    2018-05-08

    Orange, ST, Metcalfe, JW, Liefeith, A, Marshall, P, Madden, LA, Fewster, CR, and Vince, RV. Validity and reliability of a wearable inertial sensor to measure velocity and power in the back squat and bench press. J Strength Cond Res XX(X): 000-000, 2018-This study examined the validity and reliability of a wearable inertial sensor to measure velocity and power in the free-weight back squat and bench press. Twenty-nine youth rugby league players (18 ± 1 years) completed 2 test-retest sessions for the back squat followed by 2 test-retest sessions for the bench press. Repetitions were performed at 20, 40, 60, 80, and 90% of 1 repetition maximum (1RM) with mean velocity, peak velocity, mean power (MP), and peak power (PP) simultaneously measured using an inertial sensor (PUSH) and a linear position transducer (GymAware PowerTool). The PUSH demonstrated good validity (Pearson's product-moment correlation coefficient [r]) and reliability (intraclass correlation coefficient [ICC]) only for measurements of MP (r = 0.91; ICC = 0.83) and PP (r = 0.90; ICC = 0.80) at 20% of 1RM in the back squat. However, it may be more appropriate for athletes to jump off the ground with this load to optimize power output. Further research should therefore evaluate the usability of inertial sensors in the jump squat exercise. In the bench press, good validity and reliability were evident only for the measurement of MP at 40% of 1RM (r = 0.89; ICC = 0.83). The PUSH was unable to provide a valid and reliable estimate of any other criterion variable in either exercise. Practitioners must be cognizant of the measurement error when using inertial sensor technology to quantify velocity and power during resistance training, particularly with loads other than 20% of 1RM in the back squat and 40% of 1RM in the bench press.
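    The validity statistics quoted above (Pearson's r and, in related studies, the standard error of estimate) can be computed from paired device/criterion readings as follows; the readings here are hypothetical, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def see(x, y):
    """Standard error of estimate of y regressed on x."""
    n = len(x)
    r = pearson_r(x, y)
    my = sum(y) / n
    syy = sum((b - my) ** 2 for b in y)
    return math.sqrt(syy * (1.0 - r ** 2) / (n - 2))

# Hypothetical paired mean-power readings (criterion, device), in watts.
criterion = [310.0, 452.0, 540.0, 395.0, 610.0, 480.0]
device = [298.0, 460.0, 525.0, 410.0, 595.0, 470.0]
r = pearson_r(criterion, device)
```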

  7. A phenomenological retention tank model using settling velocity distributions.

    Science.gov (United States)

    Maruejouls, T; Vanrolleghem, P A; Pelletier, G; Lessard, P

    2012-12-15

    Many authors have observed the influence of the settling velocity distribution on the sedimentation process in retention tanks. However, the behaviour of pollutants in such tanks is not well characterized, especially with respect to their settling velocity distribution. This paper presents a phenomenological modelling study of the way in which the settling velocity distribution of particles in combined sewage changes between entering and leaving an off-line retention tank. The work starts from a previously published model (Lessard and Beck, 1991), which is first implemented in wastewater management modelling software and then tested with full-scale field data for the first time. Next, its performance is improved by integrating the particle settling velocity distribution and adding a description of the resuspension due to pumping during tank emptying. Finally, the potential of the improved model is demonstrated by comparing the results for one more rain event. Copyright © 2011 Elsevier Ltd. All rights reserved.
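    The settling-velocity-distribution idea can be sketched with an ideal-settler fraction model (a common simplification, not necessarily the paper's formulation): each velocity class is removed in proportion to its settling velocity relative to the surface overflow rate:

```python
def removal_fraction(v_s, overflow_rate):
    """Ideal-settler removal of one velocity class: min(1, v_s / (Q/A))."""
    return min(1.0, v_s / overflow_rate)

def tank_removal(classes, overflow_rate):
    """Total removal efficiency for a settling-velocity distribution.

    classes: list of (mass_fraction, settling_velocity) pairs, velocities
    in the same units as the overflow rate (e.g. m/h).
    """
    return sum(f * removal_fraction(v, overflow_rate) for f, v in classes)

# Hypothetical distribution (mass fractions sum to 1), overflow rate 2 m/h.
dist = [(0.3, 0.5), (0.4, 1.5), (0.2, 3.0), (0.1, 8.0)]
eta = tank_removal(dist, 2.0)  # 0.3*0.25 + 0.4*0.75 + 0.2*1 + 0.1*1 = 0.675
```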

  8. Validity and Reliability of the PUSH Wearable Device to Measure Movement Velocity During the Back Squat Exercise.

    Science.gov (United States)

    Balsalobre-Fernández, Carlos; Kuzdub, Matt; Poveda-Ortiz, Pedro; Campo-Vecino, Juan Del

    2016-07-01

    Balsalobre-Fernández, C, Kuzdub, M, Poveda-Ortiz, P, and Campo-Vecino, Jd. Validity and reliability of the PUSH wearable device to measure movement velocity during the back squat exercise. J Strength Cond Res 30(7): 1968-1974, 2016-The purpose of this study was to analyze the validity and reliability of a wearable device to measure movement velocity during the back squat exercise. To do this, 10 recreationally active healthy men (age = 23.4 ± 5.2 years; back squat 1 repetition maximum [1RM] = 83 ± 8.2 kg) performed 3 repetitions of the back squat exercise with 5 different loads ranging from 25 to 85% 1RM on a Smith Machine. Movement velocity for each of the total 150 repetitions was simultaneously recorded using the T-Force linear transducer (LT) and the PUSH wearable band. Results showed a high correlation between the LT and the wearable device mean (r = 0.85; standard error of estimate [SEE] = 0.08 m·s⁻¹) and peak velocity (r = 0.91, SEE = 0.1 m·s⁻¹). Moreover, there was a very high agreement between these 2 devices for the measurement of mean (intraclass correlation coefficient [ICC] = 0.907) and peak velocity (ICC = 0.944), although a systematic bias between devices was observed (PUSH peak velocity being -0.07 ± 0.1 m·s⁻¹ lower, p ≤ 0.05). When measuring the 3 repetitions with each load, both devices displayed almost equal reliability (Test-retest reliability: LT [r = 0.98], PUSH [r = 0.956]; ICC: LT [ICC = 0.989], PUSH [ICC = 0.981]; coefficient of variation [CV]: LT [CV = 4.2%], PUSH [CV = 5.0%]). Finally, individual load-velocity relationships measured with both the LT (R² = 0.96) and the PUSH wearable device (R² = 0.94) showed similar, very high coefficients of determination. In conclusion, these results support the use of an affordable wearable device to track velocity during back squat training. Wearable devices, such as the one in this study, could have valuable practical applications for strength and conditioning coaches.

  9. Flood Water Crossing: Laboratory Model Investigations for Water Velocity Reductions

    Directory of Open Access Journals (Sweden)

    Kasnon N.

    2014-01-01

    Full Text Available The occurrence of floods may negatively affect road traffic, making traffic mobilization difficult and damaging vehicles, which may then become stuck and trigger traffic problems. High water-flow velocities occur on the road surface when no objects are present that are capable of diffusing the water velocity. The shape, orientation and size of the object to be placed beside the road as a diffuser are important for effective attenuation of the water flow. To investigate the water flow, a laboratory experiment was set up and models were constructed to study the flow velocity reduction. The velocity of the water before and after passing through the diffuser objects was investigated. This paper focuses on laboratory experiments to determine, using sensors, the flow velocity of the water before and after passing through the two best diffuser objects chosen from a previous flow-pattern experiment.

  10. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  11. Assessment of isometric muscle strength and rate of torque development with hand-held dynamometry: Test-retest reliability and relationship with gait velocity after stroke.

    Science.gov (United States)

    Mentiplay, Benjamin F; Tan, Dawn; Williams, Gavin; Adair, Brooke; Pua, Yong-Hao; Bower, Kelly J; Clark, Ross A

    2018-04-27

    Isometric rate of torque development examines how quickly force can be exerted and may resemble everyday task demands more closely than isometric strength. Rate of torque development may provide further insight into the relationship between muscle function and gait following stroke. Aims of this study were to examine the test-retest reliability of hand-held dynamometry to measure isometric rate of torque development following stroke, to examine associations between strength and rate of torque development, and to compare the relationships of strength and rate of torque development to gait velocity. Sixty-three post-stroke adults participated (60 years, 34 male). Gait velocity was assessed using the fast-paced 10 m walk test. Isometric strength and rate of torque development of seven lower-limb muscle groups were assessed with hand-held dynamometry. Intraclass correlation coefficients were calculated for reliability and Spearman's rho correlations were calculated for associations. Regression analyses using partial F-tests were used to compare strength and rate of torque development in their relationship with gait velocity. Good to excellent reliability was shown for strength and rate of torque development (0.82-0.97). Strong associations were found between strength and rate of torque development (0.71-0.94). Despite high correlations between strength and rate of torque development, rate of torque development failed to provide significant value to regression models that already contained strength. Assessment of isometric rate of torque development with hand-held dynamometry is reliable following stroke, however isometric strength demonstrated greater relationships with gait velocity. Further research should examine the relationship between dynamic measures of muscle strength/torque and gait after stroke. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model that extracts a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. This paper then proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
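    The reduced-set idea can be sketched for a simple line-fitting case: hypotheses are sampled only from the preprocessed (assumed reliable) subset, while verification still runs over all correspondences. This is an illustrative reconstruction, not the paper's implementation:

```python
import random

def fit_line(p, q):
    """Line through two points as (a, b, c) with a*x + b*y + c = 0, normalized."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    norm = (a * a + b * b) ** 0.5
    return a / norm, b / norm, c / norm

def ransac_line(points, sample_pool, iters=200, tol=0.1, seed=1):
    """RANSAC where hypotheses are drawn from sample_pool (the reduced set)
    and verified against all points."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        p, q = rng.sample(sample_pool, 2)
        if p == q:
            continue  # degenerate pair (possible with duplicate points)
        a, b, c = fit_line(p, q)
        inliers = sum(1 for (x, y) in points if abs(a * x + b * y + c) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b, c), inliers
    return best, best_inliers

# Synthetic data: 30 points on y = 2x + 1 plus 10 gross outliers.
inlier_pts = [(x * 0.1, 2 * (x * 0.1) + 1) for x in range(30)]
outliers = [(x * 0.37 % 3, (x * 1.7) % 5 - 2) for x in range(10)]
points = inlier_pts + outliers
model, n_in = ransac_line(points, sample_pool=inlier_pts)
```

    Restricting the sampling pool shrinks the number of iterations needed to hit an all-inlier sample, which is where the reported speedup comes from.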

  13. Car Deceleration Considering Its Own Velocity in Cellular Automata Model

    International Nuclear Information System (INIS)

    Li Keping

    2006-01-01

    In this paper, we propose a new cellular automaton model based on the NaSch traffic model. In our method, when a car has a high velocity and the gap between it and its leading car is not large enough, its velocity is reduced. The aim is to give the following car a buffer space in which to decrease its velocity at the next time step, thereby avoiding excessively strong deceleration. The simulation results show that, using our model, car deceleration is realistic and closer to field measurements than that of the NaSch model.
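    A sketch of a NaSch-style cellular automaton with an extra velocity-dependent safety rule, as an illustration of the kind of modification described; the specific rule and parameter values here are assumptions, not the paper's:

```python
import random

def nasch_step(pos, vel, L, vmax=5, p=0.3, rng=None):
    """One parallel update of a NaSch-style CA on a ring of L cells,
    with an extra buffer rule for fast cars (illustrative assumption)."""
    rng = rng or random.Random(0)
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_vel = vel[:]
    for idx, i in enumerate(order):
        j = order[(idx + 1) % n]                 # leading car on the ring
        gap = (pos[j] - pos[i] - 1) % L
        v = min(vel[i] + 1, vmax)                # 1. accelerate
        v = min(v, gap)                          # 2. brake to the gap
        if v >= 3 and gap < 2 * v:               # extra: keep a buffer when fast
            v = max(v - 1, 0)
        if rng.random() < p:                     # 3. random slowdown
            v = max(v - 1, 0)
        new_vel[i] = v
    for i in range(n):                           # 4. move all cars in parallel
        pos[i] = (pos[i] + new_vel[i]) % L
    return pos, new_vel

# Ring road of 100 cells with 20 cars, initially stopped and evenly spaced.
L = 100
pos = list(range(0, L, 5))
vel = [0] * len(pos)
rng = random.Random(42)
for _ in range(100):
    pos, vel = nasch_step(pos, vel, L, rng=rng)
```

    Because each car never moves beyond its current gap, the update is collision-free by construction; the buffer rule only spreads the required braking over more time steps.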

  14. Three dimensional reflection velocity analysis based on velocity model scan; Model scan ni yoru sanjigen hanshaha sokudo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Minegishi, M; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1996-05-01

    Introduced herein is a reflection-wave velocity analysis method using model scanning, a method for velocity estimation across a section that is useful in the construction of a velocity structure model in seismic exploration. In this method, a stripping-type analysis is carried out, wherein optimum structure parameters are determined for reflection waves one after another, beginning with those from shallower parts. During this process, the velocity structures previously determined for the shallower parts are fixed, and only the lowest of the layers being analyzed at the time is subjected to model scanning. To account for the bending of ray paths at each velocity boundary in the shallower parts, ray-path tracing is used to calculate the reflection travel-time curve for the reflection surface being analyzed. Among the reflection-wave travel-time curves calculated using various velocity structure models, the one that best fits the actual reflection travel times is detected. The degree of matching between the calculated and actual results is measured by the semblance of the data in a time window centered on the calculated reflection-wave travel time. The structure parameters are estimated from the conditions for maximum semblance. 1 ref., 4 figs.
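    The semblance measure used for matching can be sketched as the ratio of stacked energy to N times the total energy within the time window; the synthetic traces below are illustrative:

```python
def semblance(traces, t0, win):
    """Semblance over the window [t0, t0+win): stacked energy divided by
    N times the total energy; equals 1.0 for perfectly coherent traces."""
    N = len(traces)
    num = den = 0.0
    for t in range(t0, t0 + win):
        s = sum(tr[t] for tr in traces)
        num += s * s
        den += sum(tr[t] * tr[t] for tr in traces)
    return num / (N * den) if den else 0.0

# Three identical traces (perfect coherence) vs. three misaligned spikes.
coherent = [[0.0, 1.0, 0.5, -0.3, 0.0]] * 3
misaligned = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
s_good = semblance(coherent, 1, 3)
s_bad = semblance(misaligned, 0, 3)
```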

  15. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  16. Welding wire velocity modelling and control using an optical sensor

    DEFF Research Database (Denmark)

    Nielsen, Kirsten M.; Pedersen, Tom S.

    2007-01-01

    In this paper a method for controlling the velocity of a welding wire at the tip of the handle is described. The method is an alternative to the traditional welding apparatus control system, where the wire velocity is controlled internally in the welding machine, implying poor disturbance reduction.... To obtain the tip velocity, a dynamic model of the wire/liner system is developed and verified. In the wire/liner system it turned out that backlash and reflections are influential factors, and an idea for handling the backlash has been suggested. In addition, an optical sensor for measuring the wire velocity...... at the tip has been constructed. The optical sensor may be used, but some problems with focusing cause noise in the control loop, demanding a more precise mechanical wire-feed system or an optical sensor with better focusing characteristics....

  17. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives are sustained by high-technology systems; computer systems are typical examples, and we enjoy modern life by using many of them. Much more importantly, we have to maintain such systems without failure, yet we cannot predict when they will fail or how to fix them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are therefore of great interest and importance to the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability of systems using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on this theme from different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  18. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  19. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model
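The Bayes-factor idea in the time-independent case can be illustrated with a minimal sketch. Here the computational model's predicted failure probability plays the role of a point null hypothesis, tested against a uniform alternative on observed failure counts; the binomial setting and the uniform prior are illustrative assumptions, not the paper's formulation:

```python
from math import comb

def bayes_factor_binomial(k, n, p0):
    """Bayes factor comparing H0: the failure probability equals the
    model's prediction p0, against H1: the failure probability is
    uniform on (0, 1). Data: k observed failures in n trials."""
    likelihood_h0 = comb(n, k) * p0**k * (1 - p0)**(n - k)
    # Under a uniform prior, the marginal likelihood (integral of the
    # binomial likelihood over p in (0, 1)) equals 1/(n+1) for every k.
    likelihood_h1 = 1.0 / (n + 1)
    return likelihood_h0 / likelihood_h1
```

A Bayes factor above 1 favors accepting the computational model's prediction; below 1 it favors rejection. The paper's extension treats the Bayes factor itself as a random variable when statistical uncertainty is present.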

  20. A generic model for the shallow velocity structure of volcanoes

    Science.gov (United States)

    Lesage, Philippe; Heap, Michael J.; Kushnir, Alexandra

    2018-05-01

    Knowledge of the structure of volcanoes and of the physical properties of volcanic rocks is of paramount importance to the understanding of volcanic processes and the interpretation of monitoring observations. However, the determination of these structures by geophysical methods suffers from limitations, including a lack of resolution and poor precision. Laboratory experiments provide complementary information on the physical properties of volcanic materials and their behavior as a function of several parameters, including pressure and temperature. Nevertheless, combined studies and comparisons of field-based geophysical and laboratory-based physical approaches remain scant in the literature. Here, we present a meta-analysis which compares 44 seismic velocity models of the shallow structure of eleven volcanoes, laboratory velocity measurements on about one hundred rock samples from five volcanoes, and seismic well-logs from deep boreholes at two volcanoes. The comparison of these measurements confirms the strong variability of P- and S-wave velocities, which reflects the diversity of volcanic materials. The values obtained from laboratory experiments are systematically larger than those provided by seismic models. This discrepancy mainly results from scaling problems due to the difference between the sampled volumes. The averages of the seismic models are characterized by very low velocities at the surface and a strong velocity increase at shallow depth. By adjusting analytical functions to these averages, we define a generic model that can describe the variations in P- and S-wave velocities in the first 500 m of andesitic and basaltic volcanoes. This model can be used for volcanoes where no structural information is available. The model can also account for site time correction in hypocenter determination as well as for site and path effects that are commonly observed in volcanic structures.
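A generic velocity-depth profile of the kind described (very low velocity at the surface, rapid shallow increase, defined over the first 500 m) might look like the sketch below. The power-law form and every coefficient here are placeholders for illustration only, not the fitted analytical functions published by the authors:

```python
def vp_generic(z, v0=0.5, v500=2.5, alpha=0.5):
    """Illustrative P-wave velocity (km/s) at depth z (m) for the shallow
    structure of a volcano: very low velocity at the surface and a strong
    increase over the first few hundred meters, flat below 500 m.
    Form and coefficients are assumptions, not the published model."""
    z = max(z, 0.0)
    return v0 + (v500 - v0) * (min(z, 500.0) / 500.0) ** alpha
```

Such a profile could serve as a starting model for hypocenter location or site-effect corrections when no local structural information exists.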

  1. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with it, is as important as the data itself.

  2. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes in CANDU nuclear reactors, like any other components manufactured from Zr alloys, are prone to delayed hydride cracking (DHC). It is therefore important to be able to predict the cracking velocity over the component lifetime from parameters that are easy to measure, such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature for calculating the DHC velocity were chosen and combined; with the appropriate variables, this allowed a comparison with experimental results from samples of Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  3. Shallow and deep crustal velocity models of Northeast Tibet

    Science.gov (United States)

    Karplus, M.; Klemperer, S. L.; Mechie, J.; Shi, D.; Zhao, W.; Brown, L. D.; Wu, Z.

    2009-12-01

    The INDEPTH IV seismic profile in Northeast Tibet is the highest-resolution wide-angle refraction experiment imaging the Qaidam Basin, North Kunlun Thrusts (NKT), Kunlun Mountains, North and South Kunlun Faults (NKF, SKF), and Songpan-Ganzi terrane (SG). First-arrival refraction modeling using ray tracing and least-squares inversion has yielded a crustal P-wave velocity model, best resolved for the top 20 km. Ray tracing of deeper reflections shows considerable differences between the Qaidam Basin and the SG, in agreement with previous studies of those areas. The Moho ranges from about 52 km beneath the Qaidam Basin to 63 km, with a slight northward dip, beneath the SG. The 11-km change must occur between the SKF and the southern edge of the Qaidam Basin, just north of the NKT, allowing the possibility of a Moho step across the NKT. The Qaidam Basin velocity-versus-depth profile is more similar to the global average than the SG profile, which bears resemblance to previously determined “Tibet-type” velocity profiles, with mid- to lower-crustal velocities of 6.5 to 7.0 km/s appearing at greater depths. The highest-resolution portion of the profile (100-m instrument spacing) features two distinct, apparently south-dipping low-velocity zones reaching about 2-3 km depth that we infer to be the locations of the NKF and SKF. A strong reflector at 35 km, located entirely south of the SKF and truncated just south of it, may be cut by a steeply south-dipping SKF. Elevated velocities at depth beneath the surface location of the NKF may indicate the south-dipping NKF meets the SKF between depths of 5 and 10 km. Undulating regions of high and low velocity extending about 1-2 km in depth near the southern border of the Qaidam Basin likely represent north-verging thrust sheets of the NKT.

  4. A nonlinear inversion for the velocity background and perturbation models

    KAUST Repository

    Wu, Zedong

    2015-08-19

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of standard full waveform inversion (FWI) by inverting for the single-scattered wavefield obtained using an image. However, current RWI methods usually neglect diving waves, an important source of information for extracting the long-wavelength components of the velocity model. Thus, we propose a new optimization problem by breaking the velocity model into background and perturbation components directly in the wave equation. In this case, the perturbed model is no longer the single-scattering model but includes all scattering. We optimize both components simultaneously, and thus the objective function is nonlinear with respect to both the background and the perturbation. The newly introduced perturbation can naturally absorb the non-smooth part of the background update. Application to the Marmousi model with frequencies starting at 5 Hz shows that this method can converge to the accurate velocity starting from a linearly increasing initial velocity. Application to the SEG2014 data set demonstrates the versatility of the approach.

  5. An investigation of FLUENT's fan model including the effect of swirl velocity

    International Nuclear Information System (INIS)

    El Saheli, A.; Barron, R.M.

    2002-01-01

    The purpose of this paper is to investigate and discuss the reliability of simplified models for the computational fluid dynamics (CFD) simulation of air flow through automotive engine cooling fans. One of the most widely used simplified fan models in industry is a variant of the actuator disk model which is available in most commercial CFD software, such as FLUENT. In this model, the fan is replaced by an infinitely thin surface on which pressure rise across the fan is specified as a polynomial function of normal velocity or flow rate. The advantages of this model are that it is simple, it accurately predicts the pressure rise through the fan and the axial velocity, and it is robust

  6. Modeling and Velocity Tracking Control for Tape Drive System ...

    African Journals Online (AJOL)

    Modeling and Velocity Tracking Control for Tape Drive System. ... Journal of Applied Sciences and Environmental Management ... The result of the study revealed that 7.07, 8 and 10 of koln values met the design goal and also resulted in optimal control performance with the following characteristics 7.31%,7.71% , 9.41% ...

  7. A new settling velocity model to describe secondary sedimentation.

    Science.gov (United States)

    Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-12-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
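For orientation, the classical Vesilind expression for the hindered-settling regime, one of the three regimes the new model combines, can be written in a few lines. The parameter values below are typical textbook figures, not the values calibrated in this study, and the transient and compression terms of the paper's model are not reproduced here:

```python
import math

def vesilind_settling_velocity(X, v0=7.0, r_h=0.4):
    """Hindered settling velocity (m/h) of activated sludge at solids
    concentration X (kg/m^3), using the classical Vesilind form
    v = v0 * exp(-r_h * X). v0 and r_h are illustrative defaults; the
    paper calibrates a richer model (hindered + transient + compression)
    against settling-column data."""
    return v0 * math.exp(-r_h * X)
```

In a CFD settler model, a function of this kind supplies the local settling flux as a function of the computed sludge concentration field.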

  8. A new settling velocity model to describe secondary sedimentation

    DEFF Research Database (Denmark)

    Ramin, Elham; Wágner, Dorottya Sarolta; Yde, Lars

    2014-01-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids...... distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges...... associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM...

  9. A model relating Eulerian spatial and temporal velocity correlations

    Science.gov (United States)

    Cholemari, Murali R.; Arakeri, Jaywant H.

    2006-03-01

    In this paper we propose a model to relate Eulerian spatial and temporal velocity autocorrelations in homogeneous, isotropic and stationary turbulence. We model the decorrelation as the eddies of various scales becoming decorrelated. This enables us to connect the spatial and temporal separations required for a certain decorrelation through the ‘eddy scale’. Given either the spatial or the temporal velocity correlation, we obtain the ‘eddy scale’ and the rate at which the decorrelation proceeds. This leads to a spatial separation from the temporal correlation, and a temporal separation from the spatial correlation, at any given value of the correlation, thereby relating the two correlations. We test the model using experimental data from a stationary axisymmetric turbulent flow with homogeneity along the axis.

  10. A new approach for modeling dry deposition velocity of particles

    Science.gov (United States)

    Giardina, M.; Buffa, P.

    2018-05-01

    The dry deposition process is recognized as an important pathway among the various removal processes of pollutants in the atmosphere. Several models reported in the literature predict the dry deposition velocity of particles of different diameters, but many of them cannot represent dry deposition phenomena for several categories of pollutants and deposition surfaces. Moreover, their application is valid only under specific conditions, and only if the data in a given application meet all of the assumptions of the data used to define the model. In this paper a new dry deposition velocity model based on an electrical-analogy scheme is proposed to overcome these issues. The dry deposition velocity is evaluated by assuming that the resistances that affect the particle flux in the quasi-laminar sub-layer can be combined to take into account local features of the mutual influence of inertial impaction and turbulent processes. Comparisons with experimental data from the literature indicate that the proposed model captures, with good agreement, the main dry deposition phenomena for the examined environmental conditions and deposition surfaces. The proposed approach can easily be implemented within atmospheric dispersion modeling codes, efficiently addressing different deposition surfaces and several classes of particulate pollution.
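The electrical-analogy idea can be illustrated with the standard series-resistance formula for particle dry deposition. The paper's contribution lies in how the quasi-laminar sub-layer resistance is constructed to couple inertial impaction and turbulence, which is not reproduced here; the combination below is the classical starting point:

```python
def deposition_velocity(r_a, r_b, v_s):
    """Classical resistance (electrical-analogy) formula for the dry
    deposition velocity of particles (m/s):

        v_d = v_s + 1 / (r_a + r_b + r_a * r_b * v_s)

    r_a: aerodynamic resistance (s/m), r_b: quasi-laminar sub-layer
    resistance (s/m), v_s: gravitational settling velocity (m/s).
    Resistances add in series, with settling acting in parallel."""
    return v_s + 1.0 / (r_a + r_b + r_a * r_b * v_s)
```

Because settling enters additively, the deposition velocity can never fall below the settling velocity, matching the physical expectation for large particles.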

  11. Modeling delamination of FRP laminates under low velocity impact

    Science.gov (United States)

    Jiang, Z.; Wen, H. M.; Ren, S. L.

    2017-09-01

    Fiber reinforced plastic (FRP) laminates have been increasingly used in various engineering fields such as aeronautics, astronautics, transportation and naval architecture, and their impact response and failure are a major concern in the academic community. A new numerical model is suggested for fiber reinforced plastic composites. The model considers FRP laminates to be constituted of unidirectional laminated plates with adhesive layers. A modified adhesive-layer damage model that considers strain-rate effects is incorporated into the ABAQUS/EXPLICIT finite element program through the user-defined material subroutine VUMAT. The delamination predicted by the present model is in good agreement with the experimental results for low-velocity impact.

  12. Velocity profiles in idealized model of human respiratory tract

    Science.gov (United States)

    Elcner, J.; Jedelsky, J.; Lizal, F.; Jicha, M.

    2013-04-01

    This article deals with a numerical simulation focused on velocity profiles in an idealized model of the human upper airways during steady inspiration. Three regimes of breathing were investigated: resting condition, deep breathing and light activity, which correspond to the regimes most commonly used for experiments and simulations. The calculation was validated against experimental data obtained by Phase Doppler Anemometry performed on a model with the same geometry. This comparison was made at multiple points forming one cross-section in the trachea near the first bifurcation of the bronchial tree. The development of the velocity profile in the trachea during steady inspiration is discussed with respect to common phenomena formed in the trachea and to future research on the transport of aerosol particles in the human respiratory tract.

  13. Velocity profiles in idealized model of human respiratory tract

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available This article deals with a numerical simulation focused on velocity profiles in an idealized model of the human upper airways during steady inspiration. Three regimes of breathing were investigated: resting condition, deep breathing and light activity, which correspond to the regimes most commonly used for experiments and simulations. The calculation was validated against experimental data obtained by Phase Doppler Anemometry performed on a model with the same geometry. This comparison was made at multiple points forming one cross-section in the trachea near the first bifurcation of the bronchial tree. The development of the velocity profile in the trachea during steady inspiration is discussed with respect to common phenomena formed in the trachea and to future research on the transport of aerosol particles in the human respiratory tract.

  14. Estimation of spatial uncertainties of tomographic velocity models

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, M.; Du, Z.; Querendez, E. [SINTEF Petroleum Research, Trondheim (Norway)

    2012-12-15

    This research project aims to evaluate the possibility of assessing the spatial uncertainties in tomographic velocity model building in a quantitative way. The project is intended to serve as a test of whether accurate and specific uncertainty estimates (e.g., in meters) can be obtained. The project is based on Monte Carlo-type perturbations of the velocity model as obtained from the tomographic inversion guided by diagonal and off-diagonal elements of the resolution and the covariance matrices. The implementation and testing of this method was based on the SINTEF in-house stereotomography code, using small synthetic 2D data sets. To test the method the calculation and output of the covariance and resolution matrices was implemented, and software to perform the error estimation was created. The work included the creation of 2D synthetic data sets, the implementation and testing of the software to conduct the tests (output of the covariance and resolution matrices which are not implicitly provided by stereotomography), application to synthetic data sets, analysis of the test results, and creating the final report. The results show that this method can be used to estimate the spatial errors in tomographic images quantitatively. The results agree with the known errors for our synthetic models. However, the method can only be applied to structures in the model, where the change of seismic velocity is larger than the predicted error of the velocity parameter amplitudes. In addition, the analysis is dependent on the tomographic method, e.g., regularization and parameterization. The conducted tests were very successful and we believe that this method could be developed further to be applied to third party tomographic images.

  15. Modeling human reliability analysis using MIDAS

    International Nuclear Information System (INIS)

    Boring, R. L.

    2006-01-01

    This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory is teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the path for cost-effective, timely, and valid simulated control room operators for studying current and next generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. This paper also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across static, dynamic, and initiator conditions common to control room scenarios. This paper concludes with a discussion of the relationship of the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)

  16. Small velocity and finite temperature variations in kinetic relaxation models

    KAUST Repository

    Markowich, Peter; Jüngel, Ansgar; Aoki, Kazuo

    2010-01-01

    A small Knudsen number analysis of a kinetic equation in the diffusive scaling is performed. The collision kernel is of BGK type with a general local Gibbs state. Assuming that the flow velocity is of the order of the Knudsen number, a Hilbert expansion yields a macroscopic model with finite temperature variations, whose complexity lies in between the hydrodynamic and the energy-transport equations. Its mathematical structure is explored and macroscopic models for specific examples of the global Gibbs state are presented. © American Institute of Mathematical Sciences.

  17. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
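A minimal, self-contained EM sketch for a single feature (radial velocity alone) shows why adding velocities helps separate clusters that overlap on the sky: two components that coincide spatially can still be split by their velocity distributions. This hand-rolled 1-D version is illustrative only and is not the authors' mixture-model code:

```python
import math
import random

def em_two_gaussians(x, iters=100):
    """Minimal EM for a two-component 1-D Gaussian mixture, applied here
    to radial velocities. Returns (weights, means, variances)."""
    mu = [min(x), max(x)]      # initialize the means at the extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for xi in x:
            p = [w[k] * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk + 1e-9
    return w, mu, var
```

The full spatial-plus-velocity model is the multivariate analogue: each component gets a mean and covariance over (x, y, v_r) instead of v_r alone.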

  18. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  19. Measured and modeled dry deposition velocities over the ESCOMPTE area

    Science.gov (United States)

    Michou, M.; Laville, P.; Serça, D.; Fotiadi, A.; Bouchou, P.; Peuch, V.-H.

    2005-03-01

    Measurements of the dry deposition velocity of ozone have been made by the eddy correlation method during ESCOMPTE (Etude sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions). The strong local variability of natural ecosystems was sampled over several weeks in May, June and July 2001 at four sites with varying surface characteristics. The sites included a maize field, a Mediterranean forest, a Mediterranean shrub-land, and an almost bare soil. Measurements of nitrogen oxide deposition fluxes by the relaxed eddy correlation method have also been carried out at the same bare-soil site. An evaluation of the deposition velocities computed by the surface module of the multi-scale Chemistry and Transport Model MOCAGE is presented. This module relies on a resistance approach, with a detailed treatment of the stomatal contribution to the surface resistance. Simulations at the finest model horizontal resolution (around 10 km) are compared to observations. While the seasonal variations are in agreement with the literature, comparisons between raw model outputs and observations at the different measurement sites and for the specific observing periods give mixed results. As the simulated meteorology at the scale of 10 km nicely captures the observed situations, the default set of surface characteristics (averaged at the resolution of a grid cell) appears to be one of the main reasons for the discrepancies found with observations. For each case, sensitivity studies have been performed in order to see the impact of adjusting the surface characteristics to the observed ones, when available. Generally, a correct agreement with the observations of deposition velocities is obtained. This advocates for a sub-grid scale representation of surface characteristics for the simulation of dry deposition velocities over such a complex area. Two other aspects appear in the discussion. Firstly, the strong influence of the soil water content to the plant

  20. Human reliability data collection and modelling

    International Nuclear Information System (INIS)

    1991-09-01

    The main purpose of this document is to review and outline the current state of the art of Human Reliability Assessment (HRA), used for the quantitative assessment of the safe and economical operation of nuclear power plants. Another objective is to consider Human Performance Indicators (HPI), which can alert plant managers and regulators to departures from states of normal and acceptable operation. These two objectives are met in the three sections of this report. The first objective has been divided into two areas based on the location of the human actions being considered: the modelling and data collection associated with control room actions are addressed in chapter 1, while actions outside the control room (including maintenance) are addressed in chapter 2. Both chapters 1 and 2 present a brief outline of the current status of HRA in these areas and the major outstanding issues. Chapter 3 discusses HPI; such performance indicators can signal, at various levels, changes in factors which influence human performance. The final section of this report consists of papers presented by the participants of the Technical Committee Meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  1. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for evaluating safety-system technical specifications was developed. The method for Surveillance Test Interval (S.T.I.) evaluation is essentially an optimization of the S.T.I. of the most important periodically tested components of a system. For Allowed Outage Time (A.O.T.) calculations, the method uses time-dependent system reliability models (a computer code called FRANTIC III). A new approximation for computing system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.), and the extra computing cost is negligible. A.C.I. was joined to FRANTIC III to replace A.E.R. in future applications. The case-study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in evaluating the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author) [es
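The difference between the rare-event approximation and an independent-minimal-cut-sets approximation can be sketched for a two-of-three system. The function below illustrates the general idea only; it is not the FRANTIC III implementation, and the names are mine:

```python
from math import prod

def mcs_unavailability(cut_sets, q):
    """System unavailability from minimal cut sets.

    cut_sets: list of tuples of component names; q: dict mapping each
    component name to its unavailability. Returns the rare-event
    approximation (sum of cut-set probabilities, which can exceed 1)
    and the independent-cut-sets approximation (bounded by 1)."""
    q_mcs = [prod(q[c] for c in cs) for cs in cut_sets]
    rare_event = sum(q_mcs)
    independent = 1.0 - prod(1.0 - qk for qk in q_mcs)
    return rare_event, independent
```

For a two-of-three logic with identical components of unavailability 0.1, the cut sets are the three component pairs; the independent approximation gives a slightly smaller (and never diverging) value than the rare-event sum.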

  2. Hydrodynamic Equations for Flocking Models without Velocity Alignment

    Science.gov (United States)

    Peruani, Fernando

    2017-10-01

    The spontaneous emergence of collective motion patterns is usually associated with the presence of a velocity alignment mechanism that mediates the interactions among the moving individuals. Despite this widespread view, it has been shown recently that several flocking behaviors can emerge in the absence of velocity alignment, as a result of short-range, position-based, attractive forces that act inside a vision cone. Here, we derive the hydrodynamic equations of a microscopic position-based flocking model, reviewing and extending previously reported results. In particular, we show that three distinct macroscopic collective behaviors can be observed: i) the coarsening of aggregates with no orientational order, ii) the emergence of static, elongated nematic bands, and iii) the formation of moving, locally polar structures, which we call worms. The derived hydrodynamic equations indicate that active particles interacting via position-based interactions belong to a distinct class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems.

  3. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
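
    As a minimal illustration of assessing reliability with open source tools, the sketch below computes Cronbach's alpha, one classical internal-consistency estimate, for hypothetical item scores. Both the data and the pure-Python implementation are illustrative assumptions, not material from the paper:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 4-item test answered by 5 respondents.
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
    [3, 4, 2, 5, 5],
]
print(round(cronbach_alpha(items), 3))  # prints 0.914
```

Low alpha would flag the noise problem the abstract describes: unreliable scores attenuate correlations with other measures.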

  4. Model-assisted measurements of suspension-feeding flow velocities.

    Science.gov (United States)

    Du Clos, Kevin T; Jones, Ian T; Carrier, Tyler J; Brady, Damian C; Jumars, Peter A

    2017-06-01

    Benthic marine suspension feeders provide an important link between benthic and pelagic ecosystems. The strength of this link is determined by suspension-feeding rates. Many studies have measured suspension-feeding rates using indirect clearance-rate methods, which are based on the depletion of suspended particles. Direct methods that measure the flow of water itself are less common, but they can be more broadly applied because, unlike indirect methods, direct methods are not affected by properties of the cleared particles. We present pumping rates for three species of suspension feeders, the clams Mya arenaria and Mercenaria mercenaria and the tunicate Ciona intestinalis, measured using a direct method based on particle image velocimetry (PIV). Past uses of PIV in suspension-feeding studies have been limited by strong laser reflections that interfere with velocity measurements proximate to the siphon. We used a new approach based on fitting PIV-based velocity profile measurements to theoretical profiles from computational fluid dynamic (CFD) models, which allowed us to calculate inhalant siphon Reynolds numbers (Re). We used these inhalant Re and measurements of siphon diameters to calculate exhalant Re, pumping rates, and mean inlet and outlet velocities. For the three species studied, inhalant Re ranged from 8 to 520, and exhalant Re ranged from 15 to 1073. Volumetric pumping rates ranged from 1.7 to 7.4 l h⁻¹ for M. arenaria, 0.3 to 3.6 l h⁻¹ for M. mercenaria and 0.07 to 0.97 l h⁻¹ for C. intestinalis. We also used CFD models based on measured pumping rates to calculate capture regions, which reveal the spatial extent of pumped water. Combining PIV data with CFD models may be a valuable approach for future suspension-feeding studies. © 2017. Published by The Company of Biologists Ltd.
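
    The link between inhalant Reynolds number and pumping rate can be sketched from the definitions alone: Re = U d/ν implies Q = U πd²/4 = Re ν π d/4. The siphon diameter and kinematic viscosity below are hypothetical stand-ins, not measurements from the study:

```python
import math

def pumping_rate_lph(re, d, nu=1.05e-6):
    """Volumetric pumping rate in litres per hour from a siphon Reynolds number.
    re: Reynolds number (Re = U*d/nu for mean velocity U)
    d:  siphon diameter in metres
    nu: kinematic viscosity in m^2/s (hypothetical seawater value)."""
    q_m3s = re * nu * math.pi * d / 4.0   # Q = Re * nu * pi * d / 4
    return q_m3s * 1000.0 * 3600.0        # m^3/s -> litres per hour

# Hypothetical clam siphon: d = 8 mm, inhalant Re = 300.
print(round(pumping_rate_lph(300, 0.008), 2))  # about 7.13 l/h
```

A rate of roughly 7 l h⁻¹ sits within the M. arenaria range reported in the abstract, which is a useful sanity check on the scaling.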

  5. Mean velocity and moments of turbulent velocity fluctuations in the wake of a model ship propulsor

    Energy Technology Data Exchange (ETDEWEB)

    Pego, J.P. [Universitaet Erlangen-Nuernberg, LSTM, Erlangen, Lehrstuhl fuer Stroemungsmechanik, Erlangen (Germany); Faculdade de Engenharia da Universidade do Porto, Porto (Portugal); Lienhart, H.; Durst, F. [Universitaet Erlangen-Nuernberg, LSTM, Erlangen, Lehrstuhl fuer Stroemungsmechanik, Erlangen (Germany)

    2007-08-15

    Pod drives are modern outboard ship propulsion systems with a motor encapsulated in a watertight pod, whose shaft is connected directly to one or two propellers. The whole unit hangs from the stern of the ship and rotates azimuthally, thus providing thrust and steering without the need of a rudder. Force/momentum and phase-resolved laser Doppler anemometry (LDA) measurements were performed on in-line co-rotating and contra-rotating propeller pod drive models. The measurements made it possible to characterize these ship propulsion systems in terms of their hydrodynamic characteristics. The torque delivered to the propellers and the thrust of the system were measured for different operating conditions of the propellers. These measurements led to the hydrodynamic optimization of the ship propulsion system. The parameters studied revealed the influence of the distance between propeller planes, the ratio of propeller rotation frequencies and the type of propellers (co- or contra-rotating) on the overall efficiency of the system. Two of the ship propulsion systems under consideration were chosen, based on their hydrodynamic characteristics, for a detailed study of the swirling wake flow by means of laser Doppler anemometry. A two-component laser Doppler system was employed for the velocity measurements. A light barrier mounted on the axle of the rear propeller motor supplied a TTL signal to mark the beginning of each period, thus providing angle information for the LDA measurements. Measurements were conducted at four axial positions in the slipstream of the pod drive models. The results show that the wake of contra-rotating propellers is more homogeneous than that of co-rotating ones. In agreement with the results of the force/momentum measurements and with hypotheses put forward in the literature (see e.g. Poehls in Entwurfsgrundlagen fuer Schraubenpropeller, 1984; Schneekluth in Hydromechanik zum Schiffsentwurf, 1988; Breslin and Andersen in Hydrodynamics of ship propellers, 1996

  6. Mean velocity and moments of turbulent velocity fluctuations in the wake of a model ship propulsor

    Science.gov (United States)

    Pêgo, J. P.; Lienhart, H.; Durst, F.

    2007-08-01

    Pod drives are modern outboard ship propulsion systems with a motor encapsulated in a watertight pod, whose shaft is connected directly to one or two propellers. The whole unit hangs from the stern of the ship and rotates azimuthally, thus providing thrust and steering without the need of a rudder. Force/momentum and phase-resolved laser Doppler anemometry (LDA) measurements were performed on in-line co-rotating and contra-rotating propeller pod drive models. The measurements made it possible to characterize these ship propulsion systems in terms of their hydrodynamic characteristics. The torque delivered to the propellers and the thrust of the system were measured for different operating conditions of the propellers. These measurements led to the hydrodynamic optimization of the ship propulsion system. The parameters studied revealed the influence of the distance between propeller planes, the ratio of propeller rotation frequencies and the type of propellers (co- or contra-rotating) on the overall efficiency of the system. Two of the ship propulsion systems under consideration were chosen, based on their hydrodynamic characteristics, for a detailed study of the swirling wake flow by means of laser Doppler anemometry. A two-component laser Doppler system was employed for the velocity measurements. A light barrier mounted on the axle of the rear propeller motor supplied a TTL signal to mark the beginning of each period, thus providing angle information for the LDA measurements. Measurements were conducted at four axial positions in the slipstream of the pod drive models. The results show that the wake of contra-rotating propellers is more homogeneous than that of co-rotating ones. In agreement with the results of the force/momentum measurements and with hypotheses put forward in the literature (see e.g. Poehls in Entwurfsgrundlagen für Schraubenpropeller, 1984; Schneekluth in Hydromechanik zum Schiffsentwurf, 1988; Breslin and Andersen in Hydrodynamics of ship propellers, 1996

  7. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems posed by the variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. A convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  8. Stabilization and Riesz basis property for an overhead crane model with feedback in velocity and rotating velocity

    Directory of Open Access Journals (Sweden)

    Toure K. Augustin

    2014-06-01

    This paper studies a variant of an overhead crane model with a control force in velocity and rotating velocity on the platform. Under certain conditions, we obtain the well-posedness and the strong stabilization of the closed-loop system. We then analyze the spectrum of the system. Using a method due to Shkalikov, we prove the existence of a sequence of generalized eigenvectors of the system, which forms a Riesz basis for the state energy Hilbert space.

  9. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize preventive maintenance of passive structures such as pipes and supports, based on risk. In particular, the reliability performance of components needs to be determined; this is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, and involves two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  10. Reliability Model of Power Transformer with ONAN Cooling

    OpenAIRE

    M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh

    2010-01-01

    The reliability of a power system is considerably influenced by its equipment. Power transformers are among the most critical and expensive pieces of equipment in a power system, and their proper function is vital for substations and utilities. Therefore, the reliability model of a power transformer is very important in the risk assessment of engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...

  11. Traveling waves in an optimal velocity model of freeway traffic

    Science.gov (United States)

    Berg, Peter; Woods, Andrew

    2001-03-01

    Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper we examine the transition from a linearly stable stream of cars of one headway into a linearly stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating traveling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave, which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linearly stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustment are classified in a phase diagram depending on the upstream and downstream headways and the response time of the model. The results have profound consequences for autonomous cruise control systems: for an autocade of both identical and different vehicles, the control system itself may trigger the formation of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137].

  12. RadVel: The Radial Velocity Modeling Toolkit

    Science.gov (United States)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-04-01

    RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) time series. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
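
    A hedged sketch of the kind of Keplerian signal such a package fits, reduced here to the circular-orbit special case (a hand-rolled illustration with hypothetical parameters, not the RadVel API; eccentric orbits need the full Keplerian solution):

```python
import math

def rv_circular(t, period, k, t0=0.0, gamma=0.0):
    """Radial velocity (m/s) of a star hosting a planet on a circular orbit:
    semi-amplitude k, orbital period, reference epoch t0, systemic velocity gamma."""
    return gamma + k * math.sin(2.0 * math.pi * (t - t0) / period)

# Hypothetical hot Jupiter: P = 3 days, K = 55 m/s.
print(rv_circular(0.75, 3.0, 55.0))  # quarter phase, so the full +K amplitude
```

Fitting then amounts to adjusting (period, k, t0, gamma) so the model matches the observed RV time series, with MCMC exploring the posterior over those parameters.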

  13. A detonation model of high/low velocity detonation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Shaoming; Li, Chenfang; Ma, Yunhua; Cui, Junmin [Xian Modern Chemistry Research Institute, Xian, 710065 (China)

    2007-02-15

    A new detonation model that can simulate both high and low velocity detonations is established using the least action principle. The least action principle is valid for mechanics and thermodynamics associated with a detonation process. Therefore, the least action principle is valid in detonation science. In this model, thermodynamic equilibrium state is taken as the known final point of the detonation process. Thermodynamic potentials are analogous to mechanical ones, and the Lagrangian function in the detonation process is L=T-V. Under certain assumptions, the variation calculus of the Lagrangian function gives two solutions: the first one is a constant temperature solution, and the second one is the solution of an ordinary differential equation. A special solution of the ordinary differential equation is given. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  14. Shallow velocity model in the area of Pozzo Pitarrone, Mt. Etna, from single station, array methods and borehole data.

    OpenAIRE

    Zuccarello, L.; Paratore, M.; Ferrari, F.; Messina, A.; Branca, S.; Contrafatto, D.; Galluzzo, D.; Rapisarda, S.; La Rocca, M.

    2016-01-01

    Seismic noise recorded by a temporary array installed around Pozzo Pitarrone, on the NE flank of Mt. Etna, has been analysed with several techniques. The single-station HVSR method and the SPAC array method have been applied to stationary seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear wave velocity model of the area, reliable down to a depth of about 130 m. A comparison of this model with the stratigraphic information available for the investigate...

  15. Uncertainty estimation of the velocity model for stations of the TrigNet GPS network

    Science.gov (United States)

    Hackl, M.; Malservisi, R.; Hugentobler, U.

    2010-12-01

    Satellite-based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) plus noise. It has been shown that error models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series, and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectrum analyses and maximum likelihood estimates is computationally expensive and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that these scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal, in agreement with previous studies, which allows for a reliable estimation of the velocity error. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Small differences may originate from the non-normal distribution of the noise.
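
    The quantity at stake here is the formal velocity uncertainty of a linear fit, which is exactly what white-noise-only error models report and what colored-noise analyses then inflate. The sketch below fits a synthetic daily position series (hypothetical numbers, white noise only) and returns the OLS rate and its formal error:

```python
import math, random

def slope_and_white_noise_sigma(t, y):
    """OLS velocity estimate and its formal (white-noise) standard error.
    With colored noise (flicker, random walk) the true rate uncertainty is
    larger than this formal value, which is the point the abstract makes."""
    n = len(t)
    tm = sum(t) / n
    ym = sum(y) / n
    sxx = sum((ti - tm) ** 2 for ti in t)
    b = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / sxx
    a = ym - b * tm
    rss = sum((yi - (a + b * ti)) ** 2 for ti, yi in zip(t, y))
    sigma_b = math.sqrt(rss / (n - 2) / sxx)
    return b, sigma_b

random.seed(1)
t = [i / 365.25 for i in range(1000)]                  # ~2.7 years, daily epochs
y = [10.0 * ti + random.gauss(0.0, 2.0) for ti in t]   # 10 mm/yr + 2 mm white noise
v, sv = slope_and_white_noise_sigma(t, y)
print(v, sv)
```

With pure white noise the recovered rate is close to 10 mm/yr and the formal error is honest; on real GPS series the same formula can understate the uncertainty severalfold.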

  16. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in its load transmission path, system-component relationship, system functioning manner, and time-dependent system configuration. The present paper first defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, and the treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by means of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
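
    For contrast, the traditional series-system formula that the paper argues is inadequate for gear sets can be sketched in a few lines (an illustration with hypothetical component reliabilities, not the paper's time-domain model):

```python
def series_reliability(component_reliabilities):
    """Traditional series-system model: the system survives only if every
    component survives, so (assuming independence) reliabilities multiply.
    A time-domain series system such as a gear set violates this picture
    because its components are loaded in turn, one configuration at a time."""
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

print(series_reliability([0.99] * 20))  # a 20-link chain: about 0.818
```

The chain example shows why the classical model is pessimistic for a gear set: a tooth pair is only "in series" during the fraction of each revolution in which it actually carries load.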

  17. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

    The reliability of ultrasonic inspections has been studied in, e.g., the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependency of flaw detection probability on various factors and to evaluate the performance of inspection equipment, including sizing accuracy. The information from experiments is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results for intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied both to the flaw detection frequency data of all inspection teams and to the flaw sizing data of one participating team. (author)
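
    One common family of such models relates probability of detection (POD) to flaw size. The sketch below uses a log-logistic POD curve with hypothetical parameters; it illustrates the general shape of such models, not the specific curves fitted to the PISC III data:

```python
import math

def pod(a, mu, sigma):
    """Log-logistic probability-of-detection model, a common choice for
    hit/miss NDT data: POD rises with flaw size a (mm).
    mu is ln(a50), the log of the flaw size detected 50% of the time;
    sigma controls how steeply POD rises with size."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

a50 = 2.0                                       # hypothetical 50%-detection size, mm
print(round(pod(2.0, math.log(a50), 0.5), 2))   # 0.5 at a = a50, by construction
print(pod(8.0, math.log(a50), 0.5) > 0.9)       # large flaws are almost always found
```

Fitting mu and sigma to hit/miss inspection records is what turns qualification-exercise data into a quantitative reliability statement for an inspection procedure.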

  18. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  19. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

    The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced, which allows one to calculate the possibilistic reliability of a system in terms of the possibilistic reliabilities of its components.
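
    A toy sketch of the min/max calculus that possibility theory typically induces for the two simplest structures (illustrative assumptions only; the paper's possibilistic function is defined via the binary structure function and is more general):

```python
def possibilistic_series(component_poss):
    """A series structure works only if all components work; under the
    min-combination used in possibility theory, the system value is the
    minimum of the component possibilistic reliabilities."""
    return min(component_poss)

def possibilistic_parallel(component_poss):
    """Dually, a parallel (redundant) structure takes the maximum."""
    return max(component_poss)

print(possibilistic_series([0.9, 0.6, 0.8]))    # 0.6: the weakest link dominates
print(possibilistic_parallel([0.9, 0.6, 0.8]))  # 0.9: the best branch dominates
```

Note the contrast with probabilistic reliability, where a series system multiplies component values rather than taking their minimum.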

  20. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982: Development of a Conservative Model Validation Approach for Reliable Analysis. ...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account...

  1. Shear wave crustal velocity model of the Western Bohemian Massif from Love wave phase velocity dispersion

    Czech Academy of Sciences Publication Activity Database

    Kolínský, Petr; Málek, Jiří; Brokešová, J.

    2011-01-01

    Roč. 15, č. 1 (2011), s. 81-104 ISSN 1383-4649 R&D Projects: GA AV ČR IAA300460602; GA AV ČR IAA300460705; GA ČR(CZ) GA205/06/1780 Institutional research plan: CEZ:AV0Z30460519 Keywords : love waves * phase velocity dispersion * frequency-time analysis Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 1.326, year: 2011 www.springerlink.com/content/w3149233l60111t1/

  2. Developing Fast and Reliable Flood Models

    DEFF Research Database (Denmark)

    Thrysøe, Cecilie; Toke, Jens; Borup, Morten

    2016-01-01

    ... A surrogate model is set up for a case study area in Aarhus, Denmark, to replace a MIKE FLOOD model. The drainage surrogates are able to reproduce the MIKE URBAN results for a set of rain inputs. The coupled drainage-surface surrogate model lacks details in the surface description, which reduces its overall ... accuracy. The model shows no instability, hence larger time steps can be applied, which reduces the computational time by more than a factor of 1400. In conclusion, surrogate models show great potential for usage in urban water modelling. ...

  3. Test-retest reliability of knee extensor rate of velocity and power development in older adults using the isotonic mode on a Biodex System 3 dynamometer.

    Science.gov (United States)

    Van Driessche, Stijn; Van Roie, Evelien; Vanwanseele, Benedicte; Delecluse, Christophe

    2018-01-01

    Isotonic testing and measures of rapid power production are emerging as functionally relevant test methods for the detection of muscle aging. Our objective was to assess the reliability of rapid velocity and power measures in older adults using the isotonic mode of an isokinetic dynamometer. Sixty-three participants (aged 65 to 82 years) underwent a test-retest protocol with a one-week interval. Isotonic knee extension tests were performed at four different loads: 0%, 25%, 50% and 75% of maximal isometric strength. Peak velocity (pV) and power (pP) were determined as the highest values of the velocity and power curves. Rate of velocity development (RVD) and rate of power development (RPD) were calculated as the linear slopes of the velocity-time and power-time curves. Relative and absolute measures of test-retest reliability were analyzed using intraclass correlation coefficients (ICC), standard error of measurement (SEM) and Bland-Altman analyses. Overall, reliability was high for pV, pP, RVD and RPD at 0%, 25% and 50% load (ICC: .85 - .98, SEM: 3% - 10%). A trend for increased reliability at lower loads seemed apparent. The tests at 75% load led to range-of-motion failure and should be avoided. In addition, the results demonstrated that caution is advised when interpreting early-phase results (first 50 ms). To conclude, our results support the use of the isotonic mode of an isokinetic dynamometer for testing rapid power and velocity characteristics in older adults, which is of high clinical relevance given that these muscle characteristics are emerging as primary outcomes for preventive and rehabilitative interventions in aging research.

  4. Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants

    Science.gov (United States)

    Bernhoff, Niclas

    2018-05-01

    An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs extra collision invariants, so-called spurious collision invariants, can appear in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and recently for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by assigning each molecule an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.

  5. Results of verification and investigation of wind velocity field forecast. Verification of wind velocity field forecast model

    International Nuclear Information System (INIS)

    Ogawa, Takeshi; Kayano, Mitsunaga; Kikuchi, Hideo; Abe, Takeo; Saga, Kyoji

    1995-01-01

    At the Environmental Radioactivity Research Institute, verification and investigation of the wind velocity field forecast model 'EXPRESS-1' have been carried out since 1991. In fiscal year 1994, as part of the general analysis, the validity of the weather observation data, the local features of the wind field, and the suitability of the monitoring station positions were investigated. The EXPRESS model, which had so far used a 500 m mesh, was improved to a 250 m mesh, the resulting increase in forecast accuracy was examined, and a comparison with another wind velocity field forecast model, 'SPEEDI', was carried out. The results show that the correlation with other measurement points is high at some locations and low at others, and that the forecast accuracy of the wind velocity field can be improved by excluding the data of points with low correlation or by installing simplified observation stations and incorporating their data. The outline of the investigation, the general analysis of the weather observation data, and the improvements of the wind velocity field forecast model and its forecast accuracy are reported. (K.I.)

  6. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of fully independent nodes (that can move randomly around the area of deployment), making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, this paper proposes evaluating the network reliability (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation using a propagation-based link reliability model, rather than a binary model, with nodes following a known failure distribution. The method is illustrated with an application, and some representative results are also presented.
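
    The proposed evaluation can be sketched as a Monte Carlo estimate of two-terminal reliability in which link success probability decays with distance instead of switching off at the transmission range. The decay law and node layout below are hypothetical stand-ins, not the paper's propagation model:

```python
import math, random

def link_up(d, rng, r0=100.0, alpha=2.0):
    """Propagation-based link model: success probability falls off smoothly
    with distance d (a hypothetical Gaussian decay, standing in for a
    fading/path-loss model), rather than a binary in-range rule."""
    return rng.random() < math.exp(-alpha * (d / r0) ** 2)

def two_terminal_reliability(nodes, s, t, trials=2000, seed=7):
    """Monte Carlo estimate of 2-terminal reliability: the fraction of
    sampled topologies in which source s can reach terminal t."""
    rng = random.Random(seed)
    n = len(nodes)
    hits = 0
    for _ in range(trials):
        adj = {i: [] for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if link_up(math.dist(nodes[i], nodes[j]), rng):
                    adj[i].append(j)
                    adj[j].append(i)
        stack, seen = [s], {s}
        while stack:                      # depth-first connectivity check
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / trials

nodes = [(0, 0), (60, 10), (120, 0), (60, 80), (180, 20)]  # positions in metres
print(two_terminal_reliability(nodes, 0, 4))
```

Replacing `link_up` with a hard distance threshold recovers the binary model the abstract criticizes, which makes the two approaches easy to compare on the same node layout.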

  7. Experiment research on cognition reliability model of nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Bingquan; Fang Xiang

    1999-01-01

    The objective of this paper is to improve the reliability of operators' actions at real nuclear power plants through simulation research on the cognition reliability of nuclear power plant operators. The research method uses a nuclear power plant simulator as the research platform and, taking as a reference the current international model of human cognition reliability based on the three-parameter Weibull distribution, develops a cognition reliability model for Chinese nuclear power plant operators based on the two-parameter Weibull distribution. Using this two-parameter Weibull distribution model, experiments on the cognition reliability of nuclear power plant operators have been carried out. The results are consistent with those obtained in other countries such as the USA and Hungary, which benefits the safe operation of nuclear power plants.
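
    A two-parameter Weibull cognition-reliability model of the kind described can be sketched directly. The parameters below are hypothetical illustrations, not values fitted in the study:

```python
import math

def nonresponse_probability(t, alpha, beta):
    """Two-parameter Weibull model of crew non-response: the probability that
    the operators have NOT yet diagnosed the event t seconds after the cue.
    alpha is the scale (characteristic response time), beta the shape."""
    return math.exp(-((t / alpha) ** beta))

# Hypothetical parameters, e.g. as might be fitted from simulator trials.
alpha, beta = 120.0, 1.5
print(round(nonresponse_probability(120.0, alpha, beta), 3))  # e^-1, about 0.368
print(nonresponse_probability(600.0, alpha, beta) < 0.01)     # almost all crews respond
```

The three-parameter variant mentioned in the abstract adds a location (dead-time) parameter, shifting the curve to account for a minimum time before any response is possible.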

  8. Comparison of CME radial velocities from a flux rope model and an ice cream cone model

    Science.gov (United States)

    Kim, T.; Moon, Y.; Na, H.

    2011-12-01

    Coronal mass ejections (CMEs) on the Sun are the largest energy-release processes in the solar system and act as the primary driver of geomagnetic storms and other space weather phenomena at the Earth, so it is very important to infer their directions, velocities and three-dimensional structures. In this study, we use two different models to infer the radial velocities of halo CMEs since 2008: (1) an ice cream cone model by Xue et al. (2005) using SOHO/LASCO data, and (2) a flux rope model by Thernisien et al. (2009) using STEREO/SECCHI data. In addition, we use another flux rope model in which the separation angle of the flux rope is zero, which is morphologically similar to the ice cream cone model. The comparison shows that the CME radial velocities from the models correlate very well with each other (R > 0.9). We will extend this comparison to partial CMEs observed by STEREO and SOHO.

  9. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density...

  10. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  11. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

    The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented, utilizing success-state and continuous-time, discrete-state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, subsystem redundant configurations, test and repair intervals, and monitoring by reactor operators, as well as for the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)

  12. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c...

  13. Numerical modeling of probe velocity effects for electromagnetic NDE methods

    Science.gov (United States)

    Shin, Y. K.; Lord, W.

    The present discussion of magnetic flux leakage (MFL) inspection introduces the behavior of motion-induced currents. The results obtained indicate that velocity effects exist even at low probe speeds for magnetic materials, compelling the inclusion of velocity effects in MFL testing of oil pipelines, where the excitation level and pig speed are much higher than those used in the present work. Probe velocity effect studies should influence probe design, defining suitable probe speed limits and establishing training guidelines for defect-characterization schemes.

  14. Models for Battery Reliability and Lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G. H.; Neubauer, J.; Pesaran, A.

    2014-03-01

    Models describing battery degradation physics are needed to more accurately understand how battery usage and next-generation battery designs can be optimized for performance and lifetime. Such lifetime models may also reduce the cost of battery aging experiments and shorten the time required to validate battery lifetime. Models for chemical degradation and mechanical stress are reviewed. Experimental analysis of aging data from a commercial iron-phosphate lithium-ion (Li-ion) cell elucidates the relative importance of several mechanical stress-induced degradation mechanisms.

  15. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Full Text Available Reliability analysis for industrial maintenance is increasingly demanded by industry worldwide. Modern manufacturing facilities are equipped with data acquisition and monitoring systems that generate large volumes of data, and these data can be used to inform future decisions affecting the state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or unreliable. In this context, to analyze the reliability of an oil pump, this work examines and treats the incomplete, incorrect or aberrant data used in the reliability modelling of the pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.
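
    A regression-based replacement of incomplete records, in the spirit of the methodology above, can be sketched as follows. This is a plain least-squares imputation under invented names; the paper's actual procedure may differ:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def impute_missing(records):
    """records: list of (covariate, observation) pairs, where a missing
    observation is None. Fit the regression on the complete pairs and
    fill each gap with the predicted value."""
    complete = [(x, y) for x, y in records if y is not None]
    a, b = linear_fit([x for x, _ in complete], [y for _, y in complete])
    return [(x, y if y is not None else a + b * x) for x, y in records]
```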

  16. Velocity measurement of model vertical axis wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, D.A.; McWilliam, M. [Waterloo Univ., ON (Canada). Dept. of Mechanical Engineering

    2006-07-01

    An increasingly popular solution to future energy demand is wind energy. Wind turbine designs can be grouped according to their axis of rotation, either horizontal or vertical. Horizontal axis wind turbines have higher power output in a good wind regime than vertical axis turbines and are used in most commercial-class designs. Vertical axis Savonius-based wind turbine designs are still widely used in some applications because of their simple design and low wind speed performance. Many design variables must be considered in order to optimize the power output of a typical wind turbine design in a given wind regime. Using particle image velocimetry, a study of the air flow around five different model vertical axis wind turbines was conducted in a closed-loop wind tunnel. The turbine models included a standard Savonius design with two overlapping semi-circular blades and two variations of this design, a deep-blade and a shallow-blade design. The study also evaluated alternate designs that attempt to increase the performance of the standard design by allowing compound blade curvature. Measurements were collected at a constant phase angle and also at random rotor orientations. Evaluation of the flow patterns and measured velocities revealed consistent and stable flow patterns at any given phase angle. Large-scale flow structures, such as vortices shed from blade surfaces, are evident in all designs. An important performance parameter was considered to be the ability of the flow to remain attached to the forward blade and be redirected and reoriented to the following blade. 6 refs., 18 figs.

  17. MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore

    2006-05-01

    This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.

  18. Validity and reliability of a novel iPhone app for the measurement of barbell velocity and 1RM on the bench-press exercise

    OpenAIRE

    Balsalobre Fernández, Carlos; Marchante Domingo, David; Muñoz López, Mario; Jiménez Sáiz, Sergio Lorenzo

    2018-01-01

    The purpose of this study was to analyse the validity and reliability of a novel iPhone app (named: PowerLift) for the measurement of mean velocity on the bench-press exercise. Additionally, the accuracy of the estimation of the 1-Repetition maximum (1RM) using the load-velocity relationship was tested. To do this, 10 powerlifters (Mean (SD): age = 26.5 ± 6.5 years; bench press 1RM · kg-1 = 1.34 ± 0.25) completed an incremental test on the bench-press exercise with 5 different loads (75-100% ...

  19. Plant and control system reliability and risk model

    International Nuclear Information System (INIS)

    Niemelae, I.M.

    1986-01-01

    A new reliability modelling technique for control systems and plants is demonstrated. It is based on modified Boolean algebra and has been automated in an efficient computer code called RELVEC. The code is useful for obtaining an overall view of the reliability parameters or for an in-depth reliability analysis, which is essential in risk analysis, where the model must be capable of answering specific questions such as: 'What is the probability of this temperature limiter producing a false alarm?' or 'What is the probability of the air pressure in this subsystem dropping below the lower limit?'. (orig./DG)

  20. An extended continuum model considering optimal velocity change with memory and numerical tests

    Science.gov (United States)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed that takes into account optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study how optimal velocity changes with memory affect velocity, density and energy consumption. The numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently; both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow. Furthermore, the results demonstrate that the memory effect avoids the disadvantage of relying on historical information alone, increases the stability of traffic flow on the road, and thereby reduces the cars' energy consumption.
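
    To make the mechanism concrete, here is a small car-following sketch in the same spirit: a discrete optimal-velocity model on a ring road with an added memory term k·[V(h(t)) − V(h(t−τ))]. The tanh optimal-velocity function and all parameter values are textbook-style assumptions, not the paper's continuum model:

```python
import math

def V(h):
    # A common tanh-form optimal velocity function (assumed here).
    return math.tanh(h - 2.0) + math.tanh(2.0)

def simulate(n=10, L=25.0, a=3.0, k=0.3, tau_steps=10,
             dt=0.05, steps=3000, kick=0.1):
    """Euler-integrate the OV model with memory on a ring of length L;
    returns the max velocity deviation from equilibrium at the end."""
    x = [i * L / n for i in range(n)]
    x[0] += kick                                  # small perturbation
    v = [V(L / n)] * n
    hist = [[V(L / n)] * n for _ in range(tau_steps)]  # past V(h) values
    for _ in range(steps):
        h = [(x[(i + 1) % n] - x[i]) % L for i in range(n)]
        Vh = [V(hi) for hi in h]
        Vold = hist.pop(0)
        hist.append(Vh)
        # dv/dt = a*[V(h) - v] + k*[V(h(t)) - V(h(t - tau))]
        v = [vi + dt * (a * (Vhi - vi) + k * (Vhi - Voi))
             for vi, Vhi, Voi in zip(v, Vh, Vold)]
        x = [(xi + dt * vi) % L for xi, vi in zip(x, v)]
    return max(abs(vi - V(L / n)) for vi in v)
```

With sensitivity a well above the linear stability threshold, a small perturbation decays back to uniform flow, mirroring the jam suppression reported above.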

  1. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, environmental magnetic fields and measurement noise make MMM data dispersive and difficult to evaluate quantitatively. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments, and X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of Kvs is investigated, which shows that Kvs obeys a Gaussian distribution; Kvs is therefore a suitable MMM parameter on which to establish a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and that the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
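
    For Gaussian stress and strength, the stress-strength interference calculation underlying such a reliability model reduces to a closed form. This sketch shows the generic textbook formula, not the paper's improved variant:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference with Gaussian variables:
    R = P(strength > stress) = Phi(beta),
    beta = (mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2)."""
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
```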

  2. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context, and in many contemporary human reliability analysis (HRA) methods this is not sufficiently taken into account. The aim of this article is to attempt an integration of probabilistic and psychological approaches to human reliability. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity and, secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first, with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated into the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis

  3. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

    The demand for complex software systems has increased more rapidly than the ability to design, implement, test and maintain them, and the reliability of software systems has become a major concern for modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries; besides the costs involved, they set back the projects. This paper discusses the need for systematic approaches to measuring and assuring software reliability, which consumes a major share of project development resources. It covers reliability models with a focus on 'reliability growth', including data collection on reliability, statistical estimation and prediction, metrics, and attributes of product architecture, design, software development and the operational environment. Besides its use for operational decisions such as deployment, software reliability assessment can guide software architecture, development, testing, and verification and validation. (author)

  4. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available The output of a wind turbine generator at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters; hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed, and the reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available; from a reliability point of view, to obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
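
    The "cubic mean cube root of wind speed" has a closed form for Weibull winds, since E[v³] = c³·Γ(1 + 3/k) for shape k and scale c. A sketch of the monthly power computation follows; the air density, rotor area and power coefficient are illustrative assumptions, and the cut-in/rated/cut-out truncation of the power curve is deliberately ignored:

```python
import math

def cubic_mean_cube_root(k, c):
    """(E[v^3])^(1/3) for Weibull-distributed wind speed,
    using E[v^3] = c^3 * Gamma(1 + 3/k)."""
    return c * math.gamma(1.0 + 3.0 / k) ** (1.0 / 3.0)

def monthly_energy_kwh(k, c, rho=1.225, area_m2=100.0, cp=0.4, hours=720.0):
    """Mean aerodynamic power 0.5*rho*A*Cp*E[v^3], integrated over a month."""
    p_mean_w = 0.5 * rho * area_m2 * cp * cubic_mean_cube_root(k, c) ** 3
    return p_mean_w * hours / 1000.0
```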

  5. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    considered include: the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte Carlo study of the behavior of the least squares...ceedings Number 261, 1979, pp. 34-1, 34-11. Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980

  6. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
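
    For the simplest case of a single repairable unit, the continuous-time Markov chain with discrete state space has a well-known closed-form point availability; a tiny sketch (the failure and repair rates below are illustrative, not fitted to any plant data):

```python
import math

def availability(t, lam, mu):
    """Point availability of a two-state repairable unit (up -> down at
    rate lam, down -> up at rate mu), the standard CTMC result:
    A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)
```

As t grows, A(t) relaxes to the steady-state availability mu/(lam+mu); larger state spaces are handled the same way via the chain's generator matrix.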

  7. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  8. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  9. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    This work studies the estimation of some stochastic models used in reliability engineering, where continuous probability distributions serve as models for the lifetimes of technical components. We consider the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate the distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity, and also a beta-binomial model for the analysis of failure probability. The parameters of these three models are estimated by the matching-moments method and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure behaviour of a set of components with a Weibull intensity function, using the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system; we therefore consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
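
    For the simplest of the models above, the exponential distribution with right-censored data, the maximum likelihood estimator has the familiar closed form "observed failures over total time on test"; a minimal sketch:

```python
def exp_rate_mle(times, censored):
    """MLE of the exponential failure rate under right-censoring:
    lambda_hat = (number of observed failures) / (total time on test).
    times[i] is a failure time if censored[i] is False, otherwise a
    censoring time; censored units contribute exposure but no failure."""
    failures = sum(1 for c in censored if not c)
    total_time = sum(times)
    return failures / total_time
```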

  10. Shallow velocity model in the area of Pozzo Pitarrone, Mt. Etna, from single station, array methods and borehole data

    Directory of Open Access Journals (Sweden)

    Luciano Zuccarello

    2016-09-01

    Full Text Available Seismic noise recorded by a temporary array installed around Pozzo Pitarrone, on the NE flank of Mt. Etna, has been analysed with several techniques. The single-station HVSR method and the SPAC array method have been applied to stationary seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear-wave velocity model of the area, reliable down to a depth of about 130 m. A comparison of this model with the stratigraphic information available for the investigated area shows good qualitative agreement. Taking advantage of a borehole station installed at 130 m depth, we could also estimate the P-wave velocity by comparing borehole recordings of local earthquakes with the same events recorded at the surface. Further insight into the P-wave velocity of the upper 130 m layer comes from the surface-reflected wave observable in some cases at the borehole station. From this analysis we obtained an average P-wave velocity of about 1.2 km/s, compatible with the shear-wave velocity found from the analysis of seismic noise.

  11. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

    For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in the risk and reliability studies of nuclear power plants. Despite the wide-spread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered as one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects

  12. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
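
    The increase reported when common modes are included can be illustrated with the simple beta-factor parameterization of common cause failure, in which a fraction beta of each unit's failure probability is common to both channels. This is a standard textbook model, not necessarily the exact extension used in the paper:

```python
def redundant_failure_prob(q, beta=0.0):
    """Failure probability of a duplicated (1-out-of-2) system under the
    beta-factor common-cause model: the independent portions of both
    units must fail together, OR the shared common-cause portion fails."""
    q_indep = (1.0 - beta) * q   # independent part of each unit's q
    q_common = beta * q          # common-cause part shared by both units
    return q_indep * q_indep + q_common
```

Even a modest beta dominates the result: with q = 0.01, assuming beta = 0.1 raises the system failure probability by roughly an order of magnitude over the purely independent q².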

  13. Modeling of system reliability Petri nets with aging tokens

    International Nuclear Information System (INIS)

    Volovoi, V.

    2004-01-01

    The paper addresses the dynamic modeling of degrading and repairable complex systems. Emphasis is placed on the convenience of modeling for the end user, with special attention being paid to the modeling part of a problem, which is considered to be decoupled from the choice of solution algorithms. Depending on the nature of the problem, these solution algorithms can include discrete event simulation or numerical solution of the differential equations that govern underlying stochastic processes. Such modularity allows a focus on the needs of system reliability modeling and tailoring of the modeling formalism accordingly. To this end, several salient features are chosen from the multitude of existing extensions of Petri nets, and a new concept of aging tokens (tokens with memory) is introduced. The resulting framework provides for flexible and transparent graphical modeling with excellent representational power that is particularly suited for system reliability modeling with non-exponentially distributed firing times. The new framework is compared with existing Petri-net approaches and other system reliability modeling techniques such as reliability block diagrams and fault trees. The relative differences are emphasized and illustrated with several examples, including modeling of load sharing, imperfect repair of pooled items, multiphase missions, and damage-tolerant maintenance. Finally, a simple implementation of the framework using discrete event simulation is described

  14. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparative study of the velocity increments between the rigid body and flexible models of MMET. The equations of motions of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  15. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  16. Validity and reliability of a novel iPhone app for the measurement of barbell velocity and 1RM on the bench-press exercise.

    Science.gov (United States)

    Balsalobre-Fernández, Carlos; Marchante, David; Muñoz-López, Mario; Jiménez, Sergio L

    2018-01-01

    The purpose of this study was to analyse the validity and reliability of a novel iPhone app (named PowerLift) for the measurement of mean velocity on the bench-press exercise. Additionally, the accuracy of the estimation of the one-repetition maximum (1RM) using the load-velocity relationship was tested. To do this, 10 powerlifters (mean (SD): age = 26.5 ± 6.5 years; relative bench press 1RM = 1.34 ± 0.25 kg·kg-1) completed an incremental test on the bench-press exercise with 5 different loads (75-100% 1RM), while the mean velocity of the barbell was registered using a linear transducer (LT) and PowerLift. Results showed a very high correlation between the LT and the app (r = 0.94, SEE = 0.028 m·s-1) for the measurement of mean velocity. Bland-Altman plots (R² = 0.011) and the intraclass correlation coefficient (ICC = 0.965) revealed very high agreement between the two devices, although a systematic bias, by which the app registered slightly higher values than the LT, was also observed. These results support the app as a valid and reliable tool for measuring barbell velocity in the bench-press exercise.
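
    Estimating 1RM via the load-velocity relationship amounts to fitting an individual linear load-velocity profile and solving it at a minimal-velocity threshold. In the sketch below the 0.17 m/s threshold is an assumption drawn from common bench-press practice, not a value from this study:

```python
def estimate_1rm(loads, velocities, v1rm=0.17):
    """Fit v = a + b*load by least squares over an incremental test,
    then solve for the load at which velocity falls to the assumed
    minimal velocity threshold v1rm (m/s)."""
    n = len(loads)
    mx = sum(loads) / n
    my = sum(velocities) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(loads, velocities))
         / sum((x - mx) ** 2 for x in loads))
    a = my - b * mx
    return (v1rm - a) / b   # invert the fitted line at v = v1rm
```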

  17. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  18. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data…

  19. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS linac (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using the Risk Spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS linac parts/systems are: (1) the SCL (superconducting linac) and the front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; (2) the RF systems (especially the SCL RF system); (3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Sufficient diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.

  20. Models for reliability and management of NDT data

    International Nuclear Information System (INIS)

    Simola, K.

    1997-01-01

    In this paper the reliability of NDT measurements is approached from three directions: we model the flaw sizing performance and the probability of flaw detection, and develop models to update the knowledge of the true flaw size based on sequential measurement results and the flaw sizing reliability model. In the models discussed, the measured flaw characteristics (depth, length) are assumed to be simple functions of the true characteristics plus random noise corresponding to measurement errors, and the models are based on logarithmic transforms. Models for Bayesian updating of the flaw size distributions were developed. Using these models, it is possible to take the prior information on the flaw size into account and combine it with the measured results. A Bayesian approach could contribute, e.g., to the definition of an appropriate combination of practical assessments and technical justifications in NDT system qualifications, as expressed by the European regulatory bodies.
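
Such sequential Bayesian updating of a flaw size can be illustrated with a conjugate normal update on the logarithmic scale, matching the logarithmic-transform structure mentioned above; the prior and noise parameters below are hypothetical, not taken from the paper.

```python
import math

# Prior belief about the true flaw depth (lognormal): log-depth ~ N(mu0, tau0^2).
# Values are illustrative, not from the paper.
mu0, tau0 = math.log(5.0), 0.5      # prior median depth 5 mm

# Sizing model: log(measured) = log(true) + noise, noise ~ N(0, sigma^2).
sigma = 0.3

def update(mu, tau, measured_depth):
    """Conjugate normal update of the log-depth distribution from one measurement."""
    y = math.log(measured_depth)
    precision = 1.0 / tau**2 + 1.0 / sigma**2
    mu_post = (mu / tau**2 + y / sigma**2) / precision
    tau_post = math.sqrt(1.0 / precision)
    return mu_post, tau_post

# Sequential measurements sharpen the posterior, as in the sequential scheme
# described above.
mu, tau = mu0, tau0
for m in [6.2, 5.8, 6.0]:
    mu, tau = update(mu, tau, m)

print(math.exp(mu), tau)   # posterior median depth (mm) and log-scale spread
```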

  1. Multijam Solutions in Traffic Models with Velocity-Dependent Driver Strategies

    DEFF Research Database (Denmark)

    Carter, Paul; Christiansen, Peter Leth; Gaididei, Yuri B.

    2014-01-01

    The optimal-velocity follow-the-leader model is augmented with an equation that allows each driver to adjust their target headway according to the velocity difference between the driver and the car in front. In this more detailed model, which is investigated on a ring, stable and unstable multipu...

  2. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for identification of cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure cause and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model which provides predictions in good agreement with observed system performance, and which is applicable for non-experts in the field of reliability.

  3. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    Science.gov (United States)

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.
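
As a reminder of the physics behind first-arrival analysis, a minimal two-layer sketch shows how direct and head-wave arrivals trade off with offset; the velocities and interface depth are hypothetical, not values from the blind-test model.

```python
import math

# Two-layer model (hypothetical values): layer of velocity v1 over a faster
# halfspace v2 at depth h.
v1, v2, h = 500.0, 1500.0, 10.0   # m/s, m/s, m

def first_arrival(x):
    """First-arrival time at offset x: the earlier of the direct and head-wave arrivals."""
    t_direct = x / v1
    t_refracted = x / v2 + 2.0 * h * math.sqrt(v2**2 - v1**2) / (v1 * v2)
    return min(t_direct, t_refracted)

# Crossover offset where the refracted arrival overtakes the direct wave:
x_cross = 2.0 * h * math.sqrt((v2 + v1) / (v2 - v1))
print(round(x_cross, 2))
```

Traveltime inversion methods such as those compared in the paper effectively run this forward problem, generalized to 2-D heterogeneous models, inside an optimization loop.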

  4. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts: Statistical Models and Methods, Statistical…

  5. Fuse Modeling for Reliability Study of Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Iannuzzo, Francesco; Blaabjerg, Frede

    2017-01-01

    This paper describes a comprehensive modeling approach to the reliability of fuses used in power electronic circuits. When fuses are subjected to current pulses, cyclic temperature stress is introduced into the fuse element and will wear out the component. Furthermore, the fuse may be used in a large…, and the rated voltage/current are prone to shift in time, causing early breaking during normal operation of the circuit. Therefore, in such cases, the reliable protection required for the other circuit components will not be achieved. The thermo-mechanical models, fatigue analysis and thermo…

  6. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Aiming to resolve the problem that a variety of uncertainty variables coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. Finally, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist, and also shows the good convergence of the iterative algorithm proposed in this article.

  7. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This model is specifically applicable for handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are addressed.

  8. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters whose values can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves which represent the probability of control-room crew non-response as a function of time for different conditions affecting their performance. The non-response probability is then a contributor to the overall non-success of operating crews in achieving a functional objective identified in the PRA study. Because the data were sparse, simulator data and some small-scale tests were utilized to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing. The model can potentially help PRA analysts make human reliability assessments more explicit. It incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources, and crew response time data from simulator training exercises.
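
Crew non-response curves of this kind are often represented with a Weibull form in time normalized by the median response time; the sketch below follows that convention, with placeholder coefficients that are not calibrated HCR values.

```python
import math

def non_response_prob(t, t_half, gamma, alpha, beta):
    """Crew non-response probability at time t, with t normalized by the median
    response time t_half; Weibull form with shift gamma, scale alpha, shape beta.
    All coefficients here are placeholders, not calibrated HCR values."""
    x = (t / t_half - gamma) / alpha
    if x <= 0:
        return 1.0   # before the earliest plausible response
    return math.exp(-x ** beta)

# Example: probability that the crew has not yet responded 20 minutes into an
# event with a 10-minute median response time (all values illustrative).
p = non_response_prob(20.0, 10.0, gamma=0.7, alpha=0.4, beta=1.2)
print(p)
```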

  9. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  10. The reliability of the Adelaide in-shoe foot model.

    Science.gov (United States)

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice, such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, how the foot moves inside the shoe largely remains unknown. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions are the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which resulted in less than acceptable repeatability. Based on our results, the Adelaide in-shoe foot model can be used with confidence for 24 commonly measured biomechanical variables during shod walking.

  11. Modeling of humidity-related reliability in enclosures with electronics

    DEFF Research Database (Denmark)

    Hygum, Morten Arnfeldt; Popok, Vladimir

    2015-01-01

    Reliability of electronics that operate outdoor is strongly affected by environmental factors such as temperature and humidity. Fluctuations of these parameters can lead to water condensation inside enclosures. Therefore, modelling of humidity distribution in a container with air and freely exposed...

  12. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Methods of combined redundancy are considered. Models of system reliability accounting for the restoration and preventive maintenance parameters of system blocks are described. Expressions for the average number of preventive maintenance actions and the availability coefficient of system blocks are given.

  13. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at a systems level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

  14. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of a software system becomes large and the number of faults detected during the testing phase grows, the change in the number of faults detected and removed through each debugging becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault-detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error-detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of the stochastic differential equation, performs comparatively better than the existing NHPP-based models.
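
For context, the classical NHPP baseline that such SDE-based models generalize can be sketched with the Goel-Okumoto mean value function; the parameter values below are illustrative, not estimates from the paper's data sets.

```python
import math

# Goel-Okumoto NHPP mean value function: expected cumulative faults by time t.
# a = initial fault content, b = per-fault detection rate (illustrative values).
a, b = 100.0, 0.05

def mean_faults(t):
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t):
    return a - mean_faults(t)

# Software reliability over a mission of length x after release at time t:
# R(x | t) = exp(-(m(t + x) - m(t))).
def reliability(x, t):
    return math.exp(-(mean_faults(t + x) - mean_faults(t)))

print(round(mean_faults(40.0), 1), round(reliability(10.0, 100.0), 3))
```

Reliability grows with testing time because the expected number of faults surfacing in the next interval shrinks as the fault content is depleted.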

  15. Modular reliability modeling of the TJNAF personnel safety system

    International Nuclear Information System (INIS)

    Cinnamon, J.; Mahoney, K.

    1997-01-01

    A reliability model for the Thomas Jefferson National Accelerator Facility (formerly CEBAF) personnel safety system has been developed. The model, which was implemented using an Excel spreadsheet, allows simulation of all or parts of the system. Modularity of the model's implementation allows rapid "what if" case studies to simulate changes in safety system parameters such as redundancy, diversity, and failure rates. Particular emphasis is given to the prediction of failure modes which would result in the failure of both of the redundant safety interlock systems. In addition to calculating the predicted reliability of the safety system, the model also calculates the availability of the same system. Such calculations allow the user to make trade-off studies between reliability and availability, and to target resources to improving those parts of the system which would most benefit from redesign or upgrade. The model includes calculated data, manufacturer's data, and Jefferson Lab field data. This paper describes the model, the methods used, and a comparison of calculated to actual data for the Jefferson Lab personnel safety system. Examples are given to illustrate the model's utility and ease of use.
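
The core series/parallel arithmetic behind such a spreadsheet model can be sketched as follows; the component reliabilities are illustrative, not Jefferson Lab figures.

```python
# Reliability of two redundant interlock chains in parallel, each a series of
# components; all probabilities are illustrative placeholders.
def series(rels):
    """A series arrangement works only if every component works."""
    p = 1.0
    for r in rels:
        p *= r
    return p

def parallel(rels):
    """A parallel (redundant) arrangement fails only if every branch fails."""
    q = 1.0
    for r in rels:
        q *= (1.0 - r)
    return 1.0 - q

chain_a = series([0.999, 0.995, 0.998])   # e.g. sensor, logic, actuator
chain_b = series([0.999, 0.995, 0.998])
system = parallel([chain_a, chain_b])
print(system)
```

The same spreadsheet-style decomposition supports the "what if" studies mentioned above: changing one component's reliability immediately propagates to the system figure.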

  16. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
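
Once minimal cut sets are available, the top-event probability for independent basic events follows from inclusion-exclusion over the cut sets; the small fault tree below is hypothetical, not from the chapter's example.

```python
from itertools import combinations

# Basic-event failure probabilities (illustrative).
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}

# Minimal cut sets of a hypothetical fault tree: the top event occurs if all
# events in any one cut set occur.
cut_sets = [{"A", "B"}, {"C"}]

def cut_prob(cs):
    """Probability that every basic event in the set occurs (independence assumed)."""
    prob = 1.0
    for e in cs:
        prob *= p[e]
    return prob

# Inclusion-exclusion over the cut sets (exact for independent basic events).
top = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        union = set().union(*combo)
        top += (-1) ** (k + 1) * cut_prob(union)

print(top)
```

For many small cut-set probabilities, the first-order (rare-event) sum of the cut-set probabilities is already a close upper bound on the exact inclusion-exclusion value.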

  17. Reliability modeling and analysis of smart power systems

    CERN Document Server

    Karki, Rajesh; Verma, Ajit Kumar

    2014-01-01

    The volume presents research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems, in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes, creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important driver in mitigating these problems and requires considerable research acti…

  18. Nonaligned shocks for discrete velocity models of the Boltzmann equation

    Directory of Open Access Journals (Sweden)

    J. M. Greenberg

    1991-05-01

    At the conclusion of I. Bonzani's presentation on the existence of structured shock solutions to the six-velocity, planar, discrete Boltzmann equation (with binary and triple collisions), Greenberg asked whether such solutions were possible in directions e(α) = (cos α, sin α) when α was not one of the particle flow directions. This question generated a spirited discussion, but it was still open at the conclusion of the conference. In this note the author provides a partial resolution to the question raised above. Using formal perturbation arguments, he produces approximate solutions to the equation considered by Bonzani which represent traveling waves propagating in any direction e(α) = (cos α, sin α).

  19. Modeling of velocity field for vacuum induction melting process

    Institute of Scientific and Technical Information of China (English)

    CHEN Bo; JIANG Zhi-guo; LIU Kui; LI Yi-yi

    2005-01-01

    A numerical simulation of the recirculating flow in the melting of an electromagnetically stirred alloy in a cylindrical induction furnace crucible is presented. Inductive currents and electromagnetic body forces in the alloy under three different solenoid frequencies and three different melting powers were calculated, and the forces were then introduced into the fluid flow equations to simulate the flow of the alloy and the behavior of the free surface. The relationship between the height of the electromagnetic stirring meniscus, the melting power, and the solenoid frequency was derived based on the law of mass conservation. The results show that the inductive currents and the electromagnetic forces vary with the frequency, the melting power, and the physical properties of the metal. The velocity and the height of the meniscus increase with increasing melting power and decreasing solenoid frequency.

  20. Velocity potential formulations of highly accurate Boussinesq-type models

    DEFF Research Database (Denmark)

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.

    2009-01-01

    …with the kinematic bottom boundary condition. The true behaviour of the velocity potential formulation with respect to linear shoaling is given for the first time, correcting errors made by Jamois et al. (Jamois, E., Fuhrman, D.R., Bingham, H.B., Molin, B., 2006. Wave-structure interactions and nonlinear wave processes on the weather side of reflective structures. Coast. Eng. 53, 929-945). An exact infinite series solution for the potential is obtained via a Taylor expansion about an arbitrary vertical position z = ẑ. For practical implementation, however, the solution is expanded based on a slow…

  1. Velocity Deficits in the Wake of Model Lemon Shark Dorsal Fins Measured with Particle Image Velocimetry

    Science.gov (United States)

    Terry, K. N.; Turner, V.; Hackett, E.

    2017-12-01

    Aquatic animals' morphology provides inspiration for human technological developments, as their bodies have evolved and become adapted for efficient swimming. Lemon sharks exhibit a uniquely large second dorsal fin that is nearly the same size as the first fin, the hydrodynamic role of which is unknown. This experimental study looks at the drag forces on a scale model of the Lemon shark's unique two-fin configuration in comparison to drag forces on a more typical one-fin configuration. The experiments were performed in a recirculating water flume, where the wakes behind the scale models are measured using particle image velocimetry. The experiments are performed at three different flow speeds for both fin configurations. The measured instantaneous 2D distributions of the streamwise and wall-normal velocity components are ensemble averaged to generate streamwise velocity vertical profiles. In addition, velocity deficit profiles are computed from the difference between these mean streamwise velocity profiles and the free stream velocity, which is computed based on measured flow rates during the experiments. Results show that the mean velocities behind the fin and near the fin tip are smallest and increase as the streamwise distance from the fin tip increases. The magnitude of velocity deficits increases with increasing flow speed for both fin configurations, but at all flow speeds, the two-fin configurations generate larger velocity deficits than the one-fin configurations. Because the velocity deficit is directly proportional to the drag force, these results suggest that the two-fin configuration produces more drag.
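
The ensemble-averaging step described above can be sketched with synthetic data standing in for the PIV frames; the wake shape, noise level, and free-stream speed are invented for illustration, not measured values.

```python
import numpy as np

# Synthetic stand-in for PIV data: instantaneous streamwise velocity fields
# u[frame, z] behind a fin, with free-stream speed u_inf from the flow rate.
rng = np.random.default_rng(0)
u_inf = 0.5                                        # m/s (illustrative)
z = np.linspace(0.0, 0.1, 50)                      # wall-normal positions (m)
wake = 0.15 * np.exp(-((z - 0.05) / 0.01) ** 2)    # modelled wake dip
u = u_inf - wake + rng.normal(0.0, 0.01, (200, z.size))

# Ensemble-average over frames, then form the velocity deficit profile.
u_mean = u.mean(axis=0)
deficit = u_inf - u_mean

print(round(deficit.max(), 3))
```

Averaging over the 200 synthetic frames suppresses the per-frame noise, leaving the deficit profile whose peak marks the wake centre behind the fin tip.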

  2. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  3. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume that an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
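
The inelastic-collision rule at a junction amounts to a momentum-weighted average of the merging branches; a minimal sketch with hypothetical mass fluxes follows.

```python
# Inelastic-collision rule at a river junction: the merged flow velocity is the
# mass-flux-weighted average of the two branches (momentum conservation).
def merged_velocity(m1, v1, m2, v2):
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Illustrative branches: mass fluxes in kg/s, velocities in m/s.
v = merged_velocity(300.0, 1.2, 100.0, 0.6)
print(v)   # lies between the two branch velocities, nearer the heavier branch
```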

  4. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
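
The probabilistic core of such a model, drawing many realizations of a component's failure and repair history, can be sketched as follows; the MTTF and repair-time values are illustrative placeholders, not PV-RPM parameters.

```python
import random

# One realization of a component's up/down history over a horizon, with
# exponential time-to-failure and a fixed repair time (illustrative values).
def realization(mttf_days=2000.0, repair_days=14.0, horizon_days=3650.0, rng=None):
    rng = rng or random
    t, uptime = 0.0, 0.0
    while t < horizon_days:
        ttf = rng.expovariate(1.0 / mttf_days)
        uptime += min(ttf, horizon_days - t)   # uptime truncated at the horizon
        t += ttf + repair_days                 # downtime while under repair
    return uptime / horizon_days

# Many realizations give the distribution of achieved availability.
random.seed(1)
avail = sum(realization() for _ in range(2000)) / 2000
print(round(avail, 3))
```

In a full performance-plus-reliability model, each realization would also drive an energy-production calculation, so that downtime translates into lost output.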

  5. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

    Full Text Available The article presents a model of Bring Your Own Device (BYOD as a model network, which provides the user reliable access to network resources. BYOD is a model dynamically developing, which can be applied in many areas. Research network has been launched in order to carry out the test, in which as a service of BYOD model Work Folders service was used. This service allows the user to synchronize files between the device and the server. An access to the network is completed through the wireless communication by the 802.11n standard. Obtained results are shown and analyzed in this article.

  6. Evaluation of the Most Reliable Procedure of Determining Jump Height During the Loaded Countermovement Jump Exercise: Take-Off Velocity vs. Flight Time.

    Science.gov (United States)

    Pérez-Castilla, Alejandro; García-Ramos, Amador

    2018-07-01

    Pérez-Castilla, A and García-Ramos, A. Evaluation of the most reliable procedure of determining jump height during the loaded countermovement jump exercise: Take-off velocity vs. flight time. J Strength Cond Res 32(7): 2025-2030, 2018. This study aimed to compare the reliability of jump height between the 2 standard procedures of analyzing force-time data (take-off velocity [TOV] and flight time [FT]) during the loaded countermovement jump (CMJ) exercise performed with a free-weight barbell and in a Smith machine. The jump height of 17 men (age: 22.2 ± 2.2 years, body mass: 75.2 ± 7.1 kg, and height: 177.0 ± 6.0 cm) was tested in 4 sessions (twice for each CMJ type) against external loads of 17, 30, 45, 60, and 75 kg. Jump height reliability was comparable between the TOV (coefficient of variation [CV]: 6.42 ± 2.41%) and FT (CV: 6.53 ± 2.17%) during the free-weight CMJ, but it was higher for the FT when the CMJ was performed in a Smith machine (CV: 11.34 ± 3.73% for TOV and 5.95 ± 1.12% for FT). Bland-Altman plots revealed trivial differences (≤0.27 cm) and no heteroscedasticity of the errors (R ≤ 0.09) for the jump height obtained by the TOV and FT procedures, whereas the random error between both procedures was higher for the CMJ performed in the Smith machine (2.02 cm) compared with the free-weight barbell (1.26 cm). Based on these results, we recommend the FT procedure to determine jump height during the loaded CMJ performed in a Smith machine, whereas the TOV and FT procedures provide similar reliability during the free-weight CMJ.
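
The two procedures reduce to simple ballistic formulas: h = v²/(2g) from take-off velocity and h = gt²/8 from flight time, and for an ideal jump (identical take-off and landing postures) the two coincide. A minimal sketch:

```python
G = 9.81  # m/s^2

def height_from_takeoff_velocity(v):
    """h = v^2 / (2g): rise of the centre of mass after take-off."""
    return v ** 2 / (2.0 * G)

def height_from_flight_time(t):
    """h = g t^2 / 8: assumes take-off and landing postures are identical."""
    return G * t ** 2 / 8.0

# For a ballistic jump the flight time is t = 2 v / G, so the two agree exactly;
# in practice they diverge through measurement error and posture changes.
v = 2.4                      # take-off velocity, m/s (illustrative)
t = 2.0 * v / G
print(height_from_takeoff_velocity(v), height_from_flight_time(t))
```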

  7. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
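The equilibrium distribution used here is the standard renewal-theory transform F_e(t) = (1/mu) * ∫_0^t (1 − F(s)) ds, with mu the mean of F. A numerical sketch of the transform (not the authors' estimation procedure); the exponential distribution, being memoryless, is its own equilibrium distribution, which gives a convenient check:

```python
import numpy as np

def equilibrium_cdf(F, t_grid):
    """Numerically build the equilibrium distribution F_e of a CDF F:
    F_e(t) = (1/mu) * integral_0^t (1 - F(s)) ds."""
    survival = 1.0 - F(t_grid)
    # cumulative trapezoidal integral of the survival function
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (survival[1:] + survival[:-1]) * np.diff(t_grid))))
    mu = integral[-1]  # approximates the mean when t_grid covers the support
    return integral / mu

lam = 2.0
t = np.linspace(0.0, 20.0, 4001)  # covers essentially all exponential mass
F_exp = lambda s: 1.0 - np.exp(-lam * s)
Fe = equilibrium_cdf(F_exp, t)
# The exponential CDF should be recovered almost exactly.
print(np.max(np.abs(Fe - F_exp(t))))
```

In an NHPP SRM the fault-detection time CDF enters the mean value function, so replacing F by F_e changes the model class while keeping it tractable.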

  8. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have improved considerably in recent years and have shown their ability to deal with uncertainties during the design stage or to optimize the functioning and the maintenance of industrial installations. They are based on a mechanical modeling of the structural behavior according to the considered failure modes and on a probabilistic representation of the input parameters of this modeling. In practice, only limited statistical information is available to build the probabilistic representation, and different sophistication levels of the mechanical modeling may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is threefold: 1. first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in the reliability analyses. The obtained reliability index measures the confidence in the structure given the statistical information available. 2. Then, to show a methodology leading to reliability results evaluated from a particular mechanical modeling but by using a less sophisticated one. The objective is to decrease the computational effort required by the reference modeling. 3. Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical modeling that is used. The concepts are illustrated in the case of a welded pipe and in the case of a natural draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr

  9. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    Science.gov (United States)

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to within 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.

  10. Evaluation of a Model for Predicting the Tidal Velocity in Fjord Entrances

    Energy Technology Data Exchange (ETDEWEB)

    Lalander, Emilia [The Swedish Centre for Renewable Electric Energy Conversion, Division of Electricity, Uppsala Univ. (Sweden); Thomassen, Paul [Team Ashes, Trondheim (Norway); Leijon, Mats [The Swedish Centre for Renewable Electric Energy Conversion, Division of Electricity, Uppsala Univ. (Sweden)

    2013-04-15

    Sufficiently accurate and low-cost estimation of tidal velocities is of importance when evaluating a potential site for a tidal energy farm. Here we suggest and evaluate a model to calculate the tidal velocity in fjord entrances. The model is compared with tidal velocities from Acoustic Doppler Current Profiler (ADCP) measurements in the tidal channel Skarpsundet in Norway. The calculated velocity value from the model corresponded well with the measured cross-sectional average velocity, but was shown to underestimate the velocity in the centre of the channel. The effect of this was quantified by calculating the kinetic energy of the flow for a 14-day period. A numerical simulation using TELEMAC-2D was performed and validated with ADCP measurements. Velocity data from the simulation was used as input for calculating the kinetic energy at various locations in the channel. It was concluded that the model presented here is not accurate enough for assessing the tidal energy resource. However, the simplicity of the model was considered promising in the use of finding sites where further analyses can be made.

  11. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  12. Decay constants of heavy mesons in the relativistic potential model with velocity dependent corrections

    International Nuclear Information System (INIS)

    Avaliani, I.S.; Sisakyan, A.N.; Slepchenko, L.A.

    1992-01-01

    In a relativistic model with a velocity-dependent potential, the masses and leptonic decay constants of heavy pseudoscalar and vector mesons are computed. The possibility of using this potential is discussed. 11 refs.; 4 tabs

  13. Testing the reliability of ice-cream cone model

    Science.gov (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui

    2015-04-01

    The properties of coronal mass ejections (CMEs) are important both to the physics of the events themselves and to space-weather prediction. Several models (such as the cone model, the GCS model, and so on) have been proposed to remove the projection effects from the properties observed by spacecraft. Using SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo coronal mass ejections) within the 24th solar cycle up to July 2012 by means of the ice-cream cone model. Since 3D parameters derived from multi-satellite, multi-angle CME observations have higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those from the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.

  14. Creation and Reliability Analysis of Vehicle Dynamic Weighing Model

    Directory of Open Access Journals (Sweden)

    Zhi-Ling XU

    2014-08-01

    Full Text Available In this paper, the portable axle-load meter of a dynamic weighing system is modeled in ADAMS, and the weighing process is simulated while controlling a single variable, yielding simulation weighing data at different speeds and weights. A portable weighing system with the same parameters was used simultaneously to obtain actual measurements. Comparative analysis of the simulation results under the same conditions shows that, at 30 km/h or less, the simulated and measured values differ by no more than 5 %. This not only verifies the reliability of the dynamic weighing model, but also makes it possible to improve the efficiency of algorithm studies by using simulations based on the dynamic weighing model.

  15. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  16. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry, and imperfect maintenance conforms to the actual failure process. Taking the dynamic preventive maintenance cost into account, a preventive maintenance model was constructed using an age reduction factor. The model takes the minimization of the repair cost rate as its final target and uses the smallest allowed reliability as the replacement condition. Equipment life was assumed to follow a two-parameter Weibull distribution, since it is one of the most commonly adopted distributions for fitting cumulative failure problems. A worked example verifies the rationality and benefits of the model.
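A minimal sketch of the reliability-limit policy under an assumed two-parameter Weibull life: each cycle ends when the conditional reliability since the last PM falls to the allowed minimum, and an age reduction factor a models imperfect restoration. All parameter values are illustrative, not taken from the paper:

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def pm_intervals(beta, eta, r_min, a, n_cycles):
    """Successive PM intervals when each cycle ends as the conditional
    reliability drops to r_min, and imperfect PM restores the unit to
    virtual age a * (age at PM), with age reduction factor 0 < a < 1."""
    v = 0.0               # virtual age just after the latest PM
    intervals = []
    for _ in range(n_cycles):
        # Solve R(v + T) / R(v) = r_min for T under the Weibull law.
        T = eta * ((v / eta) ** beta - math.log(r_min)) ** (1.0 / beta) - v
        intervals.append(T)
        v = a * (v + T)   # imperfect restoration
    return intervals

ts = pm_intervals(beta=2.0, eta=1000.0, r_min=0.9, a=0.5, n_cycles=5)
print([round(T, 1) for T in ts])  # intervals shrink as virtual age accumulates
```

With an increasing hazard (beta > 1), the residual virtual age makes each successive interval shorter, which is exactly why a replacement condition is eventually triggered.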

  17. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with Markovian environment. We set up the associated reliability framework by considering the failure of the structure to occur once the degradation process reaches a critical threshold. A closed form solution of the reliability function is obtained thanks to Markov renewal theory. Then, we build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)

  18. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is

  19. Modeling non-Fickian dispersion by use of the velocity PDF on the pore scale

    Science.gov (United States)

    Kooshapur, Sheema; Manhart, Michael

    2015-04-01

    For obtaining a description of reactive flows in porous media, apart from the geometrical complications of resolving the velocities and scalar values, one has to deal with the additional reactive term in the transport equation. An accurate description of the interface of the reacting fluids - which is strongly influenced by dispersion- is essential for resolving this term. In REV-based simulations the reactive term needs to be modeled taking sub-REV fluctuations and possibly non-Fickian dispersion into account. Non-Fickian dispersion has been observed in strongly heterogeneous domains and in early phases of transport. A fully resolved solution of the Navier-Stokes and transport equations which yields a detailed description of the flow properties, dispersion, interfaces of fluids, etc. however, is not practical for domains containing more than a few thousand grains, due to the huge computational effort required. Through Probability Density Function (PDF) based methods, the velocity distribution in the pore space can facilitate the understanding and modelling of non-Fickian dispersion [1,2]. Our aim is to model the transition between non-Fickian and Fickian dispersion in a random sphere pack within the framework of a PDF based transport model proposed by Meyer and Tchelepi [1,3]. They proposed a stochastic transport model where velocity components of tracer particles are represented by a continuous Markovian stochastic process. In addition to [3], we consider the effects of pore scale diffusion and formulate a different stochastic equation for the increments in velocity space from first principles. To assess the terms in this equation, we performed Direct Numerical Simulations (DNS) for solving the Navier-Stokes equation on a random sphere pack. We extracted the PDFs and statistical moments (up to the 4th moment) of the stream-wise velocity, u, and first and second order velocity derivatives both independent and conditioned on velocity. By using this data and

  20. Axial flow velocity patterns in a normal human pulmonary artery model: pulsatile in vitro studies.

    Science.gov (United States)

    Sung, H W; Yoganathan, A P

    1990-01-01

    It has been clinically observed that the flow velocity patterns in the pulmonary artery are directly modified by disease. The present study addresses the hypothesis that altered velocity patterns relate to the severity of various diseases in the pulmonary artery. This paper lays a foundation for that analysis by providing a detailed description of flow velocity patterns in the normal pulmonary artery, using flow visualization and laser Doppler anemometry techniques. The studies were conducted in an in vitro rigid model in a right heart pulse duplicator system. In the main pulmonary artery, a broad central flow field was observed throughout systole. The maximum axial velocity (150 cm s-1) was measured at peak systole. In the left pulmonary artery, the axial velocities were approximately evenly distributed in the perpendicular plane. However, in the bifurcation plane, they were slightly skewed toward the inner wall at peak systole and during the deceleration phase. In the right pulmonary artery, the axial velocity in the perpendicular plane had a very marked M-shaped profile at peak systole and during the deceleration phase, due to a pair of strong secondary flows. In the bifurcation plane, higher axial velocities were observed along the inner wall, while lower axial velocities were observed along the outer wall and in the center. Overall, relatively low levels of turbulence were observed in all the branches during systole. The maximum turbulence intensity measured was at the boundary of the broad central flow field in the main pulmonary artery at peak systole.

  1. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership function of each objective, transform the membership functions into equivalent linear membership functions by a first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  2. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution on 16 failure time data sets collected in real software projects.
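A normal fault-detection time distribution gives the NHPP mean value function Λ(t) = ω Φ((t − μ)/σ), with ω the expected total number of faults and Φ the standard normal CDF. A small sketch of this model form with hypothetical parameters (the paper's EM-based estimation is not reproduced here):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_value_function(t, omega, mu, sigma):
    """Expected cumulative number of detected faults by time t for an
    NHPP SRGM with normal fault-detection time distribution."""
    return omega * normal_cdf((t - mu) / sigma)

def intensity(t, omega, mu, sigma):
    """Fault-detection intensity lambda(t) = d/dt of the mean value function."""
    return (omega / (sigma * math.sqrt(2.0 * math.pi))
            * math.exp(-0.5 * ((t - mu) / sigma) ** 2))

omega, mu, sigma = 100.0, 30.0, 10.0  # hypothetical parameters
print([round(mean_value_function(t, omega, mu, sigma), 1) for t in (10, 30, 60)])
```

The S-shaped mean value function saturates at ω, so the number of remaining faults at time t is simply ω − Λ(t).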

  3. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by PETROBRAS Gas and Power Department and Det Norske Veritas. It was carried out with the objective of evaluating security of supply of 2010 gas network design that was conceived to connect Brazilian Northeast and Southeast regions. To provide best in class analysis, state of the art software was used to quantify the availability and the efficiency of the overall network and its individual components.

  4. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment from normal operations in the nuclear fuel cycle and in hypothetical accident conditions has increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction to the subject, with particular emphasis given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  5. Mathematical Modeling for Energy Dissipation Behavior of Velocity ...

    African Journals Online (AJOL)

    The developed oil-pressure damper is installed with an additional Relief Valve parallel to the Throttle Valve. This is intended to obtain an adaptive control by changing the damping coefficient of this damper using changeable orifice size. In order to simulate its actual energy-dissipating behavior, a serial friction model and a ...

  6. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: materials/device degradation; degradation kinetics; time-to-failure modeling; statistical tools; failure-rate modeling; accelerated testing; ramp-to-failure testing; important failure mechanisms for integrated circuits; important failure mechanisms for mechanical components; conversion of dynamic stresses into static equivalents; small design changes producing major reliability improvements; screening methods; heat generation and dissipation; and sampling plans and confidence intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  7. Power Electronic Packaging Design, Assembly Process, Reliability and Modeling

    CERN Document Server

    Liu, Yong

    2012-01-01

    Power Electronic Packaging presents an in-depth overview of power electronic packaging design, assembly, reliability and modeling. Since there is a drastic difference between IC fabrication and power electronic packaging, the book systematically introduces typical power electronic packaging design, assembly, reliability and failure analysis and material selection so readers can clearly understand each task's unique characteristics. Power electronic packaging is one of the fastest growing segments in the power electronic industry, due to the rapid growth of power integrated circuit (IC) fabrication, especially for applications like portable, consumer, home, computing and automotive electronics. This book also covers how advances in both semiconductor content and power advanced package design have helped cause advances in power device capability in recent years. The author extrapolates the most recent trends in the book's areas of focus to highlight where further improvement in materials and techniques can d...

  8. Quantum Gravity and Maximum Attainable Velocities in the Standard Model

    International Nuclear Information System (INIS)

    Alfaro, Jorge

    2007-01-01

    A main difficulty in the quantization of the gravitational field is the lack of experiments that discriminate among the theories proposed to quantize gravity. Recently we showed that the Standard Model (SM) itself contains tiny Lorentz invariance violation (LIV) terms coming from QG. All terms depend on one arbitrary parameter α that sets the scale of QG effects. In this talk we review the LIV for mesons, nucleons and leptons and apply it to study several effects, including the GZK anomaly

  9. Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.

    Science.gov (United States)

    Mouri, Hideaki

    2015-12-01

    For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.

  10. Measurement of velocity deficit at the downstream of a 1:10 axial hydrokinetic turbine model

    Energy Technology Data Exchange (ETDEWEB)

    Gunawan, Budi [ORNL; Neary, Vincent S [ORNL; Hill, Craig [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414; Chamorro, Leonardo [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414

    2012-01-01

    Wake recovery constrains the downstream spacing and density of turbines that can be deployed in turbine farms and limits the amount of energy that can be produced at a hydrokinetic energy site. This study investigates the wake recovery downstream of a 1:10 axial flow turbine model using a pulse-to-pulse coherent Acoustic Doppler Profiler (ADP). In addition, turbine inflow and outflow velocities were measured for calculating the thrust on the turbine. The results show that the depth-averaged longitudinal velocity recovers to 97% of the inflow velocity at 35 turbine diameters (D) downstream of the turbine.

  11. Lane-changing behavior and its effect on energy dissipation using full velocity difference model

    Science.gov (United States)

    Wang, Jian; Ding, Jian-Xun; Shi, Qin; Kühne, Reinhart D.

    2016-07-01

    In real urban traffic, roadways are usually multilane with lane-specific velocity limits. Most previous researches are derived from single-lane car-following theory which in the past years has been extensively investigated and applied. In this paper, we extend the continuous single-lane car-following model (full velocity difference model) to simulate the three-lane-changing behavior on an urban roadway which consists of three lanes. To meet incentive and security requirements, a comprehensive lane-changing rule set is constructed, taking safety distance and velocity difference into consideration and setting lane-specific speed restriction for each lane. We also investigate the effect of lane-changing behavior on distribution of cars, velocity, headway, fundamental diagram of traffic and energy dissipation. Simulation results have demonstrated asymmetric lane-changing “attraction” on changeable lane-specific speed-limited roadway, which leads to dramatically increasing energy dissipation.

  12. A wave propagation model of blood flow in large vessels using an approximate velocity profile function

    NARCIS (Netherlands)

    Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.

    2007-01-01

    Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological

  13. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for the need for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; due to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation

  14. Mathematical modeling of groundwater contamination with varying velocity field

    Directory of Open Access Journals (Sweden)

    Das Pintu

    2017-06-01

    Full Text Available In this study, analytical models for predicting groundwater contamination in isotropic and homogeneous porous formations are derived. The impact of dispersion and diffusion coefficients is included in the solution of the advection-dispersion equation (ADE), subject to transient (time-dependent) boundary conditions at the origin. A retardation factor and zero-order production terms are included in the ADE. Analytical solutions are obtained using the Laplace Integral Transform Technique (LITT) and the concept of the linear isotherm. For illustration, analytical solutions for linearly space- and time-dependent hydrodynamic dispersion coefficients along with molecular diffusion coefficients are presented. The behavior of the analytical solutions is explored as a function of the Peclet number. Numerical solutions are obtained by explicit finite difference methods and are compared with analytical solutions. Numerical results are analysed for different types of geological porous formations, i.e., aquifer and aquitard. The accuracy of results is evaluated by the root mean square error (RMSE).
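An explicit finite-difference scheme of the kind used for comparison can be sketched for the constant-coefficient 1-D ADE without retardation or production (both of which the study does include): upwind differencing for advection, central differencing for dispersion, and a time step respecting the stability limits. All values are illustrative:

```python
import numpy as np

# 1-D advection-dispersion, explicit scheme (upwind advection, central diffusion)
L, nx = 1.0, 201
dx = L / (nx - 1)
v, D = 1.0, 1e-3                            # velocity and dispersion coefficient
dt = 0.4 * min(dx / v, dx * dx / (2 * D))   # respect CFL and diffusion limits
nt = int(0.3 / dt)

x = np.linspace(0.0, L, nx)
C = np.exp(-((x - 0.2) / 0.05) ** 2)        # initial Gaussian pulse

for _ in range(nt):
    adv = -v * (C[1:-1] - C[:-2]) / dx                  # upwind (v > 0)
    dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx ** 2  # central diffusion
    C[1:-1] += dt * (adv + dif)
    C[0], C[-1] = 0.0, C[-2]                # simple inflow/outflow conditions

print(x[np.argmax(C)])  # pulse centre has advected downstream from x = 0.2
```

Under the chosen dt the update is a convex combination of neighbouring values, so the scheme stays stable and positivity-preserving; comparing such numerical profiles against a closed-form solution is what the RMSE in the abstract quantifies.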

  15. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, where corrosion rate is represented as a Poisson square wave process. The resulting model represents inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing problem physics
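The Poisson square wave idea can be illustrated directly: the corrosion rate holds a random value between events of a Poisson process and is resampled at each event, so every sample path of depth is continuous and piecewise linear. This is a generic sketch, not the authors' calibrated model:

```python
import random

def corrosion_path(t_end, event_rate, rate_sampler, rng):
    """Simulate one corrosion-depth history under a Poisson square wave
    corrosion rate: the rate is resampled at each Poisson event."""
    t, depth = 0.0, 0.0
    rate = rate_sampler(rng)
    history = [(0.0, 0.0)]
    while True:
        dt = rng.expovariate(event_rate)  # time to next Poisson event
        if t + dt >= t_end:
            depth += rate * (t_end - t)
            history.append((t_end, depth))
            return history
        t += dt
        depth += rate * dt
        history.append((t, depth))
        rate = rate_sampler(rng)          # fresh rate after the event

rng = random.Random(42)
# Hypothetical parameters: events every 2 time units on average,
# rate drawn uniformly between 0 and 0.2 depth units per time unit.
path = corrosion_path(50.0, 0.5, lambda r: r.uniform(0.0, 0.2), rng)
print(path[-1])  # (final time, final corrosion depth)
```

Because the rate is non-negative, every simulated history is continuous and non-decreasing, unlike the jump-only or linear-random-variable models the abstract contrasts against.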

  16. Do downscaled general circulation models reliably simulate historical climatic conditions?

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
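
The KS comparison of observed and downscaled distributions can be illustrated with a small self-contained implementation of the two-sample statistic (a sketch; the study itself presumably used a standard statistical package):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of samples x and y."""
    x, y = np.sort(x), np.sort(y)
    data = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, data, side="right") / len(x)
    cdf_y = np.searchsorted(y, data, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))
```

A small statistic means the two samples (e.g., gridded station data vs. downscaled GCM output) could plausibly come from the same distribution; the significance threshold then converts the statistic into an accept/reject criterion per location.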

  17. Using the Weibull distribution reliability, modeling and inference

    CERN Document Server

    McCool, John I

    2012-01-01

    Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution
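
As a small illustration of the kind of Weibull reliability work the book covers in much greater depth, here is a median-rank-regression fit of shape and scale; Bernard's approximation and the simulation parameters are this sketch's choices, not the book's.

```python
import numpy as np

def weibull_mrr_fit(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) by median-rank
    regression: ln(-ln(1-F)) is linear in ln(t) with slope beta."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    ranks = np.arange(1, n + 1)
    F = (ranks - 0.3) / (n + 0.4)          # Bernard's median-rank approximation
    x, y = np.log(t), np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(x, y, 1)  # slope is the shape parameter
    eta = np.exp(-intercept / beta)        # intercept = -beta * ln(eta)
    return float(beta), float(eta)

def weibull_reliability(t, beta, eta):
    """Probability of surviving beyond time t: R(t) = exp(-(t/eta)**beta)."""
    return np.exp(-(np.asarray(t) / eta) ** beta)
```

By construction, the reliability at the scale parameter eta is always exp(-1), about 0.368, regardless of shape.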

  18. Understanding software faults and their role in software reliability modeling

    Science.gov (United States)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability become better understood in the modeling process, this information begins to have important implications on the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. 
The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the
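
The LOC/statement-count collinearity the passage describes is easy to quantify. A small sketch with hypothetical metric data (the 0.8 ratio and noise level are invented for illustration) computes the correlation and the resulting variance inflation factor that makes regression coefficients unstable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical module metrics: statement count tracks lines of code closely.
loc = rng.integers(50, 500, size=200).astype(float)
stmts = 0.8 * loc + rng.normal(0.0, 5.0, size=200)

r = np.corrcoef(loc, stmts)[0, 1]   # strength of the linear relationship
vif = 1.0 / (1.0 - r**2)            # variance inflation factor for either predictor
```

A VIF this large means the two predictors carry nearly the same information, which is exactly why regression coefficients estimated from both become sensitive to slight changes in the data.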

  19. A math model for high velocity sensoring with a focal plane shuttered camera.

    Science.gov (United States)

    Morgan, P.

    1971-01-01

    A new mathematical model is presented which describes the image produced by a focal plane shutter-equipped camera. The model is based upon the well-known collinearity condition equations and incorporates both the translational and rotational motion of the camera during the exposure interval. The first differentials of the model with respect to exposure interval, delta t, yield the general matrix expressions for image velocities which may be simplified to known cases. The exposure interval, delta t, may be replaced under certain circumstances with a function incorporating blind velocity and image position if desired. The model is tested using simulated Lunar Orbiter data and found to be computationally stable as well as providing excellent results, provided that some external information is available on the velocity parameters.

  20. Assessment of effectiveness of geologic isolation systems: geostatistical modeling of pore velocity

    International Nuclear Information System (INIS)

    Devary, J.L.; Doctor, P.G.

    1981-06-01

    A significant part of evaluating a geologic formation as a nuclear waste repository involves the modeling of contaminant transport in the surrounding media in the event the repository is breached. The commonly used contaminant transport models are deterministic. However, the spatial variability of hydrologic field parameters introduces uncertainties into contaminant transport predictions. This paper discusses the application of geostatistical techniques to the modeling of spatially varying hydrologic field parameters required as input to contaminant transport analyses. Kriging estimation techniques were applied to Hanford Reservation field data to calculate hydraulic conductivity and the ground-water potential gradients. These quantities were statistically combined to estimate the groundwater pore velocity and to characterize the pore velocity estimation error. Combining geostatistical modeling techniques with product error propagation techniques results in an effective stochastic characterization of groundwater pore velocity, a hydrologic parameter required for contaminant transport analyses

  1. Developing a Crustal and Upper Mantle Velocity Model for the Brazilian Northeast

    Science.gov (United States)

    Julia, J.; Nascimento, R.

    2013-05-01

    Development of 3D models for the earth's crust and upper mantle is important for accurately predicting travel times for regional phases and to improve seismic event location. The Brazilian Northeast is a tectonically active area within stable South America and displays one of the highest levels of seismicity in Brazil, with earthquake swarms containing events up to mb 5.2. Since 2011, seismic activity is routinely monitored through the Rede Sismográfica do Nordeste (RSisNE), a permanent network supported by the national oil company PETROBRAS and consisting of 15 broadband stations with an average spacing of ~200 km. Accurate event locations are required to correctly characterize and identify seismogenic areas in the region and assess seismic hazard. Yet, no 3D model of crustal thickness and crustal and upper mantle velocity variation exists. The first step in developing such models is to refine crustal thickness and depths to major seismic velocity boundaries in the crust and improve on seismic velocity estimates for the upper mantle and crustal layers. We present recent results in crustal and uppermost mantle structure in NE Brazil that will contribute to the development of a 3D model of velocity variation. Our approach has consisted of: (i) computing receiver functions to obtain point estimates of crustal thickness and Vp/Vs ratio and (ii) jointly inverting receiver functions and surface-wave dispersion velocities from an independent tomography study to obtain S-velocity profiles at each station. This approach has been used at all the broadband stations of the monitoring network plus 15 temporary, short-period stations that reduced the inter-station spacing to ~100 km. We expect our contributions will provide the basis to produce full 3D velocity models for the Brazilian Northeast and help determine accurate locations for seismic events in the region.

  2. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.

  3. Modelling Velocity Spectra in the Lower Part of the Planetary Boundary Layer

    DEFF Research Database (Denmark)

    Olesen, H.R.; Larsen, Søren Ejling; Højstrup, Jørgen

    1984-01-01

    …of the planetary boundary layer. Knowledge of the variation with stability of the (reduced) frequency f for the spectral maximum is utilized in this modelling. Stable spectra may be normalized so that they adhere to one curve only, irrespective of stability, and unstable w-spectra may also be normalized to fit one curve. The problem of using filtered velocity variances when modelling spectra is discussed. A simplified procedure to provide a first estimate of the filter effect is given. In stable, horizontal velocity spectra, there is often a ‘gap’ at low frequencies. Using dimensional considerations and the spectral model previously derived, an expression for the gap frequency is found.

  4. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    Science.gov (United States)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network of Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  5. Three-dimensional flow of a nanofluid over a permeable stretching/shrinking surface with velocity slip: A revised model

    Science.gov (United States)

    Jusoh, R.; Nazar, R.; Pop, I.

    2018-03-01

    A reformulation of the three-dimensional flow of a nanofluid employing Buongiorno's model is presented. A new boundary condition is implemented in this study, with the assumption that the nanoparticle mass flux at the surface is zero. This condition is practically more realistic, since the nanoparticle fraction at the boundary is latently controlled. This study is devoted to investigating the impact of velocity slip and suction on the flow and heat transfer characteristics of the nanofluid. The governing partial differential equations corresponding to the momentum, energy, and concentration are reduced to ordinary differential equations by utilizing the appropriate transformation. Numerical solutions of the ordinary differential equations are obtained by using the built-in bvp4c function in Matlab. Graphical illustrations displaying the physical influence of the several nanofluid parameters on the flow velocity, temperature, and nanoparticle volume fraction profiles, as well as the skin friction coefficient and the local Nusselt number, are provided. The present study discovers the existence of dual solutions in a certain range of parameters. Surprisingly, both of the solutions merge at the stretching sheet, indicating that the presence of the velocity slip affects the skin friction coefficients. Stability analysis is carried out to determine the stability and reliability of the solutions. It is found that the first solution is stable while the second solution is not.
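
The paper solves its similarity equations with MATLAB's bvp4c; the same class of boundary-value problem can be sketched in plain Python with a shooting method. The classical Blasius boundary-layer equation f''' + 0.5·f·f'' = 0, with f(0) = f'(0) = 0 and f'(∞) = 1, stands in here for the more involved nanofluid system (an assumption of this sketch, not the paper's actual ODEs):

```python
import numpy as np

def blasius_fprime_at(eta_max, s, n=1000):
    """RK4-integrate the Blasius similarity ODE f''' + 0.5*f*f'' = 0 from
    eta = 0 with f(0) = f'(0) = 0 and guessed wall shear f''(0) = s;
    return f'(eta_max), which should approach 1 for the right guess."""
    def rhs(y):
        f, fp, fpp = y
        return np.array([fp, fpp, -0.5 * f * fpp])
    y = np.array([0.0, 0.0, s])
    h = eta_max / n
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y[1]

def shoot_wall_shear(lo=0.1, hi=1.0, eta_max=10.0, tol=1e-8):
    """Bisect on the unknown wall shear f''(0) until f'(inf) = 1 holds;
    f'(eta_max) increases monotonically with the guess, so bisection works."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if blasius_fprime_at(eta_max, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The converged wall shear is the classical value f''(0) ≈ 0.332, the same quantity from which skin friction coefficients such as those reported in the paper are computed.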

  6. UCVM: An Open Source Framework for 3D Velocity Model Research

    Science.gov (United States)

    Gill, D.; Maechling, P. J.; Jordan, T. H.; Plesch, A.; Taborda, R.; Callaghan, S.; Small, P.

    2013-12-01

    Three-dimensional (3D) seismic velocity models provide fundamental input data to ground motion simulations, in the form of structured or unstructured meshes or grids. Numerous models are available for California, as well as for other parts of the United States and Europe, but models do not share a common interface. Being able to interact with these models in a standardized way is critical in order to configure and run 3D ground motion simulations. The Unified Community Velocity Model (UCVM) software, developed by researchers at the Southern California Earthquake Center (SCEC), is an open source framework designed to provide a cohesive way to interact with seismic velocity models. We describe the several ways in which we have improved the UCVM software over the last year. We have simplified the UCVM installation process by automating the installation of various community codebases, improving the ease of use. We discuss how UCVM software was used to build velocity meshes for high-frequency (4 Hz) deterministic 3D wave propagation simulations, and how the UCVM framework interacts with other open source resources, such as NetCDF file formats for visualization. The UCVM software uses a layered software architecture that transparently converts geographic coordinates to the coordinate systems used by the underlying velocity models and supports inclusion of a configurable near-surface geotechnical layer, while interacting with the velocity model codes through their existing software interfaces. No changes to the velocity model codes are required. Our recent UCVM installation improvements bundle UCVM with a setup script, written in Python, which guides users through the process that installs the UCVM software along with all the user-selectable velocity models. Each velocity model is converted into a standardized (configure, make, make install) format that is easily downloaded and installed via the script. UCVM is often run in specialized high performance computing (HPC

  7. Reliable critical sized defect rodent model for cleft palate research.

    Science.gov (United States)

    Mostafa, Nesrine Z; Doschak, Michael R; Major, Paul W; Talwar, Reena

    2014-12-01

    Suitable animal models are necessary to test the efficacy of new bone grafting therapies in cleft palate surgery. Rodent models of cleft palate are available but have limitations. This study compared and modified mid-palate cleft (MPC) and alveolar cleft (AC) models to determine the most reliable and reproducible model for bone grafting studies. The published MPC model (9 × 5 × 3 mm³) lacked sufficient information for the tested rats. Our initial studies utilizing the AC model (7 × 4 × 3 mm³) in 8- and 16-week-old Sprague Dawley (SD) rats revealed injury to adjacent structures. After comparing anteroposterior and transverse maxillary dimensions in 16-week-old SD and Wistar rats, virtual planning was performed to modify MPC and AC defect dimensions, taking the adjacent structures into consideration. Modified MPC (7 × 2.5 × 1 mm³) and AC (5 × 2.5 × 1 mm³) defects were employed in 16-week-old Wistar rats and healing was monitored by micro-computed tomography and histology. Maxillary dimensions in SD and Wistar rats were not significantly different. Preoperative virtual planning enhanced postoperative surgical outcomes. Bone healing occurred at the defect margin, leaving a central bone void and confirming the critical-size nature of the modified MPC and AC defects. The presented modifications to the MPC and AC models created clinically relevant and reproducible defects. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  8. Animal models of surgically manipulated flow velocities to study shear stress-induced atherosclerosis.

    Science.gov (United States)

    Winkel, Leah C; Hoogendoorn, Ayla; Xing, Ruoyu; Wentzel, Jolanda J; Van der Heiden, Kim

    2015-07-01

    Atherosclerosis is a chronic inflammatory disease of the arterial tree that develops at predisposed sites, coinciding with locations that are exposed to low or oscillating shear stress. Manipulating flow velocity, and concomitantly shear stress, has proven adequate to promote endothelial activation and subsequent plaque formation in animals. In this article, we will give an overview of the animal models that have been designed to study the causal relationship between shear stress and atherosclerosis by surgically manipulating blood flow velocity profiles. These surgically manipulated models include arteriovenous fistulas, vascular grafts, arterial ligation, and perivascular devices. We review these models of manipulated blood flow velocity from an engineering and biological perspective, focusing on the shear stress profiles they induce and the vascular pathology that is observed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Usage models in reliability assessment of software-based systems

    Energy Technology Data Exchange (ETDEWEB)

    Haapanen, P.; Pulkkinen, U. [VTT Automation, Espoo (Finland); Korhonen, J. [VTT Electronics, Espoo (Finland)

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.).
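
A usage model for statistical test generation is commonly a Markov chain over user-visible states, from which test cases are drawn as random walks. A minimal hypothetical sketch (the states and probabilities are invented for illustration, not taken from the OHA project):

```python
import random

# Hypothetical usage model: states are user-visible modes of the system and
# transition probabilities come from an estimated operational profile.
USAGE_MODEL = {
    "start":   [("idle", 1.0)],
    "idle":    [("command", 0.7), ("stop", 0.3)],
    "command": [("command", 0.4), ("idle", 0.5), ("error", 0.1)],
    "error":   [("idle", 1.0)],
    "stop":    [],
}

def generate_test_case(model, rng, start="start", stop="stop", max_len=100):
    """Random walk through the usage model; the visited state sequence
    is one statistically representative test case."""
    state, path = start, [start]
    while state != stop and len(path) < max_len:
        r, acc = rng.random(), 0.0
        for nxt, p in model[state]:
            acc += p
            if r <= acc:
                break
        state = nxt          # falls through to the last choice on rounding
        path.append(state)
    return path

rng = random.Random(42)
cases = [generate_test_case(USAGE_MODEL, rng) for _ in range(100)]
```

Because test cases are sampled according to the operational profile, pass/fail statistics over many generated cases support the kind of statistical reliability claims the report discusses.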

  10. Reliability model for offshore wind farms; Paalidelighedsmodel for havvindmoelleparker

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, P.; Lundtang Paulsen, J.; Lybech Toegersen, M.; Krogh, T. [Risoe National Lab., Roskilde (Denmark); Raben, N.; Donovan, M.H.; Joergensen, L. [SEAS (Denmark); Winther-Jensen, M.

    2002-05-01

    A method for the prediction of the mean availability of an offshore wind farm has been developed. Factors comprised are the reliability of the single turbine, the strategy for preventive maintenance, the climate, the number of repair teams, and the type of boats available for transport. The mean availability is defined as the sum of the fractions of time where each turbine is available for production. The project has been carried out together with SEAS Wind Technique, and their site Roedsand has been chosen as the example for the work. A climate model has been created based on actual site measurements. The prediction of the availability is done with a Monte Carlo simulation. Software was developed for the preparation of the climate model from weather measurements as well as for the Monte Carlo simulation. Three examples have been simulated, one with guessed parameters, and the other two with parameters closer to the Roedsand case. (au)
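
The availability Monte Carlo can be sketched hour by hour: turbines fail after random lifetimes, a limited pool of repair teams fixes them, and a repair can only start in workable weather. Every number below, and the i.i.d. weather stand-in for the real climate model, is an illustrative assumption, not the report's calibration.

```python
import random

def farm_availability(n_turbines, hours, mtbf_h, repair_h, n_teams,
                      p_weather, rng):
    """Crude hour-by-hour Monte Carlo of mean farm availability:
    exponential lifetimes, a limited team pool, and a repair that can
    only be dispatched in an hour with workable weather."""
    fail_at = [rng.expovariate(1.0 / mtbf_h) for _ in range(n_turbines)]
    up_again = [0.0] * n_turbines        # end of the current repair, if any
    team_free = [0.0] * n_teams          # when each repair team becomes free
    up_hours = 0
    for t in range(hours):
        for i in range(n_turbines):
            if t >= up_again[i] and t < fail_at[i]:
                up_hours += 1            # turbine is running
            elif t >= up_again[i] and t >= fail_at[i]:
                # failed and not yet under repair: need a free team + weather
                k = min(range(n_teams), key=lambda j: team_free[j])
                if team_free[k] <= t and rng.random() < p_weather:
                    up_again[i] = t + repair_h
                    team_free[k] = up_again[i]
                    fail_at[i] = up_again[i] + rng.expovariate(1.0 / mtbf_h)
            # otherwise the repair is underway and the turbine is down
    return up_hours / (hours * n_turbines)
```

With a mean time between failures of 2000 h and roughly 100 h repairs, the steady-state availability lands near mtbf / (mtbf + repair + dispatch delay), i.e., around 95% for these toy numbers.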

  11. Usage models in reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Haapanen, P.; Pulkkinen, U.; Korhonen, J.

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.)

  12. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
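
The abstaining decision rule can be illustrated with a simplified cost-based policy; this is a stand-in for the paper's derived policy (which also accounts for detection delay), with the margin band as this sketch's assumption:

```python
def event_decision(p_event, cost_fp, cost_fn, margin=0.05):
    """Alarm when the estimated event probability clearly exceeds the
    cost-ratio threshold, stay quiet when clearly below it, and abstain
    (defer, await more data) inside the uncertain band around it."""
    threshold = cost_fp / (cost_fp + cost_fn)
    if p_event >= threshold + margin:
        return "alarm"
    if p_event <= threshold - margin:
        return "no-alarm"
    return "abstain"
```

With a missed event nine times as costly as a false alarm, the threshold sits at 0.1, so estimates hovering near 0.1 yield an abstention rather than a forced call.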

  13. Simultaneous inversion for hypocenters and lateral velocity variation: An iterative solution with a layered model

    Energy Technology Data Exchange (ETDEWEB)

    Hawley, B.W.; Zandt, G.; Smith, R.B.

    1981-08-10

    An iterative inversion technique has been developed that uses the direct P and S wave arrival times from local earthquakes to compute simultaneously a three-dimensional velocity structure and relocated hypocenters. Crustal structure is modeled by subdividing flat layers into rectangular blocks. An interpolation function is used to smoothly vary velocities between blocks, allowing ray trace calculations of travel times in a three-dimensional medium. Tests using synthetic data from known models show that solutions are reasonably independent of block size and spatial distribution but are sensitive to the choice of layer thicknesses. Application of the technique to observed earthquake data from north-central Utah showed the following: (1) lateral velocity variations in the crust as large as 7% occur over 30-km distances, (2) earthquake epicenters computed with the three-dimensional velocity structure were shifted an average of 3.0 km from locations determined assuming homogeneous flat layered models, and (3) the laterally varying velocity structure correlates with anomalous variations in the local gravity and aeromagnetic fields, suggesting that the new velocity information can be valuable in acquiring a better understanding of crustal structure.

  14. Calculation of pressure gradients from MR velocity data in a laminar flow model

    International Nuclear Information System (INIS)

    Adler, R.S.; Chenevert, T.L.; Fowlkes, J.B.; Pipe, J.G.; Rubin, J.M.

    1990-01-01

    This paper reports on the ability of current imaging modalities to provide velocity-distribution data that offers the possibility of noninvasive pressure-gradient determination from an appropriate rheologic model of flow. A simple laminar flow model is considered at low Reynolds number, with (dp/dz)calc = 0.59 + 1.13 × (dp/dz)meas (R² = 0.994), in units of dyne/cm²/cm, for the range of flows considered. The authors' results indicate the potential usefulness of noninvasive pressure-gradient determinations from quantitative analysis of imaging-derived velocity data
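
For fully developed laminar (Poiseuille) flow the rheologic model linking velocity to pressure gradient is explicit, so a gradient can be recovered from an imaged velocity profile. A sketch of that step (the geometry, viscosity, and one-parameter fit below are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

def pressure_gradient_from_profile(r, u, mu, R):
    """Fit a Poiseuille parabola u(r) = umax * (1 - (r/R)**2) to measured
    velocities by one-parameter least squares, then recover the axial
    pressure gradient from dp/dz = -4 * mu * umax / R**2."""
    basis = 1.0 - (np.asarray(r, dtype=float) / R) ** 2
    umax = float(np.dot(basis, np.asarray(u, dtype=float))
                 / np.dot(basis, basis))
    return -4.0 * mu * umax / R**2
```

For a tube of radius 0.5 cm, viscosity 0.04 poise, and a 10 cm/s centerline velocity, the recovered gradient is -6.4 dyne/cm²/cm, the same CGS units used in the abstract.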

  15. Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions

    Science.gov (United States)

    Kim, A.; Dreger, D.; Larsen, S.

    2008-12-01

    We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects the finite fault inverse solutions. Various studies (e.g. Michael and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. Also the fault zone at Parkfield is wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and the 2004 Parkfield earthquakes. For high resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta earthquake using the same source parameterization and data but different Green's functions and found that the models were quite different. This indicates that the choice of the velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to the slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions.
In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0

  16. Should tsunami models use a nonzero initial condition for horizontal velocity?

    Science.gov (United States)

    Nava, G.; Lotto, G. C.; Dunham, E. M.

    2017-12-01

    Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require two initial conditions: one on sea surface height and another on depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). We run several full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor, using both idealized structures and a more realistic Tohoku structure. Substantial horizontal momentum is imparted to the ocean, but almost all momentum is carried away in the form of ocean acoustic waves. We compare tsunami propagation in each full-physics simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial conditions. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. 
Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves (from ocean acoustic and seismic waves) at some final time, and backpropagating the tsunami

  17. Modeling continuous seismic velocity changes due to ground shaking in Chile

    Science.gov (United States)

    Gassenmeier, Martina; Richter, Tom; Sens-Schönfelder, Christoph; Korn, Michael; Tilmann, Frederik

    2015-04-01

    In order to investigate temporal seismic velocity changes due to earthquake related processes and environmental forcing, we analyze 8 years of ambient seismic noise recorded by the Integrated Plate Boundary Observatory Chile (IPOC) network in northern Chile between 18° and 25° S. The Mw 7.7 Tocopilla earthquake in 2007 and the Mw 8.1 Iquique earthquake in 2014 as well as numerous smaller events occurred in this area. By autocorrelation of the ambient seismic noise field, approximations of the Green's functions are retrieved. The recovered function represents backscattered or multiply scattered energy from the immediate neighborhood of the station. To detect relative changes of the seismic velocities we apply the stretching method, which compares individual autocorrelation functions to stretched or compressed versions of a long term averaged reference autocorrelation function. We use time windows in the coda of the autocorrelations, that contain scattered waves which are highly sensitive to minute changes in the velocity. At station PATCX we observe seasonal changes in seismic velocity as well as temporary velocity reductions in the frequency range of 4-6 Hz. The seasonal changes can be attributed to thermal stress changes in the subsurface related to variations of the atmospheric temperature. This effect can be modeled well by a sine curve and is subtracted for further analysis of short term variations. Temporary velocity reductions occur at the time of ground shaking usually caused by earthquakes and are followed by a recovery. We present an empirical model that describes the seismic velocity variations based on continuous observations of the local ground acceleration. Our hypothesis is that not only the shaking of earthquakes provokes velocity drops, but any small vibrations continuously induce minor velocity variations that are immediately compensated by healing in the steady state. 
We show that the shaking effect is accumulated over time and best described by
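    The stretching measurement described in this record can be sketched as follows. This is a toy illustration on synthetic data, not the authors' IPOC processing code; the function name `stretching_dvv` and all numbers are invented. With the convention used here, a velocity increase compresses the coda, so the best-matching stretch factor is read directly as dv/v.

    ```python
    import numpy as np

    def stretching_dvv(reference, current, t, epsilons):
        """Grid-search stretching method: find the stretch factor eps for
        which the reference coda, resampled at t*(1 + eps), best matches
        the current trace.  With current(t) = reference(t*(1 + dv/v))
        (a velocity increase compresses arrivals), dv/v equals the best eps."""
        best_eps, best_cc = 0.0, -np.inf
        for eps in epsilons:
            stretched = np.interp(t * (1.0 + eps), t, reference)
            cc = np.corrcoef(stretched, current)[0, 1]
            if cc > best_cc:
                best_eps, best_cc = eps, cc
        return best_eps, best_cc

    # Synthetic check: a decaying coda and a copy compressed by 1 percent
    t = np.linspace(0.0, 10.0, 2001)
    ref = np.sin(2 * np.pi * 5.0 * t) * np.exp(-0.3 * t)
    cur = np.interp(t * 1.01, t, ref)
    dvv, cc = stretching_dvv(ref, cur, t, np.linspace(-0.03, 0.03, 601))
    ```

    In practice the correlation is computed in lapse-time windows of the coda, and the reference is a long-term stack.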

  18. reliability reliability

    African Journals Online (AJOL)

    eobe


  19. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.
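    A minimal sketch of the classical "approximate degradation" idea described above, on simulated data (the linear wear model, threshold `Df` and all numbers are invented, not the article's train-wheel data): fit each unit's degradation path, extrapolate a pseudo failure time, and estimate reliability as the fraction of units surviving a mission time.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated degradation paths: wear w(t) = b*t with unit-specific
    # rate b; a unit "fails" when wear crosses the threshold Df.
    t_obs = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
    rates = rng.lognormal(mean=np.log(0.01), sigma=0.25, size=40)
    paths = rates[:, None] * t_obs + rng.normal(0.0, 0.05, (40, t_obs.size))

    Df = 2.0  # failure threshold on the degradation scale (made up)

    # Fit each unit's path by least squares and extrapolate a pseudo
    # failure time Df / b_hat per unit ...
    b_hat = np.array([np.polyfit(t_obs, y, 1)[0] for y in paths])
    pseudo_failure_times = Df / b_hat

    # ... then reliability at time t is the surviving fraction.
    def reliability(t):
        return np.mean(pseudo_failure_times > t)
    ```

    The Bayesian route instead places priors on the rate distribution and integrates over the posterior rather than plugging in point estimates.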

  20. Agradient velocity, vortical motion and gravity waves in a rotating shallow-water model

    Science.gov (United States)

    Sutyrin Georgi, G.

    2004-07-01

    A new approach to modelling slow vortical motion and fast inertia-gravity waves is suggested within the rotating shallow-water primitive equations with arbitrary topography. The velocity is exactly expressed as a sum of the gradient wind, described by the Bernoulli function, B, and the remaining agradient part, proportional to the velocity tendency. Then the equation for inverse potential vorticity, Q, as well as the momentum equations for the agradient velocity, include the same source of intrinsic flow evolution expressed as a single term J(B, Q), where J is the Jacobian operator (for any steady state J(B, Q) = 0). Two components of the agradient velocity are responsible for the fast inertia-gravity wave propagation, similar to the traditionally used divergence and ageostrophic vorticity. This approach allows for the construction of balance relations for vortical dynamics and potential vorticity inversion schemes even for moderate Rossby and Froude numbers, assuming the characteristic value of |J(B, Q)| to be small. The components of the agradient velocity are used as the fast variables slaved to potential vorticity, which allows for diagnostic estimates of the velocity tendency, direct potential vorticity inversion with second-order accuracy, and the corresponding potential vorticity-conserving agradient velocity balance model (AVBM). The ultimate limitations of constructing the balance are revealed in the form of the ellipticity condition for the balanced tendency of the Bernoulli function, which incorporates both known criteria of formal stability: the gradient wind modified by the characteristic vortical Rossby wave phase speed should be subcritical. The accuracy of the AVBM is illustrated by considering the linear normal modes and coastal Kelvin waves in the f-plane channel with topography.

  1. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    Science.gov (United States)

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

    We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  2. A model of the instantaneous pressure-velocity relationships of the neonatal cerebral circulation.

    Science.gov (United States)

    Panerai, R B; Coughtrey, H; Rennie, J M; Evans, D H

    1993-11-01

    The instantaneous relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), measured with Doppler ultrasound in the anterior cerebral artery, is represented by a vascular waterfall model comprising vascular resistance, compliance, and critical closing pressure. One-minute recordings obtained from 61 low birth weight newborns were fitted to the model using a least-squares procedure with correction for the time delay between the BP and CBFV signals. A sensitivity analysis was performed to study the effects of low-pass filtering (LPF), cutoff frequency, and noise on the estimated parameters of the model. Results indicate excellent fits of the model (F-test), and velocity waveforms reconstructed from the fitted parameters have a mean correlation coefficient of 0.94 with the measured flow velocity tracing (N = 232 epochs). The model developed can be useful for interpreting clinical findings and as a framework for research into cerebral autoregulation.
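    One common linear form of such a waterfall model is V(t) = (BP(t) - CrCP)/R + C dBP/dt, which is linear in the lumped coefficients and so can be fitted by ordinary least squares. The sketch below recovers made-up parameters from synthetic signals; it is an illustration of the fitting idea, not the paper's procedure (which also corrects for BP-CBFV time delay).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 100.0                                   # sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)                 # one-minute epoch
    bp = 45 + 10 * np.sin(2 * np.pi * 2.0 * t)   # synthetic ABP, mmHg

    R_true, C_true, crcp_true = 2.0, 0.05, 20.0  # invented parameters
    dbp = np.gradient(bp, 1 / fs)
    v = (bp - crcp_true) / R_true + C_true * dbp + rng.normal(0, 0.2, t.size)

    # Design matrix [BP, dBP/dt, 1]; solve for the three lumped coefficients,
    # then unpack resistance, compliance and critical closing pressure.
    A = np.column_stack([bp, dbp, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    R_hat = 1.0 / coef[0]
    C_hat = coef[1]
    crcp_hat = -coef[2] * R_hat
    ```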

  3. New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers

    Science.gov (United States)

    Poroseva, Svetlana; Murman, Scott

    2014-11-01

    To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of a statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption on Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In such a way, this closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows including boundary layers. We will address modeling the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of models' validation with DNS data in a channel flow and in a zero-pressure gradient boundary layer over a flat plate will be demonstrated. A part of the material is based upon work supported by NASA under award NNX12AJ61A.

  4. A Dialogue about MCQs, Reliability, and Item Response Modelling

    Science.gov (United States)

    Wright, Daniel B.; Skagerberg, Elin M.

    2006-01-01

    Multiple choice questions (MCQs) are becoming more common in UK psychology departments and the need to assess their reliability is apparent. Having examined the reliability of MCQs in our department we faced many questions from colleagues about why we were examining reliability, what it was that we were doing, and what should be reported when…

  5. Velocity Model Analysis Based on Integrated Well and Seismic Data of East Java Basin

    Science.gov (United States)

    Mubin, Fathul; Widya, Aviandy; Eka Nurcahya, Budi; Nurul Mahmudah, Erma; Purwaman, Indro; Radityo, Aryo; Shirly, Agung; Nurwani, Citra

    2018-03-01

    Time-to-depth conversion is an important process in seismic interpretation for identifying hydrocarbon prospectivity. The main objectives of this research are to minimize the risk of error in geometry and in time-to-depth conversion. Since it uses a large amount of data and covers extensive areas, this research can be classified as regional in scale. The research focused on three time-interpreted horizons: Top Kujung I, Top Ngimbang and Basement, located in the offshore and onshore areas of the East Java basin. These three horizons were selected because they are assumed to be equivalent to the rock formations that have always been the main objectives of oil and gas exploration in the East Java Basin. As additional value, there was no previous work on velocity modeling at regional scale using geological parameters in the East Java basin. Lithology and interval thickness were identified as geological factors that affected the velocity distribution in the East Java Basin. Therefore, a three-layer geological model was generated, defined by lithology: carbonate (layer 1: Top Kujung I), shale (layer 2: Top Ngimbang) and Basement. A statistical method using the three horizons is able to predict the velocity distribution from sparse well data at a regional scale. The average velocity range for Top Kujung I is 400 m/s - 6000 m/s, for Top Ngimbang 500 m/s - 8200 m/s, and for Basement 600 m/s - 8000 m/s. Some velocity anomalies were found in the Madura sub-basin area, caused by geological factors identified as thick shale deposits with high densities. The results of the velocity and depth modeling analysis can be used to define volume ranges deterministically and to build geological models for detailed prospect generation based on geological concepts.
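    The basic depth-conversion step can be sketched as follows: with an average velocity V at a map location, two-way time (TWT) converts as depth = V x TWT / 2, and velocities between sparse wells can be interpolated. The inverse-distance scheme, the helper `idw_velocity` and all numbers below are invented for illustration, not the paper's statistical method.

    ```python
    import numpy as np

    def idw_velocity(x, y, wells, power=2.0):
        """Inverse-distance-weighted interpolation of well average velocities."""
        wx, wy, wv = (np.asarray(a, float) for a in zip(*wells))
        d = np.hypot(wx - x, wy - y)
        if np.any(d < 1e-9):                 # query point sits on a well
            return float(wv[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * wv) / np.sum(w))

    # wells: (x, y, average velocity to the horizon in m/s) -- made-up values
    wells = [(0.0, 0.0, 3000.0), (10000.0, 0.0, 3400.0), (0.0, 10000.0, 3200.0)]
    v = idw_velocity(5000.0, 5000.0, wells)
    twt = 2.4                                # two-way time, seconds
    depth = v * twt / 2.0                    # metres
    ```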

  6. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
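    The circular velocity implied by an axisymmetric NFW halo, the starting point for mock velocity fields like those above, can be sketched from the enclosed mass of the NFW density profile. The halo parameters below are arbitrary illustrative values, not fits to the paper's LSB sample.

    ```python
    import numpy as np

    G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

    def nfw_vcirc(r_kpc, rho0, rs):
        """Circular velocity of an NFW halo, rho(r) = rho0/((r/rs)(1+r/rs)^2):
        M(<r) = 4*pi*rho0*rs^3 * (ln(1+x) - x/(1+x)), x = r/rs."""
        x = r_kpc / rs
        m_enc = 4 * np.pi * rho0 * rs**3 * (np.log(1 + x) - x / (1 + x))
        return np.sqrt(G * m_enc / r_kpc)

    r = np.linspace(0.5, 30.0, 60)           # radii in kpc
    v = nfw_vcirc(r, rho0=1.0e7, rs=10.0)    # made-up halo parameters
    ```

    The curve rises steeply at small radii and peaks near r = 2.16 rs, which is the shape mismatch with slowly rising LSB rotation curves discussed in the abstract.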

  7. Milgrom Relation Models for Spiral Galaxies from Two-Dimensional Velocity Maps

    OpenAIRE

    Barnes, Eric I.; Kosowsky, Arthur; Sellwood, Jerry A.

    2007-01-01

    Using two-dimensional velocity maps and I-band photometry, we have created mass models of 40 spiral galaxies using the Milgrom relation (the basis of modified Newtonian dynamics, or MOND) to complement previous work. A Bayesian technique is employed to compare several different dark matter halo models to Milgrom and Newtonian models. Pseudo-isothermal dark matter halos provide the best statistical fits to the data in a majority of cases, while the Milgrom relation generally provides good fits...

  8. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking.

    Science.gov (United States)

    Wágner, Dorottya S; Ramin, Elham; Szabo, Peter; Dechesne, Arnaud; Plósz, Benedek Gy

    2015-07-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through altering the hindered settling velocity and yield stress parameter. Strikingly, this is not the case for Chloroflexi, occurring in more than double the abundance of M. parvicella, and forming filaments primarily protruding from the flocs. The transient and compression settling parameters show a comparably high variability, and no significant association with filamentous abundance. A two-dimensional, axi-symmetrical computational fluid dynamics (CFD) model was used to assess calibration scenarios to model filamentous bulking. Our results suggest that model predictions can significantly benefit from explicitly accounting for filamentous bulking by calibrating the hindered settling velocity function. Furthermore, accounting for the transient and compression settling velocity in the computational domain is crucial to improve model accuracy when modelling filamentous bulking. However, the case-specific calibration of transient and compression settling parameters as well as yield stress is not necessary, and an average parameter set - obtained under bulking and good settling
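    Hindered settling, the term this study finds most worth calibrating, is often parameterised by the Vesilind function v_s = v0 exp(-r_h X), which becomes a straight-line fit in log space. The batch-test numbers below are invented for illustration; the Vesilind form is one common choice, not necessarily the exact function used by the authors.

    ```python
    import numpy as np

    # Made-up batch settling observations: sludge concentration X (kg/m3)
    # versus measured hindered settling velocity (m/h).
    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    v_obs = np.array([6.1, 4.0, 2.75, 1.8, 1.25, 0.8])

    # ln(v_s) = ln(v0) - r_h * X : linear least squares in log space.
    slope, intercept = np.polyfit(X, np.log(v_obs), 1)
    v0 = np.exp(intercept)   # maximum settling velocity, m/h
    r_h = -slope             # hindered settling parameter, m3/kg

    def v_settling(conc):
        return v0 * np.exp(-r_h * conc)
    ```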

  9. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
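    The Monte Carlo step described above can be sketched as follows: sample the process inputs from assumed distributions, push them through a model of the process, and take reliability as the fraction of samples whose output stays inside specification. The surrogate `melt_depth` and every number here are invented stand-ins, not the paper's calibrated finite-volume model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Assumed input uncertainties (illustrative values only)
    power = rng.normal(200.0, 5.0, n)    # laser power, W
    speed = rng.normal(1000.0, 30.0, n)  # scan speed, mm/s

    def melt_depth(p, v):
        """Toy surrogate: depth scales with linear energy density p/v."""
        return 400.0 * p / v             # microns (made-up scaling)

    depth = melt_depth(power, speed)
    ok = (depth > 70.0) & (depth < 90.0)  # hypothetical spec window, microns
    reliability = ok.mean()               # fraction of in-spec samples
    ```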

  10. A First Layered Crustal Velocity Model for the Western Solomon Islands: Inversion of Measured Group Velocity of Surface Waves using Ambient Noise Cross-Correlation

    Science.gov (United States)

    Ku, C. S.; Kuo, Y. T.; Chao, W. A.; You, S. H.; Huang, B. S.; Chen, Y. G.; Taylor, F. W.; Yih-Min, W.

    2017-12-01

    Two earthquakes, MW 8.1 in 2007 and MW 7.1 in 2010, hit the Western Province of Solomon Islands and caused extensive damage; they also motivated us to set up the first seismic network in this area. During the first phase, eight broadband seismic stations (BBS) were installed around the rupture zone of the 2007 earthquake. With one year of seismic records, we cross-correlated the vertical components of ambient noise recorded at our BBS and calculated Rayleigh-wave group velocity dispersion curves on inter-station paths. A genetic algorithm is applied to invert a one-dimensional crustal velocity model by fitting the averaged dispersion curves. The one-dimensional crustal velocity model consists of two layers over a half-space, representing the upper crust, lower crust, and uppermost mantle, respectively. The resulting thicknesses of the upper and lower crust are 6.4 and 14.2 km, respectively. Shear-wave velocities (VS) of the upper crust, lower crust, and uppermost mantle are 2.53, 3.57 and 4.23 km/s, with VP/VS ratios of 1.737, 1.742 and 1.759, respectively. This first layered crustal velocity model can be used as a preliminary reference for further studies of seismic sources such as earthquake activity and tectonic tremor.
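    A genetic-algorithm inversion of the kind mentioned above can be sketched with a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation). The forward model below is an invented smooth stand-in for a Rayleigh-wave dispersion calculation, used only so the example is self-contained; a real inversion would call a dispersion code.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy forward model: predicted group velocity at each period from two
    # "layer velocities" (v1, v2); longer periods sense the deeper layer.
    periods = np.linspace(1.0, 20.0, 20)

    def forward(v1, v2):
        w = 1.0 - np.exp(-periods / 8.0)
        return (1 - w) * v1 + w * v2

    v_obs = forward(2.5, 3.6)  # synthetic "observed" dispersion curve

    def misfit(m):
        return np.sum((forward(m[0], m[1]) - v_obs) ** 2)

    pop = rng.uniform([1.0, 2.0], [4.0, 5.0], size=(50, 2))  # initial models
    for gen in range(60):
        fit = np.array([misfit(m) for m in pop])
        new = [pop[np.argmin(fit)]]                      # elitism
        while len(new) < len(pop):
            i, j = rng.integers(0, len(pop), 2)
            a = pop[i] if fit[i] < fit[j] else pop[j]    # tournament pick
            r, s = rng.integers(0, len(pop), 2)
            b = pop[r] if fit[r] < fit[s] else pop[s]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b          # blend crossover
            child += rng.normal(0.0, 0.05, 2)            # Gaussian mutation
            new.append(child)
        pop = np.array(new)

    best = pop[np.argmin([misfit(m) for m in pop])]
    ```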

  11. One kind of atmosphere-ocean three layer model for calculating the velocity of ocean current

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Z; Xi, P

    1979-10-01

    A three-layer atmosphere-ocean model is given in this paper to calculate the velocity of ocean current, particularly as a function of the vertical coordinate, taking into consideration (1) the atmospheric effect on the generation of ocean current, (2) a calculated coefficient of eddy viscosity instead of an assumed one, and (3) a sea of varying depth.

  12. Ab initio calculation of the sound velocity of dense hydrogen: implications for models of Jupiter

    NARCIS (Netherlands)

    Alavi, A.; Parrinello, M.; Frenkel, D.

    1995-01-01

    First-principles molecular dynamics simulations were used to calculate the sound velocity of dense hydrogen, and the results were compared with extrapolations of experimental data that currently conflict with either astrophysical models or data obtained from recent global oscillation measurements of

  13. Do Assimilated Drifter Velocities Improve Lagrangian Predictability in an Operational Ocean Model?

    Science.gov (United States)

    2015-05-01

    extended Kalman filter. Molcard et al. (2005) used a statistical method to correlate model and drifter velocities. Taillandier et al. (2006) describe the ... temperature and salinity observations. Trajectory angular differences are also reduced. 1. Introduction: The importance of Lagrangian forecasts was seen ... Temperature, salinity, and sea surface height (SSH, measured along-track by satellite altimeters) observations are typically assimilated in

  14. Analytical models for predicting the ion velocity distributions in JET in the presence of ICRF heating

    International Nuclear Information System (INIS)

    Anderson, A.; Eriksson, L.G.; Lisak, M.

    1986-01-01

    The present report summarizes the work performed within the contract JT4/9008, the aim of which is to derive analytical models for ion velocity distributions resulting from ICRF heating on JET. The work has been performed over a two-year-period ending in August 1986 and has involved a total effort of 2.4 man years. (author)

  15. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, J [Cardiovascular Research Group Physics, University of New England, Armidale, NSW 2351 (Australia); Buick, J M [Department of Mechanical and Design Engineering, University of Portsmouth, Anglesea Building, Anglesea Road, Portsmouth PO1 3DJ (United Kingdom)

    2008-10-21

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  16. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    International Nuclear Information System (INIS)

    Boyd, J; Buick, J M

    2008-01-01

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  17. 3D Crustal Velocity Structure Model of the Middle-eastern North China Craton

    Science.gov (United States)

    Duan, Y.; Wang, F.; Lin, J.; Wei, Y.

    2017-12-01

    Lithosphere thinning and destruction in the middle-eastern North China Craton (NCC), a region susceptible to strong earthquakes, is one of the research hotspots in solid earth science. Up to 42 wide-angle reflection/refraction deep seismic sounding (DSS) profiles have been completed in the middle-eastern NCC. We collect all the 2D profiling results, perform gridding of the velocity and interface depth data, and build a 3D crustal velocity structure model for the middle-eastern NCC, named HBCrust1.0, using the Kriging interpolation method. In this model, four layers are divided by three interfaces: G is the interface between the sedimentary cover and the crystalline crust, with velocities of 5.0-5.5 km/s above and 5.8-6.0 km/s below. C is the interface between the upper and lower crust, with a velocity jump from 6.2-6.4 km/s to 6.5-6.6 km/s. M is the interface between the crust and upper mantle, with velocities of 6.7-7.0 km/s at the bottom of the crust and 7.9-8.0 km/s at the top of the mantle. Our results show that first-arrival times calculated from HBCrust1.0 fit the observations well. They also demonstrate that the upper crust is the main seismogenic layer, and that the brittle-ductile transition occurs at depths near interface C. The depth of the Moho varies beneath the source area of the Tangshan earthquake, and a low-velocity structure is found to extend from the source area to the lower crust. Based on these observations, it can be inferred that the stress accumulation responsible for the Tangshan earthquake may have been closely related to the migration and deformation of mantle materials. Comparisons of the average velocities of the whole crust and of the upper and lower crust show that the average velocity of the lower crust under the central part of the North China Basin (NCB), in the east of the craton, is obviously higher than the regional average; this high velocity probably results from long-term underplating of mantle magma.
This research is funded by the Natural Science

  18. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components or redundant schemes (to obtain more reliability, strength, more power, etc.). Within the MAX project, a reliability model of the SNS (Spallation Neutron Source) LINAC has been developed and an analysis of the accelerator systems' reliability has been performed using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with the SNS operational data. Results and conclusions are presented in this paper, oriented to identify design weaknesses and provide recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.

  19. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...
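    The Barabasi-Albert growth process itself is easy to sketch: each new node attaches to m existing nodes chosen with probability proportional to degree, which yields the scale-free topology whose attack tolerance is at issue. The comparison below (targeted hub removal versus random removal, measured by giant-component size) is a generic robustness illustration, not the power-system reliability index computed in the paper.

    ```python
    import random
    from collections import defaultdict

    random.seed(3)

    def barabasi_albert(n, m):
        """Grow a scale-free graph: each new node links to m nodes drawn
        from a degree-weighted list (the standard preferential-attachment trick)."""
        targets = list(range(m))  # seed nodes
        repeated = []             # node ids repeated once per incident edge
        edges = set()
        for new in range(m, n):
            for t in set(targets):
                edges.add((new, t))
            repeated.extend(targets)
            repeated.extend([new] * m)
            targets = random.sample(repeated, m)
        return edges

    def giant_component_size(nodes, edges):
        adj = defaultdict(set)
        for a, b in edges:
            adj[a].add(b); adj[b].add(a)
        seen, best = set(), 0
        for s in nodes:
            if s in seen:
                continue
            stack, comp = [s], 0
            seen.add(s)
            while stack:
                u = stack.pop(); comp += 1
                for nb in adj[u] - seen:
                    seen.add(nb); stack.append(nb)
            best = max(best, comp)
        return best

    n = 2000
    edges = barabasi_albert(n, 2)
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1; deg[b] += 1

    hubs = sorted(deg, key=deg.get, reverse=True)[: n // 20]  # top 5% hubs
    rand = random.sample(sorted(deg), n // 20)                # 5% at random

    def after_removal(removed):
        removed = set(removed)
        kept = [(a, b) for a, b in edges if a not in removed and b not in removed]
        return giant_component_size(set(deg) - removed, kept)

    g_targeted = after_removal(hubs)
    g_random = after_removal(rand)
    ```

    The targeted attack fragments the network far more than random failure, the hallmark of scale-free topologies that motivates using this model for grid reliability.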

  20. Modelling of two-phase flow based on separation of the flow according to velocity

    Energy Technology Data Exchange (ETDEWEB)

    Narumo, T. [VTT Energy, Espoo (Finland). Nuclear Energy

    1997-12-31

    The thesis concentrates on the development work of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows, with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated equally with the conventional model because they are distributed very accurately according to the phases, but momentum fluctuations follow better the flow velocity. Submodels for non-uniform transverse profiles of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic in any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the used experimental data on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved applicable to describing two-phase flow physically correctly, because both the dynamics and the steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, taking place in versatile form e.g. in nuclear reactors. 45 refs. The thesis also includes five previous publications by the author.

  1. Modelling of two-phase flow based on separation of the flow according to velocity

    International Nuclear Information System (INIS)

    Narumo, T.

    1997-01-01

    The thesis concentrates on the development work of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows, with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated equally with the conventional model because they are distributed very accurately according to the phases, but momentum fluctuations follow better the flow velocity. Submodels for non-uniform transverse profiles of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic in any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the used experimental data on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved applicable to describing two-phase flow physically correctly, because both the dynamics and the steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, taking place in versatile form e.g. in nuclear reactors

  2. Modification of Spalart-Allmaras model with consideration of turbulence energy backscatter using velocity helicity

    International Nuclear Information System (INIS)

    Liu, Yangwei; Lu, Lipeng; Fang, Le; Gao, Feng

    2011-01-01

    The correlation between the velocity helicity and the energy backscatter is demonstrated in a DNS case of 256³-grid homogeneous isotropic decaying turbulence. The helicity is then proposed as a means of improving turbulence models and SGS models. The Spalart-Allmaras (SA) turbulence model is modified with the helicity to take account of the energy backscatter, which is significant in the region of corner separation in compressors. By comparing the numerical results with experiments, it can be concluded that the helicity modification of the SA model appropriately represents the energy backscatter and greatly improves the predictive accuracy for simulating corner separation flow in compressors. -- Highlights: → We study the correlation between the velocity helicity and the energy backscatter. → The Spalart-Allmaras turbulence model is modified with the velocity helicity. → The modified model is employed to simulate corner separation in a compressor cascade. → The modification greatly improves the accuracy of predicting corner separation. → The helicity can represent the energy backscatter in turbulence and SGS models.
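    The velocity helicity driving the modification is the pointwise dot product of velocity and vorticity, h = u · (∇ × u). A minimal sketch of computing it on a uniform grid with NumPy (the grid, spacing, and function name are illustrative, not from the paper):

```python
import numpy as np

def velocity_helicity(u, v, w, dx=1.0):
    """Pointwise velocity helicity h = U . (curl U) on a uniform 3-D grid."""
    # Partial derivatives via central differences (numpy.gradient).
    du_dy, du_dz = np.gradient(u, dx, axis=1), np.gradient(u, dx, axis=2)
    dv_dx, dv_dz = np.gradient(v, dx, axis=0), np.gradient(v, dx, axis=2)
    dw_dx, dw_dy = np.gradient(w, dx, axis=0), np.gradient(w, dx, axis=1)
    # Vorticity components omega = curl(U).
    om_x = dw_dy - dv_dz
    om_y = du_dz - dw_dx
    om_z = dv_dx - du_dy
    return u * om_x + v * om_y + w * om_z
```

    A quick sanity check: a rigid rotation (u, v, w) = (−y, x, 0) has vorticity (0, 0, 2) everywhere, which is orthogonal to the velocity, so its helicity vanishes.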

  3. Numerical Material Model for Composite Laminates in High-Velocity Impact Simulation

    Directory of Open Access Journals (Sweden)

    Tao Liu

    Full Text Available Abstract A numerical material model for composite laminates was developed and integrated into a nonlinear dynamic explicit finite element program as a material user subroutine. The model, which couples a nonlinear equation of state (EOS), is a macro-mechanical model used to simulate the major mechanical behaviors of composite laminates under high-velocity impact conditions. The basic theoretical framework of the developed material model is introduced. An inverse flyer plate simulation was conducted, demonstrating the advantage of the developed model in characterizing the nonlinear shock response. The developed model and its implementation were validated on a classic ballistic impact problem, a projectile impacting a Kevlar29/Phenolic laminate. The failure modes and ballistic limit velocity were analyzed, and good agreement with analytical and experimental results was achieved. The computational capability of the model for Kevlar/Epoxy laminates with different architectures, i.e. plain-woven and cross-plied laminates, was further evaluated, and the residual velocity curves and damage cone were accurately predicted.

  4. Minimum 1D P wave velocity model for the Cordillera Volcanica de Guanacaste, Costa Rica

    International Nuclear Information System (INIS)

    Araya, Maria C.; Linkimer, Lepolt; Taylor, Waldo

    2016-01-01

    A minimum 1D velocity model is derived from 475 local earthquakes recorded by the Observatorio Vulcanologico y Sismologico Arenal Miravalles (OSIVAM) for the Cordillera Volcanica de Guanacaste between January 2006 and July 2014. The model consists of six layers from the surface down to a depth of 80 km, with velocities varying between 3.96 and 7.79 km/s. The station corrections vary between -0.28 and 0.45 and show a trend of positive values on the volcanic arc and negative values on the forearc, in accordance with the crustal thickness. The relocated earthquakes form three main groups of epicenters that could be associated with activity on inferred faults. The minimum 1D velocity model provides a simplified picture of the crustal structure and aims to contribute to improving the routine earthquake locations performed by OSIVAM. (author) [es
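    A minimum 1D model of this kind is just a piecewise-constant velocity-depth function. A sketch of such a lookup in Python; the abstract gives only the velocity range and layer count, so the layer tops and intermediate velocities below are illustrative, not the published OSIVAM values:

```python
import bisect

# Hypothetical six-layer minimum 1-D model (layer tops in km, Vp in km/s).
# Only the end-member velocities (3.96 and 7.79 km/s) come from the abstract.
LAYER_TOPS = [0.0, 3.0, 10.0, 20.0, 35.0, 55.0]    # km
LAYER_VP   = [3.96, 4.80, 5.70, 6.40, 7.10, 7.79]  # km/s

def vp_at_depth(z_km):
    """Return the P-wave velocity of the layer containing depth z_km."""
    if z_km < 0.0 or z_km > 80.0:
        raise ValueError("model is defined from 0 to 80 km depth")
    # Index of the deepest layer top at or above z_km.
    i = bisect.bisect_right(LAYER_TOPS, z_km) - 1
    return LAYER_VP[i]
```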

  5. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Here, flat models with two components of mass density are examined, where one component is smoothly distributed, and the large-scale (≥10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  6. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Al single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm, verify the reliability model and computer algorithm, and model stator vanes for rig tests. Thus establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short term grant is the continued creation of enabling technologies for the macroanalysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from the new high technology materials.

  7. Spectral analysis of surface waves method to assess shear wave velocity within centrifuge models

    OpenAIRE

    MURILLO, Carol Andrea; THOREL, Luc; CAICEDO, Bernardo

    2009-01-01

    The method of the spectral analysis of surface waves (SASW) is tested out on reduced scale centrifuge models, with a specific device, called the mini Falling Weight, developed for this purpose. Tests are performed on layered materials made of a mixture of sand and clay. The shear wave velocity VS determined within the models using the SASW is compared with the laboratory measurements carried out using the bender element test. The results show that the SASW technique applied to centrifuge test...

  8. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulation data show good agreement. copyright 1996 American Institute of Physics
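    The spectral decomposition described above amounts to discretizing the integral operator with kernel R(y, y′) on a grid and taking its eigenpairs (the POD construction). A minimal sketch under the assumption of a uniform grid; the function name and normalization convention are ours:

```python
import numpy as np

def pod_modes(R, dy):
    """Eigen-decompose a symmetric two-point correlation kernel R(y, y').

    Discretizes the integral operator on a uniform grid of spacing dy
    (trapezoid-free quadrature, for brevity) and returns eigenvalues and
    eigenfunctions sorted by descending eigenvalue.
    """
    w, v = np.linalg.eigh(R * dy)        # symmetric kernel -> eigh
    order = np.argsort(w)[::-1]          # descending eigenvalues
    return w[order], v[:, order] / np.sqrt(dy)  # L2-normalized modes
```

    As a check, a rank-one kernel R = φ(y)φ(y′) with φ normalized in L2 has a single unit eigenvalue with eigenfunction φ.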

  9. Velocity Model for CO2 Sequestration in the Southeastern United States Atlantic Continental Margin

    Science.gov (United States)

    Ollmann, J.; Knapp, C. C.; Almutairi, K.; Almayahi, D.; Knapp, J. H.

    2017-12-01

    The sequestration of carbon dioxide (CO2) is emerging as a major player in offsetting anthropogenic greenhouse gas emissions. With 40% of the United States' anthropogenic CO2 emissions originating in the southeast, characterizing potential CO2 sequestration sites is vital to reducing the United States' emissions. The goal of this research project, funded by the Department of Energy (DOE), is to estimate the CO2 storage potential for the Southeastern United States Atlantic Continental Margin. Previous studies find storage potential in the Atlantic continental margin. Up to 16 Gt and 175 Gt of storage potential are estimated for the Upper Cretaceous and Lower Cretaceous formations, respectively. Considering 2.12 Mt of CO2 are emitted per year by the United States, substantial storage potential is present in the Southeastern United States Atlantic Continental Margin. In order to produce a time-depth relationship, a velocity model must be constructed. This velocity model is created using previously collected seismic reflection, refraction, and well data in the study area. Seismic reflection horizons were extrapolated using well log data from the COST GE-1 well. An interpolated seismic section was created using these seismic horizons. A velocity model will be made using P-wave velocities from seismic reflection data. Once the time-depth conversion is complete, the depths of stratigraphic units in the seismic refraction data will be compared to the newly assigned depths of the seismic horizons. With a lack of well control in the study area, the addition of stratigraphic unit depths from 171 seismic refraction recording stations provides adequate data to tie to the depths of picked seismic horizons. Using this velocity model, the seismic reflection data can be presented in depth in order to estimate the thickness and storage potential of CO2 reservoirs in the Southeastern United States Atlantic Continental Margin.
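    The time-depth conversion at the heart of such a velocity model is a cumulative sum of interval velocities over two-way travel time. A hedged sketch of that step; the sampling interval, velocities, and function name are illustrative, not values from the COST GE-1 well:

```python
def twt_to_depth(twt_s, interval_vel_mps, dt_s):
    """Convert two-way travel time (s) to depth (m).

    interval_vel_mps[i] is the P-wave interval velocity over the i-th
    time sample of width dt_s; in practice these would come from the
    velocity model built from the reflection data.
    """
    depth, t = 0.0, 0.0
    for v in interval_vel_mps:
        step = min(dt_s, max(0.0, twt_s - t))
        depth += v * step / 2.0  # one-way distance = v * (two-way time) / 2
        t += dt_s
        if t >= twt_s:
            break
    return depth
```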

  10. Development of a State-Wide 3-D Seismic Tomography Velocity Model for California

    Science.gov (United States)

    Thurber, C. H.; Lin, G.; Zhang, H.; Hauksson, E.; Shearer, P.; Waldhauser, F.; Hardebeck, J.; Brocher, T.

    2007-12-01

    We report on progress towards the development of a state-wide tomographic model of the P-wave velocity for the crust and uppermost mantle of California. The dataset combines first arrival times from earthquakes and quarry blasts recorded on regional network stations and travel times of first arrivals from explosions and airguns recorded on profile receivers and network stations. The principal active-source datasets are Geysers-San Pablo Bay, Imperial Valley, Livermore, W. Mojave, Gilroy-Coyote Lake, Shasta region, Great Valley, Morro Bay, Mono Craters-Long Valley, PACE, S. Sierras, LARSE 1 and 2, Loma Prieta, BASIX, San Francisco Peninsula and Parkfield. Our beta-version model is coarse (uniform 30 km horizontal and variable vertical gridding) but is able to image the principal features in previous separate regional models for northern and southern California, such as the high-velocity subducting Gorda Plate, upper to middle crustal velocity highs beneath the Sierra Nevada and much of the Coast Ranges, the deep low-velocity basins of the Great Valley, Ventura, and Los Angeles, and a high-velocity body in the lower crust underlying the Great Valley. The new state-wide model has improved areal coverage compared to the previous models, and extends to greater depth due to the data at large epicentral distances. We plan a series of steps to improve the model. We are enlarging and calibrating the active-source dataset as we obtain additional picks from investigators and perform quality control analyses on the existing and new picks. We will also be adding data from more quarry blasts, mainly in northern California, following an identification and calibration procedure similar to Lin et al. (2006). Composite event construction (Lin et al., in press) will be carried out for northern California for use in conventional tomography.
A major contribution of the state-wide model is the identification of earthquakes yielding arrival times at both the Northern California Seismic

  11. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single-direction angular rate gyro output is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and then the sliding window Viterbi algorithm is used to decode the gait. Walking data were collected from eight subjects walking along the same route at three different speeds, and the leave-one-subject-out cross validation method was used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity intervals of different walking gaits. The localization experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% compared to the angular rate threshold method.
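    The Viterbi decoding step above can be illustrated with a generic log-domain Viterbi pass. This is a hedged toy sketch: two states instead of the paper's four, and hand-made log-probabilities in place of the trained Gaussian mixture emissions:

```python
import math

def viterbi(log_emis, log_trans, log_init):
    """Most likely state path for one observation sequence (log domain).

    log_emis[t][s]: log-likelihood of observation t under state s;
    log_trans[s][s2]: log transition probabilities; log_init[s]: log priors.
    """
    n_states, T = len(log_init), len(log_emis)
    score = [log_init[s] + log_emis[0][s] for s in range(n_states)]
    back = []
    for t in range(1, T):
        ptr, new = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: score[p] + log_trans[p][s])
            ptr.append(best)
            new.append(score[best] + log_trans[best][s] + log_emis[t][s])
        back.append(ptr)
        score = new
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```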

  12. Shear-wave velocity models and seismic sources in Campanian volcanic areas: Vesuvius and Phlegraean fields

    Energy Technology Data Exchange (ETDEWEB)

    Guidarelli, M; Zille, A; Sarao, A [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Natale, M; Nunziata, C [Dipartimento di Geofisica e Vulcanologia, Universita di Napoli ' Federico II' , Napoli (Italy); Panza, G F [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2006-12-15

    This chapter summarizes a comparative study of shear-wave velocity models and seismic sources in the Campanian volcanic areas of Vesuvius and Phlegraean Fields. These velocity models were obtained through the nonlinear inversion of surface-wave tomography data, using as a priori constraints the relevant information available in the literature. Local group velocity data were obtained by means of frequency-time analysis for periods between 0.3 and 2 s and were combined with group velocity data for periods between 10 and 35 s from regional events located in the Italian peninsula and bordering areas, and with two-station phase velocity data for periods between 25 and 100 s. To invert the Rayleigh wave dispersion curves, we applied the nonlinear inversion method called hedgehog and retrieved average models for the first 30-35 km of the lithosphere, with the lower part of the upper mantle kept fixed on the basis of existing regional models. A feature common to the two volcanic areas is a low shear velocity layer centered at a depth of about 10 km, while outside the cone and along a path in the northeastern part of the Vesuvius area this layer is absent. This low velocity can be associated with the presence of partial melting and, therefore, may represent a quite diffuse crustal magma reservoir, fed by a deeper one that is regional in character and located in the uppermost mantle. The study of the seismic source in terms of the moment tensor is suitable for investigating physical processes within a volcano; indeed, its components (double couple, compensated linear vector dipole, and volumetric) can be related to the movements of magma and fluids within the volcanic system. Although for many recent earthquake events the percentage of the double-couple component is high, our results also show the presence of significant non-double-couple components in both volcanic areas. (author)

  13. Regional three-dimensional seismic velocity model of the crust and uppermost mantle of northern California

    Science.gov (United States)

    Thurber, C.; Zhang, H.; Brocher, T.; Langenheim, V.

    2009-01-01

    We present a three-dimensional (3D) tomographic model of the P wave velocity (Vp) structure of northern California. We employed a regional-scale double-difference tomography algorithm that incorporates a finite-difference travel time calculator and spatial smoothing constraints. Arrival times from earthquakes and travel times from controlled-source explosions, recorded at network and/or temporary stations, were inverted for Vp on a 3D grid with horizontal node spacing of 10 to 20 km and vertical node spacing of 3 to 8 km. Our model provides an unprecedented, comprehensive view of the regional-scale structure of northern California, putting many previously identified features into a broader regional context, improving the resolution of a number of them, and revealing new features, especially in the middle and lower crust, that have not been reported before. Examples of the former include the complex subducting Gorda slab, a steep, deeply penetrating fault beneath the Sacramento River Delta, crustal low-velocity zones beneath Geysers-Clear Lake and Long Valley, and the high-velocity ophiolite body underlying the Great Valley. Examples of the latter include mid-crustal low-velocity zones beneath Mount Shasta and north of Lake Tahoe. Copyright 2009 by the American Geophysical Union.

  14. Towards a new tool to develop a 3-D shear-wave velocity model from converted waves

    Science.gov (United States)

    Colavitti, Leonardo; Hetényi, György

    2017-04-01

    The main target of this work is to develop a new method that exploits converted waves to construct a fully 3-D shear-wave velocity model of the crust. A reliable 3-D model is very important in Earth sciences because geological structures may vary significantly in their lateral dimension. In particular, shear waves provide valuable complementary information with respect to P waves because they usually guarantee a much better correlation with rock density and mechanical properties, reducing interpretation ambiguities. It is therefore worthwhile to develop a new technique to improve structural images and to describe different lithologies in the crust. In this study we start from the analysis of receiver functions (RF, Langston, 1977), which are nowadays widely used in structural investigations based on passive seismic experiments to map Earth discontinuities at depth. The RF technique is also commonly used to invert for velocity structure beneath single stations. Here, we plan to combine two strengths of the RF method: shear-wave velocity inversion and dense arrays. Starting from a simple 3-D forward model, synthetic RFs are obtained by extracting the structure along a ray to match the observed data. During the inversion, thanks to a dense station network, we aim to build a multi-layer crustal model for shear-wave velocity. The initial model should be kept simple to ensure that the inversion is not unduly influenced by the depth and velocity constraints posed at the beginning. RF inversion is a complex problem because the amplitude and arrival time of the different phases depend in a nonlinear way on the depth of the interfaces and on the characteristics of the velocity structure. 
The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999a, b), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter

  15. Approach for an integral power transformer reliability model

    NARCIS (Netherlands)

    Schijndel, van A.; Wouters, P.A.A.F.; Steennis, E.F.; Wetzer, J.M.

    2012-01-01

    In electrical power transmission and distribution networks power transformers represent a crucial group of assets both in terms of reliability and investments. In order to safeguard the required quality at acceptable costs, decisions must be based on a reliable forecast of future behaviour. The aim

  16. Wireless Channel Modeling Perspectives for Ultra-Reliable Communications

    DEFF Research Database (Denmark)

    Eggers, Patrick Claus F.; Popovski, Petar

    2018-01-01

    Ultra-Reliable Communication (URC) is one of the distinctive features of the upcoming 5G wireless communication. The level of reliability, going down to packet error rates (PER) of $10^{-9}$, should be sufficiently convincing in order to remove cables in an industrial setting or provide remote co...

  17. Models for assessing the relative phase velocity in a two-phase flow. Status report

    International Nuclear Information System (INIS)

    Schaffrath, A.; Ringel, H.

    2000-06-01

    Knowledge of the slip or drift flux in two-phase flow is necessary for several technical processes (e.g. two-phase pressure losses, heat and mass transfer in steam generators and condensers, dwell period in chemical reactors, moderation effectiveness of the two-phase coolant in BWRs). In the following, the most important models for two-phase flow with different phase velocities (e.g. slip or drift models, the analogy between pressure loss and steam quality, ε-ε models, and models for calculating the void distribution in quiescent fluids) are classified, described, and worked up for a subsequent comparison with our own experimental data. (orig.)
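    As an illustration of the drift-flux family of models mentioned above, the classic Zuber-Findlay relation links the void fraction to the volumetric fluxes of the two phases. A minimal sketch; the default distribution parameter and drift velocity are generic textbook values, not fitted to the data discussed here:

```python
def void_fraction_drift_flux(j_g, j_l, c0=1.13, v_gj=0.25):
    """Void fraction from the Zuber-Findlay drift-flux relation.

    alpha = j_g / (C0 * (j_g + j_l) + v_gj), where j_g and j_l are the
    gas and liquid volumetric fluxes (m/s), C0 the distribution
    parameter, and v_gj the drift velocity (m/s).
    """
    j = j_g + j_l  # total volumetric flux (m/s)
    return j_g / (c0 * j + v_gj)
```

    With C0 = 1 and zero drift velocity the relation collapses to the homogeneous (no-slip) void fraction; C0 > 1 or a positive drift velocity lowers the predicted void fraction below the homogeneous value.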

  18. Modeling Atmospheric Turbulence via Rapid Distortion Theory: Spectral Tensor of Velocity and Buoyancy

    DEFF Research Database (Denmark)

    Chougule, Abhijit S.; Mann, Jakob; Kelly, Mark C.

    2017-01-01

    A spectral tensor model is presented for turbulent fluctuations of wind velocity components and temperature, assuming uniform vertical gradients in mean temperature and mean wind speed. The model is built upon rapid distortion theory (RDT) following studies by Mann and by Hanazaki and Hunt, using the eddy lifetime parameterization of Mann to make the model stationary. The buoyant spectral tensor model is driven via five parameters: the viscous dissipation rate epsilon, length scale of energy-containing eddies L, a turbulence anisotropy parameter Gamma, gradient Richardson number (Ri) representing

  19. Three-dimensional models of P wave velocity and P-to-S velocity ratio in the southern central Andes by simultaneous inversion of local earthquake data

    Science.gov (United States)

    Graeber, Frank M.; Asch, Günter

    1999-09-01

    The PISCO'94 (Proyecto de Investigación Sismológica de la Cordillera Occidental, 1994) seismological network of 31 digital broad band and short-period three-component seismometers was deployed in northern Chile between the Coastal Cordillera and the Western Cordillera. More than 5300 local seismic events were observed in a 100-day period. A subset of high-quality P and S arrival time data was used to invert simultaneously for hypocenters and velocity structure. Additional data from two other networks in the region could be included. The velocity models show a number of prominent anomalies, outlining an extremely thickened crust (about 70 km) beneath the forearc region, an anomalous crustal structure beneath the recent magmatic arc (Western Cordillera) characterized by very low velocities, and a high-velocity slab. A region of increased Vp/Vs ratio has been found directly above the Wadati-Benioff zone, which might be caused by hydration processes. A zone of lower than average velocities and a high Vp/Vs ratio might correspond to the asthenospheric wedge. The upper edge of the Wadati-Benioff zone is sharply defined by intermediate-depth hypocenters, while evidence for a double seismic zone can hardly be seen. Crustal events between the Precordillera and the Western Cordillera have been observed for the first time and are mainly located in the vicinity of the Salar de Atacama, down to depths of about 40 km.

  20. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single model ensembles (SMEs), are investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some of them. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)
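    The rank histogram approach used above is straightforward to compute: each observation is ranked among the corresponding ensemble members, and the ranks are histogrammed. A minimal sketch (the array shapes and function name are ours; the flat/U-shape/dome reading is the standard interpretation):

```python
import numpy as np

def rank_histogram(ensemble, observations):
    """Histogram of observation ranks within an ensemble.

    ensemble: array (n_cases, n_members); observations: array (n_cases,).
    A flat histogram suggests observations are statistically
    indistinguishable from ensemble members (a reliable ensemble); a
    U-shape indicates under-dispersion, a dome shape over-dispersion.
    """
    # Rank = number of members strictly below each observation.
    ranks = (ensemble < observations[:, None]).sum(axis=1)
    n_members = ensemble.shape[1]
    return np.bincount(ranks, minlength=n_members + 1)
```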

  1. Models and data requirements for human reliability analysis

    International Nuclear Information System (INIS)

    1989-03-01

    It has been widely recognised for many years that the safety of nuclear power generation depends heavily on the human factors related to plant operation. This has been confirmed by the accidents at Three Mile Island and Chernobyl. Both these cases revealed how human actions can defeat engineered safeguards, and the need for special operator training to cover the possibility of unexpected plant conditions. The importance of the human factor also stands out in the analysis of abnormal events and in insights from probabilistic safety assessments (PSAs), which reveal a large proportion of cases having their origin in faulty operator performance. A consultants' meeting, organized jointly by the International Atomic Energy Agency (IAEA) and the International Institute for Applied Systems Analysis (IIASA), was held at IIASA in Laxenburg, Austria, December 7-11, 1987, with the aim of reviewing existing models used in Probabilistic Safety Assessment (PSA) for Human Reliability Analysis (HRA) and of identifying the data required. The report collects both the contributions offered by the members of the Expert Task Force and the findings of the extensive discussions that took place during the meeting. Refs, figs and tabs

  2. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  3. Engineering model for low-velocity impacts of multi-material cylinder on a rigid boundary

    Directory of Open Access Journals (Sweden)

    Delvare F.

    2012-08-01

    Full Text Available Modern ballistic problems involve the impact of multi-material projectiles. To model the impact phenomenon, different levels of analysis can be developed: empirical, engineering, and simulation models. Engineering models are important because they allow an understanding of the physics of the impacting materials, although some simplifications are assumed to reduce the number of variables. For example, engineering models have been developed to approximate the behavior of a single cylinder impacting a rigid surface; the cylinder deformation, however, depends on its instantaneous velocity. In this work, an analytical model is proposed for the behavior of a single cylinder, composed of two different metal cylinders, impacting a rigid surface. The materials are modeled as rigid-perfectly plastic. The system of differential equations is solved using a numerical Runge-Kutta method, and the results are compared with computational simulations using the AUTODYN 2D hydrocode. Good agreement between the engineering model and the simulation results was found. The model is limited by the transition impact velocity at the interface point, given by the hydrodynamic pressure proposed by Tate.
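    The Runge-Kutta integration at the core of such an engineering model can be sketched as follows. This is an illustrative one-degree-of-freedom reduction (a single rigid-perfectly-plastic plug decelerated by a constant yield force at the wall), not the authors' two-cylinder system; the material constants are assumed, typical steel-like values:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Assumed, illustrative material constants (not from the paper).
RHO, L, Y = 7800.0, 0.05, 350e6  # density kg/m^3, length m, yield stress Pa

def f(t, y):
    s, v = y  # crush distance (m), velocity (m/s)
    # Constant deceleration Y/(RHO*L) while the plug is still moving.
    return [v, -Y / (RHO * L)] if v > 0 else [0.0, 0.0]

y, t, h = [0.0, 200.0], 0.0, 1e-6  # impact at 200 m/s
while y[1] > 0:
    y = rk4_step(f, t, y, h)
    t += h
```

    For this constant-deceleration reduction the final crush distance has the closed form v0²·ρ·L/(2Y), which the integration reproduces.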

  4. A vorticity transport model to restore spatial gaps in velocity data

    Science.gov (United States)

    Ameli, Siavash; Shadden, Shawn

    2017-11-01

    Often measurements of velocity data do not have full spatial coverage in the probed domain or near boundaries. These gaps can be due to missing measurements or masked regions of corrupted data. They confound interpretation and are problematic when the data is used to compute Lagrangian or trajectory-based analyses. Various techniques have been proposed to overcome coverage limitations in velocity data, such as unweighted least-squares fitting, empirical orthogonal function analysis, variational interpolation, and boundary modal analysis. In this talk, we present a vorticity transport PDE to reconstruct regions of missing velocity vectors. The transport model involves both nonlinear anisotropic diffusion and advection. This approach is shown to preserve the main features of the flow even in cases of large gaps, and the reconstructed regions are continuous up to second order. We illustrate results for high-frequency radar (HFR) measurements of ocean surface currents, as this is a common application with limited coverage. We demonstrate that the error of the method is on the same order as the error of the original velocity data. In addition, we have developed a web-based gateway for data restoration, and we will demonstrate a practical application using available data. This work is supported by the NSF Grant No. 1520825.
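    As a much-simplified illustration of PDE-based gap filling, the sketch below relaxes the masked entries of a single velocity component with plain isotropic Laplace diffusion; the model described above additionally includes advection and anisotropy, which are omitted here:

```python
import numpy as np

def fill_gaps(field, mask, n_iter=500):
    """Fill masked entries of a 2-D velocity component by diffusion.

    mask is True where data is missing; known values stay fixed, and the
    masked entries relax toward the average of their four neighbours
    (Jacobi iteration, periodic wrap at the edges via np.roll).
    """
    u = np.where(mask, field[~mask].mean(), field).astype(float)
    for _ in range(n_iter):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u[mask] = nb[mask]  # update only the unknown points
    return u
```

    Because the update is the discrete Laplace equation, a single interior gap in a linearly varying field is recovered exactly.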

  5. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking

    DEFF Research Database (Denmark)

    Wágner, Dorottya Sarolta; Ramin, Elham; Szabo, Peter

    2015-01-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through
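    Hindered settling velocity functions of the kind calibrated above are commonly of the Vesilind exponential form, v_s = v0·exp(−r_h·X). A sketch with generic literature-style parameter values, not the values estimated in this campaign; bulking sludge with abundant M. parvicella would typically show a lower maximum settling velocity:

```python
import math

def vesilind_settling_velocity(x_gpl, v0=8.0, r_h=0.45):
    """Hindered settling velocity (m/h) via the Vesilind exponential law.

    x_gpl: sludge solids concentration in g/L; v0 is the maximum
    settling velocity (m/h) and r_h the hindered settling parameter
    (L/g). Defaults are illustrative, not fitted values.
    """
    return v0 * math.exp(-r_h * x_gpl)
```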

  6. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology is switching among possible connections which are governed by a Markovian chain. We give the quantized relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying Lyapunov method, sufficient conditions of network reliability are proposed for such topology switching networks with constant or varying data acquisition rate. With the conditions satisfied, the quantity of data transported over wireless network node will not exceed node capacity such that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find its application in the fields of network design and topology control.
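
    The record's Lyapunov-based sufficient conditions are not reproduced here, but the topology-switching idea can be illustrated with a toy two-state Markov chain, reading "reliability" loosely as the long-run fraction of time spent in the connected topology. The transition probabilities below are hypothetical, not from the paper.

```python
# Toy two-state Markov chain for a topology-switching network.
# State 0 = fully connected topology, state 1 = degraded topology.
# Transition probabilities are illustrative, not from the paper.

def stationary(P, iters=10_000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.95, 0.05],   # connected -> {connected, degraded}
     [0.40, 0.60]]   # degraded  -> {connected, degraded}

pi = stationary(P)
availability = pi[0]  # long-run fraction of time the topology is connected
```

    For this chain the detailed-balance relation 0.05·pi0 = 0.40·pi1 gives pi0 = 8/9, i.e. the network is in its connected topology about 89% of the time.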

  7. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    Science.gov (United States)

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  8. Modeling, implementation, and validation of arterial travel time reliability.

    Science.gov (United States)

    2013-11-01

    Previous research funded by Florida Department of Transportation (FDOT) developed a method for estimating travel time reliability for arterials. This method was not initially implemented or validated using field data. This project evaluated and r...

  9. Study of redundant Models in reliability prediction of HXMT's HES

    International Nuclear Information System (INIS)

    Wang Jinming; Liu Congzhan; Zhang Zhi; Ji Jianfeng

    2010-01-01

    First, two redundant equipment structures for HXMT's HES are proposed: block backup and dual-system cold redundancy. Reliability predictions are then made using the parts count method, and the two proposals are compared and analyzed. The conclusion is that a redundant equipment structure with block backup offers higher reliability and a longer service life. (authors)

  10. Hindrance Velocity Model for Phase Segregation in Suspensions of Poly-dispersed Randomly Oriented Spheroids

    Science.gov (United States)

    Faroughi, S. A.; Huber, C.

    2015-12-01

    Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets and suspensions of spherical solid particles in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle, to account for both return flow and Smoluchowski effects, and a correction on the mixture rheology to account for nonlocal interactions between particles. The model is then extended to mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particle sizes. Our model is tested against new and published experimental data over a wide range of particle volume fractions and viscosity ratios between the constituents of dispersions. We find excellent agreement between our model and experiments. We also show two significant applications of our model: (1) We demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers. (2) We provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils. Our model offers a greatly improved agreement with...
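
    For scale, hindered settling is often sketched with the classical Richardson-Zaki correlation v = v_Stokes·(1 − φ)^n (this is a standard reference correlation, not the authors' corrected model); the melt and crystal property values below are illustrative.

```python
import math

# Hindered settling sketch using the Richardson-Zaki correlation,
# v = v_Stokes * (1 - phi)^n, with n ~ 4.65 in the creeping-flow limit.
# Property values are illustrative, not from the paper.

def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal velocity of a lone sphere in creeping flow (Stokes' law)."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

def hindered_velocity(d, rho_p, rho_f, mu, phi, n=4.65):
    """Richardson-Zaki hindered settling velocity at solid volume fraction phi."""
    return stokes_velocity(d, rho_p, rho_f, mu) * (1.0 - phi) ** n

# 100-micron crystal (3300 kg/m^3) in a basaltic melt (2700 kg/m^3, 10 Pa s):
v0 = stokes_velocity(1e-4, 3300.0, 2700.0, 10.0)          # dilute limit
v30 = hindered_velocity(1e-4, 3300.0, 2700.0, 10.0, 0.30)  # 30% crystals
```

    At 30% crystallinity the settling velocity drops to roughly a fifth of the Stokes value, which is the kind of hindrance that lengthens mineral residence times in the application discussed above.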

  11. The Three-Dimensional Velocity Distribution of Wide Gap Taylor-Couette Flow Modelled by CFD

    Directory of Open Access Journals (Sweden)

    David Shina Adebayo

    2016-01-01

    Full Text Available A numerical investigation is conducted for the flow between two concentric cylinders with a wide gap, relevant to bearing chamber applications. This wide gap configuration has received comparatively less attention than narrow gap journal bearing type geometries. The flow in the gap between an inner rotating cylinder and an outer stationary cylinder has been modelled as an incompressible flow using an implicit finite volume RANS scheme with the realisable k-ε model. The model flow is above the critical Taylor number at which axisymmetric counterrotating Taylor vortices are formed. The tangential velocity profiles at all axial locations are different from typical journal bearing applications, where the velocity profiles are quasilinear. The predicted results led to two significant findings of impact in rotating machinery operations. Firstly, the axial variation of the tangential velocity gradient induces an axially varying shear stress, resulting in local bands of enhanced work input to the working fluid. This is likely to cause unwanted heat transfer on the surface in high torque turbomachinery applications. Secondly, the radial inflow at the axial end-wall boundaries is likely to promote the transport of debris to the junction between the end-collar and the rotating cylinder, causing the build-up of fouling in the seal.

  12. Analytical study on the criticality of the stochastic optimal velocity model

    International Nuclear Information System (INIS)

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2006-01-01

    In recent works, we have proposed a stochastic cellular automaton model of traffic flow connecting two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero range process, with an additional parameter. It can also be regarded as an extended version of the optimal velocity model, and it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e. the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverted-λ form, and then identify their end-points from the microscopic behaviour of vehicles. Notably, by using a microscopic parameter which indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model and point out the crucial differences between them
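
    The deterministic optimal velocity dynamics underlying this record, dv_i/dt = a·(V(headway_i) − v_i) with a step OV function, can be sketched on a ring road as below. The stochastic and cellular-automaton aspects of the actual model are omitted, and all parameter values are illustrative.

```python
# Deterministic optimal velocity model on a ring road with a step OV function.
# Parameters (headway threshold h_c, v_max, sensitivity a) are illustrative.

def step_V(h, h_c=2.0, v_max=1.0):
    """Step optimal-velocity function: desired speed jumps at headway h_c."""
    return v_max if h > h_c else 0.0

def simulate(n=10, L=30.0, a=1.0, dt=0.01, steps=5000):
    x = [i * L / n for i in range(n)]   # evenly spaced initial positions
    v = [0.0] * n
    for _ in range(steps):
        h = [(x[(i + 1) % n] - x[i]) % L for i in range(n)]       # headways
        v = [vi + a * (step_V(hi) - vi) * dt for vi, hi in zip(v, h)]
        x = [(xi + vi * dt) % L for xi, vi in zip(x, v)]
    return v

v = simulate()
```

    With the headway (3.0) above the threshold (2.0), every car relaxes to v_max, i.e. the free-flow branch of the fundamental diagram; denser initial spacings put the system on the jammed branch.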

  13. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In this paper, the uncertainty analysis of component reliability models for independent failures is shown, and the present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. The most appropriate uncertainty analysis proved to be Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution, in this paper the lognormal distribution. (author)

  14. Crustal and mantle velocity models of southern Tibet from finite frequency tomography

    Science.gov (United States)

    Liang, Xiaofeng; Shen, Yang; Chen, Yongshun John; Ren, Yong

    2011-02-01

    Using traveltimes of teleseismic body waves recorded by several temporary local seismic arrays, we carried out finite-frequency tomographic inversions to image the three-dimensional velocity structure beneath southern Tibet to examine the roles of the upper mantle in the formation of the Tibetan Plateau. The results reveal a region of relatively high P and S wave velocity anomalies extending from the uppermost mantle to at least 200 km depth beneath the Higher Himalaya. We interpret this high-velocity anomaly as the underthrusting Indian mantle lithosphere. There is a strong low P and S wave velocity anomaly that extends from the lower crust to at least 200 km depth beneath the Yadong-Gulu rift, suggesting that rifting in southern Tibet is probably a process that involves the entire lithosphere. Intermediate-depth earthquakes in southern Tibet are located at the top of an anomalous feature in the mantle with a low Vp, a high Vs, and a low Vp/Vs ratio. One possible explanation for this unusual velocity anomaly is the ongoing granulite-eclogite transformation. Together with the compressional stress from the collision, eclogitization and the associated negative buoyancy force offer a plausible mechanism that causes the subduction of the Indian mantle lithosphere beneath the Higher Himalaya. Our tomographic model and the observation of north-dipping lineations in the upper mantle suggest that the Indian mantle lithosphere has been broken laterally in the direction perpendicular to the convergence beneath the north-south trending rifts and subducted in a progressive, piecewise and subparallel fashion with the current one beneath the Higher Himalaya.

  15. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  16. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  17. Computer Model to Estimate Reliability Engineering for Air Conditioning Systems

    International Nuclear Information System (INIS)

    Afrah Al-Bossly, A.; El-Berry, A.; El-Berry, A.

    2012-01-01

    Reliability engineering is used to predict the performance and optimize the design and maintenance of air conditioning systems. Air conditioning systems are exposed to a number of failures. Failures such as failure to turn on, loss of cooling capacity, reduced output temperatures, loss of cool air supply, and complete loss of air flow can be due to a variety of problems with one or more components of an air conditioner or air conditioning system. Forecasting system failure rates is very important for maintenance. This paper focuses on the reliability of air conditioning systems, using statistical distributions commonly applied in reliability settings: the standard (two-parameter) Weibull and Gamma distributions. After the distribution parameters had been estimated, reliability estimates and predictions were used for evaluation. To evaluate good operating condition in a building, the reliability of the air conditioning system that supplies conditioned air to the company's several departments was assessed. This air conditioning system is divided into two parts, namely the main chilled water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45 degree F (4-7 degree C). The chilled water is distributed throughout the building in a piping system and connected to air conditioning cooling units wherever needed. Data analysis was done with the support of computer-aided reliability software; the Weibull and Gamma distributions indicated that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between the two important families of distribution functions, namely the Weibull and Gamma families, was studied. It was found that the Weibull method performed better for decision making.
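
    The two survival models compared in this record can be evaluated as sketched below; the shape and scale parameters are illustrative stand-ins, not the paper's fitted values.

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull survival function R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def gamma_reliability(t, k, theta, terms=200):
    """Gamma survival function R(t) = 1 - P(k, t/theta), where the
    regularized lower incomplete gamma is summed term by term:
    P(k, x) = x^k e^(-x) * sum_m x^m / Gamma(k + m + 1)."""
    x = t / theta
    term = x ** k * math.exp(-x) / math.gamma(k + 1.0)   # m = 0 term
    total = 0.0
    for m in range(terms):
        total += term
        term *= x / (k + m + 1.0)   # ratio of consecutive series terms
    return 1.0 - total

# Illustrative evaluation at t = 1000 h:
r_weibull = weibull_reliability(1000.0, beta=1.5, eta=5000.0)
r_gamma = gamma_reliability(1000.0, k=2.0, theta=2000.0)
```

    Accumulating the incomplete-gamma series by the term ratio avoids evaluating `math.gamma` at large arguments, which would overflow for high-order terms.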

  18. Possibilities and limitations of applying software reliability growth models to safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2007-01-01

    It is generally known that software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto's Non-Homogeneous Poisson Process (NHPP) model cannot be applied to safety-critical software due to a lack of software failure data. In this paper, by applying two of the most widely known software reliability growth models to sample software failure data, we demonstrate the possibility of using the software reliability growth models to prove the high reliability of safety-critical software. The high sensitivity of a piece of software's reliability to software failure data, as well as a lack of sufficient software failure data, is also identified as a possible limitation when applying the software reliability growth models to safety-critical software
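
    As a concrete reference point, the Goel-Okumoto NHPP named in this record has the mean value function m(t) = a(1 − e^(−bt)); a minimal sketch with illustrative parameter values:

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto mean value function: expected failures found by time t."""
    return a * (1.0 - math.exp(-b * t))

def go_reliability(x, t, a, b):
    """Probability of no failure in the interval (t, t + x]:
    exp(-(m(t + x) - m(t))) for a nonhomogeneous Poisson process."""
    return math.exp(-(go_mean(t + x, a, b) - go_mean(t, a, b)))

# Illustrative: a = 50 expected total faults, detection rate b = 0.01 per hour.
r_after_test = go_reliability(x=10.0, t=1000.0, a=50.0, b=0.01)
```

    Reliability over a fixed mission length x grows with accumulated test time t, which is exactly the growth behavior that sparse safety-critical failure data make hard to demonstrate.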

  19. Low-velocity Impact Response of a Nanocomposite Beam Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Mahdi Heydari Meybodi

    Full Text Available Low-velocity impact of a nanocomposite beam made of glass/epoxy reinforced with multi-wall carbon nanotubes and clay nanoparticles is investigated in this study. Using the modified rule of mixtures (MROM), the mechanical properties of the nanocomposite, including the matrix, nanoparticles or multi-wall carbon nanotubes (MWCNTs), and fiber, are obtained. In order to analyze the low-velocity impact, Euler-Bernoulli beam theory and Hertz's contact law are simultaneously employed to derive the equations of motion. Using Ritz's variational approximation method, a set of nonlinear equations in the time domain is obtained and solved using a fourth-order Runge-Kutta method. The effects of different parameters, such as adding nanoparticles or MWCNTs, stacking sequence, geometrical dimensions (i.e., length, width and height), and initial velocity of the impactor, on maximum contact force, energy absorption, and the dynamic behavior of the nanocomposite beam have been studied comprehensively. In addition, the result of the analytical model is compared with Finite Element Modeling (FEM). The results reveal that the effect of nanoparticles on energy absorption is more considerable at higher impact energies.
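
    A stripped-down sketch of the Hertz-contact ingredient alone: a rigid impactor against a stationary target through F = k·δ^(3/2), integrated with fourth-order Runge-Kutta. The beam flexibility, Ritz discretization, and material model of the paper are omitted, and m, k, v0 are illustrative values.

```python
# Single-DOF Hertzian impact: m * d2(delta)/dt2 = -k * delta^(3/2).
# Returns the maximum contact force over the impact. Values are illustrative.

def hertz_impact(m=0.1, k=1e8, v0=2.0, dt=1e-7, max_steps=200_000):
    delta, v = 0.0, v0          # indentation and impactor velocity
    f_max = 0.0

    def acc(d):
        return -k * max(d, 0.0) ** 1.5 / m   # no tension when separated

    for _ in range(max_steps):
        # classic RK4 step for the state (delta, v)
        k1d, k1v = v, acc(delta)
        k2d, k2v = v + 0.5 * dt * k1v, acc(delta + 0.5 * dt * k1d)
        k3d, k3v = v + 0.5 * dt * k2v, acc(delta + 0.5 * dt * k2d)
        k4d, k4v = v + dt * k3v, acc(delta + dt * k3d)
        delta += dt * (k1d + 2 * k2d + 2 * k3d + k4d) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        f_max = max(f_max, k * max(delta, 0.0) ** 1.5)
        if delta <= 0.0 and v < 0.0:   # impactor has rebounded and separated
            break
    return f_max
```

    Energy conservation gives the closed-form check F_max = k·(5·m·v0²/(4k))^(3/5), so the integrator can be validated against the analytic peak force.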

  20. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an a priori distribution with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the value of the standard deviation in the opposite direction. For perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range

  1. Velocity statistics for interacting edge dislocations in one dimension from Dyson's Coulomb gas model.

    Science.gov (United States)

    Jafarpour, Farshid; Angheluta, Luiza; Goldenfeld, Nigel

    2013-10-01

    The dynamics of edge dislocations with parallel Burgers vectors, moving in the same slip plane, is mapped onto Dyson's model of a two-dimensional Coulomb gas confined in one dimension. We show that the tail distribution of the velocity of dislocations is power law in form, as a consequence of the pair interaction of nearest neighbors in one dimension. In two dimensions, we show the presence of a pairing phase transition in a system of interacting dislocations with parallel Burgers vectors. The scaling exponent of the velocity distribution at effective temperatures well below this pairing transition temperature can be derived from the nearest-neighbor interaction, while near the transition temperature, the distribution deviates from the form predicted by the nearest-neighbor interaction, suggesting the presence of collective effects.

  2. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  3. The Dynamics of M15: Observations of the Velocity Dispersion Profile and Fokker-Planck Models

    Science.gov (United States)

    Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Murphy, B. W.; Seitzer, P. O.; Callanan, P. J.; Rutten, R. G. M.; Charles, P. A.

    1997-05-01

    We report a new measurement of the velocity dispersion profile within 1' (3 pc) of the center of the globular cluster M15 (NGC 7078), using long-slit spectra from the 4.2 m William Herschel Telescope at La Palma Observatory. We obtained spatially resolved spectra for a total of 23 slit positions during two observing runs. During each run, a set of parallel slit positions was used to map out the central region of the cluster; the position angle used during the second run was orthogonal to that used for the first. The spectra are centered in wavelength near the Ca II infrared triplet at 8650 Å, with a spectral range of about 450 Å. We determined radial velocities by cross-correlation techniques for 131 cluster members. A total of 32 stars were observed more than once. Internal and external comparisons indicate a velocity accuracy of about 4 km s^-1. The velocity dispersion profile rises from about σ = 7.2 +/- 1.4 km s^-1 near 1' from the center of the cluster to σ = 13.9 +/- 1.8 km s^-1 at 20". Inside of 20", the dispersion remains approximately constant at about 10.2 +/- 1.4 km s^-1 with no evidence for a sharp rise near the center. This last result stands in contrast with that of Peterson, Seitzer, & Cudworth, who found a central velocity dispersion of 25 +/- 7 km s^-1 based on a line-broadening measurement. Our velocity dispersion profile is in good agreement with those determined in the recent studies of Gebhardt et al. and Dubath & Meylan. We have developed a new set of Fokker-Planck models and have fitted these to the surface brightness and velocity dispersion profiles of M15. We also use the two measured millisecond pulsar accelerations as constraints. The best-fitting model has a mass function slope of x = 0.9 (where 1.35 is the slope of the Salpeter mass function) and a total mass of 4.9 × 10^5 M⊙. This model contains approximately 10^4 neutron stars (3% of the total mass), the majority of which lie within 6" (0.2 pc) of the cluster center. Since the
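
    The cross-correlation step described in this record can be sketched in toy form: shift a synthetic spectrum in the Ca II triplet region by a known amount, recover the lag that maximizes the correlation, and convert it to a velocity. The wavelength grid, line depth, and single-line template are all illustrative.

```python
import math

# Radial velocity by cross-correlation against a template spectrum.
# Synthetic single-line spectra near the Ca II infrared triplet (illustrative).

C_KMS = 299792.458
N, dlam = 2000, 0.1                      # pixels, Angstroms per pixel
lam = [8500.0 + i * dlam for i in range(N)]

def spectrum(center):
    """Unit continuum with one Gaussian absorption line at `center`."""
    return [1.0 - 0.5 * math.exp(-0.5 * ((l - center) / 0.8) ** 2) for l in lam]

template = spectrum(8600.0)
shift_pix = 3                            # true Doppler shift, in pixels
observed = spectrum(8600.0 + shift_pix * dlam)

def best_lag(obs, tmpl, max_lag=20):
    def corr(lag):
        return sum(obs[i] * tmpl[i - lag] for i in range(max_lag, N - max_lag))
    return max(range(-max_lag, max_lag + 1), key=corr)

lag = best_lag(observed, template)
v = lag * dlam / 8600.0 * C_KMS          # km/s, non-relativistic Doppler
```

    In practice the correlation peak is interpolated to sub-pixel precision and the continuum is removed first; this sketch only recovers the integer-pixel lag.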

  4. Dry deposition models for radionuclides dispersed in air: a new approach for deposition velocity evaluation schema

    Science.gov (United States)

    Giardina, M.; Buffa, P.; Cervone, A.; De Rosa, F.; Lombardo, C.; Casamirra, M.

    2017-11-01

    In the framework of a National Research Program funded by the Italian Minister of Economic Development, the Department of Energy, Information Engineering and Mathematical Models (DEIM) of Palermo University and ENEA Research Centre of Bologna, Italy are performing several research activities to study physical models and mathematical approaches aimed at investigating dry deposition mechanisms of radioactive pollutants. On the basis of such studies, a new approach to evaluate the dry deposition velocity for particles is proposed. Comparisons with some literature experimental data show that the proposed dry deposition scheme can capture the main phenomena involved in the dry deposition process successfully.

  5. Spectral analysis of surface waves method to assess shear wave velocity within centrifuge models

    Science.gov (United States)

    Murillo, Carol Andrea; Thorel, Luc; Caicedo, Bernardo

    2009-06-01

    The method of the spectral analysis of surface waves (SASW) is tested out on reduced scale centrifuge models, with a specific device, called the mini Falling Weight, developed for this purpose. Tests are performed on layered materials made of a mixture of sand and clay. The shear wave velocity VS determined within the models using the SASW is compared with the laboratory measurements carried out using the bender element test. The results show that the SASW technique applied to centrifuge testing is a relevant method to characterize VS near the surface.

  6. Multiple Model Adaptive Attitude Control of LEO Satellite with Angular Velocity Constraints

    Science.gov (United States)

    Shahrooei, Abolfazl; Kazemi, Mohammad Hosein

    2018-04-01

    In this paper, the multiple model adaptive control is utilized to improve the transient response of attitude control system for a rigid spacecraft. An adaptive output feedback control law is proposed for attitude control under angular velocity constraints and its almost global asymptotic stability is proved. The multiple model adaptive control approach is employed to counteract large uncertainty in parameter space of the inertia matrix. The nonlinear dynamics of a low earth orbit satellite is simulated and the proposed control algorithm is implemented. The reported results show the effectiveness of the suggested scheme.

  7. Critique of the use of deposition velocity in modeling indoor air quality

    International Nuclear Information System (INIS)

    Nazaroff, W.W.; Weschler, C.J.

    1993-01-01

    Among the potential fates of indoor air pollutants are a variety of physical and chemical interactions with indoor surfaces. In deterministic mathematical models of indoor air quality, these interactions are usually represented as a first-order loss process, with the loss rate coefficient given as the product of the surface-to-volume ratio of the room times a deposition velocity. In this paper, the validity of this representation of surface-loss mechanisms is critically evaluated. From a theoretical perspective, the idea of a deposition velocity is consistent with the following representation of an indoor air environment. Pollutants are well-mixed throughout a core region which is separated from room surfaces by boundary layers. Pollutants migrate through the boundary layers by a combination of diffusion (random motion resulting from collisions with surrounding gas molecules), advection (transport by net motion of the fluid), and, in some cases, other transport mechanisms. The rate of pollutant loss to a surface is governed by a combination of the rate of transport through the boundary layer and the rate of reaction at the surface. The deposition velocity expresses the pollutant flux density (mass or moles deposited per area per time) to the surface divided by the pollutant concentration in the core region. This concept has substantial value to the extent that the flux density is proportional to core concentration. Published results from experimental and modeling studies of fine particles, radon decay products, ozone, and nitrogen oxides are used as illustrations of both the strengths and weaknesses of deposition velocity as a parameter to indicate the rate of indoor air pollutant loss on surfaces. 66 refs., 5 tabs
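
    The first-order representation being critiqued reduces to a one-box mass balance with ventilation and a deposition sink; a minimal sketch of its analytic solution, with illustrative values not taken from the paper:

```python
import math

def indoor_concentration(t, C0, lam_v, sv_ratio, v_d, C_out=0.0):
    """C(t) for the well-mixed box dC/dt = lam_v*(C_out - C) - (A/V)*v_d*C.
    lam_v: air-exchange rate (1/h); sv_ratio: surface-to-volume ratio A/V (1/m);
    v_d: deposition velocity (m/h)."""
    k = lam_v + sv_ratio * v_d           # total first-order loss rate (1/h)
    C_ss = lam_v * C_out / k             # steady-state concentration
    return C_ss + (C0 - C_ss) * math.exp(-k * t)

# Illustrative: 1/h ventilation, A/V = 3 m^-1, v_d = 0.36 m/h, clean outdoor air.
c_1h = indoor_concentration(1.0, C0=100.0, lam_v=1.0, sv_ratio=3.0, v_d=0.36)
```

    The paper's point is that treating the flux as strictly proportional to the core concentration, i.e. a single constant v_d in k above, can fail when boundary-layer transport or surface reaction rates vary.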

  8. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital system instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. System structural models must also be developed in order to predict system reliability based upon the reliability...

  9. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    Energy Technology Data Exchange (ETDEWEB)

    Wardaya, P. D., E-mail: pongga.wardaya@utp.edu.my; Noh, K. A. B. M.; Yusoff, W. I. B. W. [Petroleum Geosciences Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Ridha, S. [Petroleum Engineering Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia)]; Nurhandoko, B. E. B. [Wave Inversion and Subsurface Fluid Imaging Research Laboratory (WISFIR), Dept. of Physics, Institute of Technology Bandung, Bandung, Indonesia and Rock Fluid Imaging Lab, Bandung (Indonesia)

    2014-09-25

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach utilizes the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use a thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium in which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed for building the velocity and density profile, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, an ultrasonic wave was simulated propagating in the thin section image using the finite difference time domain method, based on the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared with velocity modeling from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that Kuster-Toksoz gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found a good agreement between the numerical experiment and the theoretically derived rock physics model for estimating the effective seismic
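
    A toy version of the record's workflow (velocity model → finite-difference propagation → effective velocity) in one dimension, with a simple two-layer velocity model standing in for the thin-section image; the grid, velocities, and detection threshold are illustrative.

```python
# 1-D acoustic FDTD sketch: assign a velocity per cell, propagate an impulse,
# and time its first arrival at the far end to get an effective velocity.

n, dx = 400, 1e-4                       # 4 cm model, 0.1 mm cells
vel = [2000.0 if i < n // 2 else 4000.0 for i in range(n)]  # two-layer model
dt = 0.4 * dx / max(vel)                # CFL-stable time step

u_prev, u = [0.0] * n, [0.0] * n
u[1] = 1.0                              # impulsive source near the left end
arrival = None
for step in range(1, 20000):
    u_next = [0.0] * n
    for i in range(1, n - 1):           # fixed (zero) ends at i = 0, n-1
        r = (vel[i] * dt / dx) ** 2
        u_next[i] = 2 * u[i] - u_prev[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next
    if abs(u[n - 2]) > 1e-3:            # first arrival at the far receiver
        arrival = step * dt
        break

v_eff = (n - 3) * dx / arrival          # effective (travel-time) velocity, m/s
```

    For this two-layer model the travel-time estimate should land near the Wyllie time-average (harmonic mean) velocity of roughly 2670 m/s, modulo numerical dispersion at the detection threshold.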

  10. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    International Nuclear Information System (INIS)

    Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.

    2014-01-01

    This paper discusses a new approach for investigating the seismic wave velocity of rock, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach follows the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium on which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed to build the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, an ultrasonic wave was simulated to propagate in the thin section image using the finite difference time domain method, under the assumption of an acoustic-isotropic medium. Effective velocities were drawn from the recorded signal and compared to velocity modeling from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fraction required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that the Kuster-Toksoz model gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found good agreement between numerical experiment and theoretically derived rock physics model for estimating the effective seismic wave
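The Wyllie time average model compared above has a simple closed form: the slowness of the saturated rock is the porosity-weighted sum of the fluid and matrix slownesses, with no pore-structure term. A minimal sketch follows; the fluid and matrix velocities are illustrative assumptions, not values from the paper:

```python
def wyllie_velocity(phi, v_fluid=1500.0, v_matrix=6500.0):
    """Effective P-wave velocity (m/s) for porosity phi (fraction 0..1).

    Wyllie time average: 1/V = phi/V_fluid + (1 - phi)/V_matrix.
    Velocities here are assumed water and calcite-like matrix values.
    """
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

print(round(wyllie_velocity(0.2)))  # → 3900
```

Because the model carries no aspect-ratio parameter, two rocks with equal porosity but very different pore shapes get the same predicted velocity, which is consistent with the deviation the authors report at high porosity.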

  11. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safetly, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for both the average availability and the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN code system has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  12. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for both the average availability and the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN code system has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
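The Markov-chain availability calculation underlying such analyses can be illustrated with a toy continuous-time chain for a '2-out-of-3' configuration of identical components. This is a hypothetical sketch, not the paper's five-component model: the failure and repair rates are invented, and the state is simply the number of failed components.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1          # assumed failure / repair rates (per hour)
n = 3                         # three identical components
Q = np.zeros((n + 1, n + 1))  # generator; state k = number of failed components
for k in range(n + 1):
    if k < n:
        Q[k, k + 1] = (n - k) * lam   # one more component fails
    if k > 0:
        Q[k, k - 1] = k * mu          # one failed component is repaired
    Q[k, k] = -Q[k].sum()             # diagonal balances each row

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start with all components working
p_t = p0 @ expm(Q * 1000.0)           # state distribution at t = 1000 h
availability = p_t[0] + p_t[1]        # 2-out-of-3 works with at most 1 failed
```

Sensitivities to `lam` and `mu` could then be obtained by differentiating this solution, which is what the adjoint procedure does efficiently for all 186 rates at once instead of one finite difference per parameter.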

  13. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
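The competing-random-variables idea reduces to estimating the probability that the operators' performance time exceeds the phenomenological time. A minimal Monte Carlo sketch is shown below; the normal and lognormal distributions and their parameters are assumptions for illustration, whereas the paper fits its distributions from simulation results and operator interviews:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Assumed illustrative distributions (minutes), NOT the paper's fitted ones:
t_phen = rng.normal(30.0, 5.0, N)              # time until damage phenomena occur
t_perf = rng.lognormal(np.log(15.0), 0.5, N)   # operator manual-action time

# Human error probability: the action completes too late
hep = np.mean(t_perf > t_phen)
```

Replacing the sampling lines with other candidate distributions and re-running gives exactly the kind of distributional sensitivity study the abstract describes.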

  14. Acoustic Velocity and Attenuation in Magnetorheological fluids based on an effective density fluid model

    Directory of Open Access Journals (Sweden)

    Shen Min

    2016-01-01

    Full Text Available Magnetorheological fluids (MRFs) represent a class of smart materials whose rheological properties change in response to a magnetic field, resulting in a drastic change of the acoustic impedance. This paper presents an acoustic propagation model that approximates a fluid-saturated porous medium as a fluid with a bulk modulus and an effective density (EDFM) to study acoustic propagation in MRF materials under a magnetic field. The effective density fluid model is derived from Biot's theory, with some minor changes applied to model both the fluid-like and solid-like states of the MRF material. The attenuation and velocity variation of the MRF are numerically calculated. The calculated results show that, for the MRF material, the attenuation and velocity predicted with this effective density fluid model are in close agreement with previous predictions by Biot's theory. We demonstrate that for MRF acoustic prediction the effective density fluid model is an accurate alternative to the full Biot's theory and is much simpler to implement.

  15. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    Directory of Open Access Journals (Sweden)

    Kaijuan Yuan

    2016-01-01

    Full Text Available Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence highly conflicts, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Both the static sensor reliability and the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion illustrates the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
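Dempster's rule, the combination step this method builds on, is short enough to sketch directly. The example masses below are invented; each mass function maps a hypothesis set (e.g. a fault class) to a belief mass, and conflicting mass is renormalized away:

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                       # compatible evidence reinforces
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:                           # disjoint hypotheses -> conflict mass
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two hypothetical sensor reports over fault hypotheses A and B
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.9, B: 0.1}
m2 = {A: 0.8, B: 0.2}
print(dempster(m1, m2))   # mass concentrates on the shared hypothesis A
```

The paper's contribution sits upstream of this rule: it first discounts each `m_i` by a reliability weight (from evidence distance and belief entropy), then averages, precisely so that a single highly conflicting report cannot dominate the combination.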

  16. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  17. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time dependent probability density functions (colored noise), to the TrigNet time series. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal, in agreement with previous studies.
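The Allan-variance-style statistic mentioned above is straightforward to compute: average the series in clusters of m samples and take half the mean squared difference of consecutive cluster means. A minimal sketch with synthetic white noise (an assumption for illustration, not TrigNet data) follows:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of series y at cluster size m samples."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=4096)

# For white noise the Allan variance falls off as 1/m; flicker noise
# flattens out and random walk grows, which is how the statistic
# distinguishes the colored-noise components discussed above.
print(allan_variance(white, 4), allan_variance(white, 64))
```

Plotting `allan_variance` against `m` on log-log axes and reading the slope is the usual way to identify which power-law noise dominates at each timescale.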

  18. Velocity-mass correlation of the O-type stars: model results

    International Nuclear Information System (INIS)

    Stone, R.C.

    1982-01-01

    This paper presents new model results describing the evolution of massive close binaries from their initial ZAMS to post-supernova stages. Unlike the previous conservative study by Stone [Astrophys. J. 232, 520 (1979) (Paper II)], these results allow explicitly for mass loss from the binary system occurring during the core hydrogen- and helium-burning stages of the primary binary star as well as during the Roche lobe overflow. Because of uncertainties in these rates, model results are given for several reasonable choices for these rates. All of the models consistently predict an increasing relation between the peculiar space velocities and masses for runaway OB stars which agrees well with the observed correlations discussed in Stone [Astron. J. 86, 544 (1981) (Paper III)] and also predict a lower limit at M ≈ 11 M_sun for the masses of runaway stars, in agreement with the observational limit found by A. Blaauw (Bull. Astron. Inst. Neth. 15, 265, 1961), both of which support the binary-supernova scenario described by van den Heuvel and Heise for the origin of runaway stars. These models also predict that the more massive O stars will produce correspondingly more massive compact remnants, and that most binaries experiencing supernova-induced kick velocities of magnitude V_k ≳ 300 km s^-1 will disrupt following the explosions. The best estimate for this velocity as established from pulsar observations is V_k ≈ 150 km s^-1, in which case probably only 15% of these binaries will be disrupted by the supernova explosions, and therefore almost all runaway stars should have either neutron star or black hole companions.

  19. A fifth equation to model the relative velocity the 3-D thermal-hydraulic code THYC

    International Nuclear Information System (INIS)

    Jouhanique, T.; Rascle, P.

    1995-11-01

    E.D.F. has developed, since 1986, a general purpose code named THYC (Thermal HYdraulic Code) designed to study three-dimensional single and two-phase flows in rod and tube bundles (pressurised water reactor cores, steam generators, condensers, heat exchangers). In these studies, the relative velocity was calculated by a drift-flux correlation. However, the relative velocity between vapor and liquid is an important parameter for the accuracy of two-phase flow modelling in a three-dimensional code. The range of application of drift-flux correlations is mainly limited by the characteristics of the flow pattern (counter current flow ...) and by large 3-D effects. The purpose of this paper is to describe a numerical scheme which allows the relative velocity to be computed in the general case. Only the methodology is investigated in this paper, which is not a validation work. The interfacial drag force is an important factor in the stability and accuracy of the results. This force, closely dependent on the flow pattern, is not entirely established yet, so a range of multipliers of its expression is used to compare the numerical results with the VATICAN test section measurements. (authors). 13 refs., 6 figs
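For context, the drift-flux closure that the paper's fifth equation replaces can be sketched in a few lines. This uses the classic Zuber-Findlay form with invented parameter values, not THYC's actual correlation:

```python
def gas_velocity(j, c0=1.13, v_gj=0.24):
    """Zuber-Findlay drift-flux: v_g = C0 * j + v_gj (m/s).

    j is the mixture volumetric flux; C0 and v_gj are assumed
    illustrative values of the distribution parameter and drift velocity.
    """
    return c0 * j + v_gj

def relative_velocity(j_l, j_g, alpha, c0=1.13, v_gj=0.24):
    """Slip v_g - v_l between phases for void fraction alpha."""
    j = j_l + j_g
    v_g = gas_velocity(j, c0, v_gj)
    v_l = (j - alpha * v_g) / (1.0 - alpha)   # liquid velocity from flux balance
    return v_g - v_l
```

The limitation the abstract points out is visible here: the slip depends only on scalar fluxes and fitted constants, so counter-current flows and strong 3-D effects fall outside the correlation's validity, motivating a transported (fifth-equation) relative velocity instead.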

  20. An analytical model for displacement velocity of liquid film on a hot vertical surface

    International Nuclear Information System (INIS)

    Yoshioka, Keisuke; Hasegawa, Shu

    1975-01-01

    The downward progress of the advancing front of a liquid film streaming down a heated vertical surface, as it would occur in emergency core cooling, is much slower than in the case of ordinary streaming down along a heated surface already wetted with the liquid. A two-dimensional heat conduction model is developed for evaluating this velocity of the liquid front, which takes account of the heat removal by ordinary flow boiling mechanism. In the analysis, the maximum heat flux and the calefaction temperature are taken up as parameters in addition to the initial dry heated wall temperature, the flow rate and the velocity of downward progress of the liquid front. The temperature profile is calculated for various combinations of these parameters. Two criteria are proposed for choosing the most suitable combination of the parameters. One is to reject solutions that represent an oscillating wall temperature distribution, and the second criterion requires that the length of the zone of violent boiling immediately following the liquid front should not be longer than about 1 mm, this value being determined from comparisons made between experiment and calculation. Application of the above two criteria resulted in reasonable values obtained for the calefaction temperature and the maximum heat flux, and the velocity of the liquid front derived therefrom showed good agreement with experiment. (auth.)

  1. Simulation of High Velocity Impact on Composite Structures - Model Implementation and Validation

    Science.gov (United States)

    Schueler, Dominik; Toso-Pentecôte, Nathalie; Voggenreiter, Heinz

    2016-08-01

    High velocity impact on composite aircraft structures leads to the formation of flexural waves that can cause severe damage to the structure. Damage and failure can occur within the plies and/or in the resin rich interface layers between adjacent plies. In the present paper a modelling methodology is documented that captures intra- and inter-laminar damage and their interrelations by use of shell element layers representing sub-laminates that are connected with cohesive interface layers to simulate delamination. This approach allows the simulation of large structures while still capturing the governing damage mechanisms and their interactions. The paper describes numerical algorithms for the implementation of a Ladevèze continuum damage model for the ply and methods to derive input parameters for the cohesive zone model. By comparison with experimental results from gas gun impact tests the potential and limitations of the modelling approach are discussed.

  2. Lithospheric structure of the Arabian Shield and Platform from complete regional waveform modelling and surface wave group velocities

    Science.gov (United States)

    Rodgers, Arthur J.; Walter, William R.; Mellors, Robert J.; Al-Amri, Abdullah M. S.; Zhang, Yu-Shen

    1999-09-01

    Regional seismic waveforms reveal significant differences in the structure of the Arabian Shield and the Arabian Platform. We estimate lithospheric velocity structure by modelling regional waveforms recorded by the 1995-1997 Saudi Arabian Temporary Broadband Deployment using a grid search scheme. We employ a new method whereby we narrow the waveform modelling grid search by first fitting the fundamental mode Love and Rayleigh wave group velocities. The group velocities constrain the average crustal thickness and velocities as well as the crustal velocity gradients. Because the group velocity fitting is computationally much faster than the synthetic seismogram calculation, this method allows us to determine good average starting models quickly. Waveform fits of the Pn and Sn body wave arrivals constrain the mantle velocities. The resulting lithospheric structures indicate that the Arabian Platform has an average crustal thickness of 40 km, with relatively low crustal velocities (average crustal P- and S-wave velocities of 6.07 and 3.50 km s^-1, respectively) without a strong velocity gradient. The Moho is shallower (36 km) and crustal velocities are 6 per cent higher (with a velocity increase with depth) for the Arabian Shield. Fast crustal velocities of the Arabian Shield result from a predominantly mafic composition in the lower crust. Lower velocities in the Arabian Platform crust indicate a bulk felsic composition, consistent with orogenesis of this former active margin. P- and S-wave velocities immediately below the Moho are slower in the Arabian Shield than in the Arabian Platform (7.9 and 4.30 km s^-1 versus 8.10 and 4.55 km s^-1, respectively). This indicates that the Poisson's ratios for the uppermost mantle of the Arabian Shield and Platform are 0.29 and 0.27, respectively. The lower uppermost-mantle velocities and higher Poisson's ratio beneath the Arabian Shield probably arise from a partially molten mantle associated with Red Sea spreading and continental
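The Poisson's ratios quoted above follow directly from the Vp/Vs pairs via the standard elastic relation, which can be checked in a few lines:

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (same units).

    nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)), the standard isotropic relation.
    """
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Uppermost-mantle velocity pairs quoted in the abstract:
print(round(poissons_ratio(7.9, 4.30), 2))   # Arabian Shield  → 0.29
print(round(poissons_ratio(8.10, 4.55), 2))  # Arabian Platform → 0.27
```

Running this reproduces the 0.29 and 0.27 values in the abstract, confirming the quoted velocities and ratios are mutually consistent.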

  3. Softverski model estimatora radijalne brzine ciljeva / Software model of a radial velocity estimator

    Directory of Open Access Journals (Sweden)

    Dejan S. Ivković

    2010-04-01

    Full Text Available This paper presents a software model of a new block in the signal-processing section of a software radar receiver, called a radial velocity estimator. The estimation of the Doppler frequency based on the MUSIC algorithm is described in detail, and the measurement procedure is briefly presented. All parameters for clutter measurement and for the detection of simulated and real targets are given in tables, and the results are shown graphically. The analysis of the presented results shows that the designed radial velocity estimator can precisely estimate the Doppler shift in the signal reflected from a moving target, and can therefore precisely determine its velocity. / In all analyses the MUSIC method has given better results than the FFT method. The MUSIC method proved to be better at estimation precision as well as at resolving two adjacent Doppler frequencies. On the basis of the obtained results, the designed estimator of radial velocity can be said to estimate Doppler frequency in the reflected signal from a moving target precisely, and, consequently, the target velocity. It is thus possible to improve the performances of the current radar as far as a precise estimation of velocity of detected moving targets is concerned.

  4. Modeling of liquid ceramic precursor droplets in a high velocity oxy-fuel flame jet

    International Nuclear Information System (INIS)

    Basu, Saptarshi; Cetegen, Baki M.

    2008-01-01

    Production of coatings by high velocity oxy-fuel (HVOF) flame jet processing of liquid precursor droplets can be an attractive alternative to plasma processing. This article concerns modeling of the thermophysical processes in liquid ceramic precursor droplets injected into an HVOF flame jet. The model consists of several sub-models that include aerodynamic droplet break-up, heat and mass transfer within individual droplets exposed to the HVOF environment, and precipitation of ceramic precursors. A parametric study is presented for the initial droplet size, concentration of the dissolved salts, and the external temperature and velocity field of the HVOF jet to explore processing conditions and injection parameters that lead to different precipitate morphologies. It is found that the high velocity of the jet induces shear break-up into droplets several μm in diameter. This leads to better entrainment and rapid heat-up in the HVOF jet. Upon processing, small droplets (<5 μm) are predicted to undergo volumetric precipitation and form solid particles prior to impact at the deposit location. Droplets larger than 5 μm are predicted to form hollow or precursor-containing shells similar to those processed in a DC arc plasma. However, it is found that the lower temperature of the HVOF jet compared to plasma results in slower vaporization and solute mass diffusion time inside the droplet, leading to comparatively thicker shells. These shell-type morphologies may further experience internal pressurization, possibly resulting in shattering and secondary atomization of the trapped liquid. The consequences of these different particle states on the coating microstructure are also discussed in this article.

  5. SIERRA - A 3-D device simulator for reliability modeling

    Science.gov (United States)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
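The ILU-preconditioned CGS strategy described above can be illustrated with SciPy's sparse solvers. This is a generic sparse linear system, not SIERRA's coupled Poisson/continuity equations; it only shows the ILU-as-preconditioner pattern:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small illustrative sparse system (tridiagonal stand-in for a device matrix)
n = 100
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                          # incomplete LU factorization
M = spla.LinearOperator((n, n), ilu.solve)   # wrap it as a preconditioner
x, info = spla.cgs(A, b, M=M)                # info == 0 signals convergence
```

The ILU factors cost far less memory than a full LU decomposition while still clustering the spectrum enough for CGS to converge quickly, which is the memory/convergence compromise the abstract refers to.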

  6. Modeling of seismic hazards for dynamic reliability analysis

    International Nuclear Information System (INIS)

    Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.

    1993-01-01

    This paper investigates the appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra belong to the energy class, while power and response spectra and PGAs belong to the power class. These relationships are also investigated using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in both the time and frequency domains and has less variance than the other indices. In addition, we have proposed a procedure of DRA based on total energy. (author)
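The key property exploited above, that total energy is determinable in either domain, is just Parseval's theorem for a sampled record. A short numerical check on a synthetic record (white noise as a stand-in for a ground acceleration history) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=1024)     # stand-in for a sampled ground acceleration record
dt = 0.01                     # assumed sampling interval (s)

# Total energy in the time domain: integral of a(t)^2 dt
e_time = np.sum(a**2) * dt

# Same quantity from the one-sided discrete spectrum (Parseval's theorem)
A = np.fft.rfft(a)
e_freq = (np.abs(A[0])**2
          + 2.0 * np.sum(np.abs(A[1:-1])**2)
          + np.abs(A[-1])**2) * dt / len(a)
```

Because the two computations agree to machine precision, an energy-based hazard index can be evaluated in whichever domain the reliability analysis already works in.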

  7. Sterile Neutrinos, Dark Matter, and Pulsar Velocities in Models with a Higgs Singlet

    International Nuclear Information System (INIS)

    Kusenko, Alexander

    2006-01-01

    We identify the range of parameters for which the sterile neutrinos can simultaneously explain the cosmological dark matter and the observed velocities of pulsars. To satisfy all cosmological bounds, the relic sterile neutrinos must be produced sufficiently cold. This is possible in a class of models with a gauge-singlet Higgs boson coupled to the neutrinos. Sterile dark matter can be detected by the x-ray telescopes. The presence of the singlet in the Higgs sector can be tested at the CERN Large Hadron Collider

  8. Synchronous Surface Pressure and Velocity Measurements of standard model in hypersonic flow

    Directory of Open Access Journals (Sweden)

    Zhijun Sun

    2018-01-01

    Full Text Available Experiments in the Hypersonic Wind Tunnel of NUAA (NHW) present synchronous measurements of the bow shockwave and surface pressure of a standard blunt rotary model (AGARD HB-2), carried out in order to measure the Mach-5 flow above a blunt body by PIV (Particle Image Velocimetry) as well as the unsteady pressure around the rotary body. Titanium dioxide (Al2O3) nanoparticles were seeded into the flow by a tailor-made container. Through a meticulously designed optical path, the laser was guided into the vacuum experimental section. The transient pressure around the model was obtained using fast-responding pressure-sensitive paint (PSP) sprayed on the model. All the experimental facilities were controlled by a series pulse generator to ensure that the data were time-correlated. The PIV measurements of velocities in front of the detached bow shock agreed very well with the calculated value, with less than 3% difference compared to Pitot-pressure recordings. The velocity gradient contour agreed with the detached bow shock shown on the schlieren images. The PSP results presented good agreement with reference data from previous studies. Our work involving synchronous shock-wave and pressure measurements proved to be encouraging.

  9. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  10. Development of a Duplex Ultrasound Simulator and Preliminary Validation of Velocity Measurements in Carotid Artery Models.

    Science.gov (United States)

    Zierler, R Eugene; Leotta, Daniel F; Sansom, Kurt; Aliseda, Alberto; Anderson, Mark D; Sheehan, Florence H

    2016-07-01

    Duplex ultrasound scanning with B-mode imaging and both color Doppler and Doppler spectral waveforms is relied upon for diagnosis of vascular pathology and selection of patients for further evaluation and treatment. In most duplex ultrasound applications, classification of disease severity is based primarily on alterations in blood flow velocities, particularly the peak systolic velocity (PSV) obtained from Doppler spectral waveforms. We developed a duplex ultrasound simulator for training and assessment of scanning skills. Duplex ultrasound cases were prepared from 2-dimensional (2D) images of normal and stenotic carotid arteries by reconstructing the common carotid, internal carotid, and external carotid arteries in 3 dimensions and computationally simulating blood flow velocity fields within the lumen. The simulator displays a 2D B-mode image corresponding to transducer position on a mannequin, overlaid by color coding of velocity data. A spectral waveform is generated according to examiner-defined settings (depth and size of the Doppler sample volume, beam steering, Doppler beam angle, and pulse repetition frequency or scale). The accuracy of the simulator was assessed by comparing the PSV measured from the spectral waveforms with the true PSV which was derived from the computational flow model based on the size and location of the sample volume within the artery. Three expert examiners made a total of 36 carotid artery PSV measurements based on the simulated cases. The PSV measured by the examiners deviated from true PSV by 8% ± 5% (N = 36). The deviation in PSV did not differ significantly between artery segments, normal and stenotic arteries, or examiners. To our knowledge, this is the first simulation of duplex ultrasound that can create and display real-time color Doppler images and Doppler spectral waveforms. 
The results demonstrate that an examiner can measure PSV from the spectral waveforms using the settings on the simulator with a mean absolute error

  11. Remote Sensing Data in Wind Velocity Field Modelling: a Case Study from the Sudetes (SW Poland)

    Science.gov (United States)

    Jancewicz, Kacper

    2014-06-01

    The phenomenon of wind-field deformation above complex (mountainous) terrain is a popular subject of research related to numerical modelling using GIS techniques. This type of modelling requires, as input data, information on terrain roughness and a digital terrain/elevation model. This information may be provided by remote sensing data. Consequently, its accuracy and spatial resolution may affect the results of modelling. This paper represents an attempt to conduct wind-field modelling in the area of the Śnieżnik Massif (Eastern Sudetes). The modelling process was conducted in WindStation 2.0.10 software (using the computational fluid dynamics solver Canyon). Two different elevation models were used: the Global Land Survey Digital Elevation Model (GLS DEM) and Digital Terrain Elevation Data (DTED) Level 2. The terrain roughness raster was generated on the basis of Corine Land Cover 2006 (CLC 2006) data. The output data were post-processed in ArcInfo 9.3.1 software to achieve a high-quality cartographic presentation. Experimental modelling was conducted for situations from 26 November 2011, 25 May 2012, and 26 May 2012, based on a limited number of field measurements and using parameters of the atmosphere boundary layer derived from the aerological surveys provided by the closest meteorological stations. The model was run at 100-m and 250-m spatial resolutions. In order to verify the model's performance, leave-one-out cross-validation was used. The calculated indices allowed for a comparison with results of former studies pertaining to WindStation's performance. The experiment demonstrated very subtle differences between results using DTED or GLS DEM elevation data. Additionally, CLC 2006 roughness data provided more noticeable improvements in the model's performance, but only in the resolution corresponding to the original roughness data. 
The best input data configuration resulted in the following mean values of error measure: root mean squared error of velocity
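The leave-one-out cross-validation used to verify the wind model above can be illustrated with a minimal sketch. The inverse-distance-weighting predictor and the station values below are hypothetical stand-ins for WindStation's solver and the field measurements, chosen only to show the validation loop itself:

```python
def idw_predict(stations, target, power=2.0):
    """Inverse-distance-weighted estimate of wind speed at `target`
    from (x, y, speed) station tuples."""
    num = den = 0.0
    for x, y, s in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return s  # exactly at a station: return its value
        w = 1.0 / d2 ** (power / 2.0)
        num += w * s
        den += w
    return num / den

def loocv_rmse(stations):
    """Leave-one-out cross-validation: predict each station from the others
    and accumulate the root mean squared error."""
    errs = []
    for i, (x, y, s) in enumerate(stations):
        rest = stations[:i] + stations[i + 1:]
        errs.append(idw_predict(rest, (x, y)) - s)
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

# Hypothetical field measurements (x km, y km, wind speed m/s)
obs = [(0, 0, 3.2), (1, 0, 3.8), (0, 1, 2.9), (1, 1, 4.1), (2, 1, 4.6)]
rmse = loocv_rmse(obs)
```

The same loop applies unchanged to any pointwise model; only `idw_predict` would be replaced by a call to the actual solver.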

  12. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

    As digital systems are gradually introduced to nuclear power plants (NPPs), the need to quantitatively analyze the reliability of digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software used in safety-critical digital systems), Bayesian Belief Networks (BBNs) seem to be most widely used. The use of BBNs in reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, we can directly estimate the reliability of the software using software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally known that software reliability growth models cannot be applied to safety-critical software due to the small number of failure data expected from the testing of such software, we try to find possibilities, and the corresponding limitations, of applying software reliability growth models to safety-critical software.
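As a rough illustration of the direct-estimation approach mentioned above, the sketch below fits Goel-Okumoto's NHPP mean value function m(t) = a(1 - e^(-bt)) to a cumulative failure history. The failure data and the grid values are invented for the example, and the crude least-squares grid search stands in for the maximum-likelihood estimation normally used:

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative number of failures by time t under the G-O NHPP:
    a = total expected failures, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def fit_goel_okumoto(times, counts, a_grid, b_grid):
    """Least-squares grid search for (a, b); crude but dependency-free."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((goel_okumoto_mean(t, a, b) - n) ** 2
                      for t, n in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical cumulative failure counts from testing (week, failures found)
times  = [1, 2, 3, 4, 5, 6, 7, 8]
counts = [12, 21, 28, 33, 37, 40, 42, 43]

a_hat, b_hat = fit_goel_okumoto(times, counts,
                                a_grid=[40 + i for i in range(21)],
                                b_grid=[0.05 * i for i in range(1, 21)])
# Expected residual failures after week 8 -- a rough reliability indicator
residual = a_hat - goel_okumoto_mean(8, a_hat, b_hat)
```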

  13. Assessing the impact of uncertainty on flood risk estimates with reliability analysis using 1-D and 2-D hydraulic models

    Directory of Open Access Journals (Sweden)

    L. Altarejos-García

    2012-07-01

    Full Text Available This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for estimating the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define the flood severity, a concept used for decision-making in the context of flood risk assessment. The method proposed is particularly useful when the degree of complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as water depth and velocity is demonstrated in the case of a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing for probability estimates of flood severity. An application of the method to the same river reach using a 2-D Shallow Water Equations (SWE) model is then performed. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and the uncertainty in the extent of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it is the preferred approach.
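For uncorrelated, symmetric inputs, Rosenblueth's PEM amounts to evaluating the performance function at the 2^n sign combinations of (mean ± standard deviation) with equal weights, then forming the first two moments from those values. The sketch below shows this; the normal-depth function is a toy stand-in for the paper's hydraulic models, and all parameter values are invented:

```python
from itertools import product

def pem_two_point(g, means, stds):
    """Rosenblueth's two-point estimate for uncorrelated, symmetric inputs:
    evaluate g at the 2**n combinations of mean +/- std, equal weights,
    and return (mean, variance) of g."""
    vals = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        vals.append(g(*x))
    w = 1.0 / len(vals)
    mean = sum(w * v for v in vals)
    var = sum(w * v * v for v in vals) - mean ** 2
    return mean, var

# Toy stand-in for a hydraulic solver: normal-flow depth from Manning's
# equation for a wide rectangular channel, h = (n*q / sqrt(S))**(3/5)
def depth(n_manning, q_unit, slope=0.001):
    return (n_manning * q_unit / slope ** 0.5) ** 0.6

# Uncertain inputs: Manning roughness n and unit discharge q (illustrative)
mean_h, var_h = pem_two_point(depth, means=[0.035, 5.0], stds=[0.005, 1.0])
```

Only 2^n model runs are needed (here 4), which is why PEM remains feasible where Monte Carlo sampling of a 2-D SWE model is not.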

  14. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, on the other hand, the bivariate blending method was lighter than the single-step method.

  15. Sheep as a large animal ear model: Middle-ear ossicular velocities and intracochlear sound pressure.

    Science.gov (United States)

    Péus, Dominik; Dobrev, Ivo; Prochazka, Lukas; Thoele, Konrad; Dalbert, Adrian; Boss, Andreas; Newcomb, Nicolas; Probst, Rudolf; Röösli, Christof; Sim, Jae Hoon; Huber, Alexander; Pfiffner, Flurin

    2017-08-01

    Animals are frequently used for the development and testing of new hearing devices. Dimensions of the middle ear and cochlea differ significantly between humans and commonly used animals, such as rodents or cats. The sheep cochlea is anatomically more like the human cochlea in size and number of turns. This study investigated the middle-ear ossicular velocities and intracochlear sound pressure (ICSP) in sheep temporal bones, with the aim of characterizing the sheep as an experimental model for implantable hearing devices. Measurements were made on fresh sheep temporal bones. Velocity responses of the middle ear ossicles at the umbo, long process of the incus and stapes footplate were measured in the frequency range of 0.25-8 kHz using a laser Doppler vibrometer system. Results were normalized by the corresponding sound pressure level in the external ear canal (P_EC). Sequentially, ICSPs at the scala vestibuli and tympani were then recorded with custom MEMS-based hydrophones, while presenting identical acoustic stimuli. The sheep middle ear transmitted most effectively around 4.8 kHz, with a maximum stapes velocity of 0.2 mm/s/Pa. At the same frequency, the ICSP measurements in the scala vestibuli and tympani showed the maximum gain relative to P_EC (24 dB and 5 dB, respectively). The greatest pressure difference across the cochlear partition occurred between 4 and 6 kHz. A comparison between the results of this study and human reference data showed middle-ear resonance and best cochlear sensitivity at higher frequencies in sheep. In summary, sheep can be an appropriate large animal model for research and development of implantable hearing devices. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  17. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.

  18. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has an essentially time-dependent nature. The details of thinking and decision-making processes are important for detailed analysis of human reliability. They have, however, not been well considered by the conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented.

  19. Dynamic reliability modeling of three-state networks

    OpenAIRE

    Ashrafi, S.; Asadi, M.

    2014-01-01

    This paper is an investigation into the reliability and stochastic properties of three-state networks. We consider a single-step network consisting of n links and we assume that the links are subject to failure. We assume that the network can be in three states, up (K = 2), partial performance (K = 1), and down (K = 0). Using the concept of the two-dimensional signature, we study the residual lifetimes of the networks under different scenarios on the states and the number of...

  20. Modeling the Impacts of Suspended Sediment Concentration and Current Velocity on Submersed Vegetation in an Illinois River Pool, USA

    National Research Council Canada - National Science Library

    Best, Elly

    2004-01-01

    This technical note uses a modeling approach to examine the impacts of suspended sediment concentrations and current velocity on the persistence of submersed macrophytes in a shallow aquatic system...

  1. A new car-following model for autonomous vehicles flow with mean expected velocity field

    Science.gov (United States)

    Wen-Xing, Zhu; Li-Dong, Zhang

    2018-02-01

    With the development of modern technology, autonomous vehicles may be able to connect with each other and share the information collected by each vehicle. An improved forward-looking car-following model with a mean expected velocity field is proposed to describe the flow behavior of autonomous vehicles. The new model has three key parameters: adjustable sensitivity, strength factor, and mean expected velocity field size. Two lemmas and one theorem were proven as criteria for judging the stability of homogeneous autonomous vehicle flow. Theoretical results show that larger parameter values yield larger stability regions. A series of numerical simulations was carried out to check the stability and fundamental diagram of autonomous flow. From the numerical simulation results, the profiles, hysteresis loops, and density waves of the autonomous vehicle flow were exhibited. The results show that increasing the sensitivity, strength factor, or field size suppresses traffic jams effectively, in accordance with the theoretical results. Moreover, the fundamental diagrams corresponding to the three parameters were obtained. They demonstrate that the parameters play almost the same role in traffic flux: below the critical density, the larger the parameter, the greater the flux; above the critical density, the tendency is the opposite. In general, the three parameters have a great influence on the stability and jam state of autonomous vehicle flow.
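A minimal numerical sketch of this kind of car-following dynamics is given below. It uses the standard Bando optimal-velocity function plus a relaxation toward the mean speed of the k vehicles ahead as a stand-in for the paper's mean expected velocity field; the parameter values are illustrative, not those of the cited model:

```python
import math

def ov(dx, vmax=2.0, hc=2.0):
    """Bando optimal-velocity function of the headway dx."""
    return vmax * (math.tanh(dx - hc) + math.tanh(hc)) / 2.0

def simulate(n=20, road=50.0, a=2.0, lam=0.3, k=3, dt=0.05, steps=4000):
    """Ring road: each vehicle relaxes to the OV of its headway plus a
    coupling (strength lam) to the mean velocity of the k vehicles ahead."""
    # Equally spaced vehicles, with a small position perturbation on vehicle 0
    x = [road * i / n + (0.1 if i == 0 else 0.0) for i in range(n)]
    v = [ov(road / n)] * n
    for _ in range(steps):
        acc = []
        for i in range(n):
            dx = (x[(i + 1) % n] - x[i]) % road
            vbar = sum(v[(i + j) % n] for j in range(1, k + 1)) / k
            acc.append(a * (ov(dx) - v[i]) + lam * (vbar - v[i]))
        v = [vi + ai * dt for vi, ai in zip(v, acc)]
        x = [(xi + vi * dt) % road for xi, vi in zip(x, v)]
    return v

v_final = simulate()
spread = max(v_final) - min(v_final)  # small spread = perturbation suppressed
```

With these (stable-regime) parameters, the initial perturbation decays and the flow returns to uniform velocity; lowering the sensitivity `a` below the stability threshold would instead let a jam wave grow.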

  2. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    Science.gov (United States)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RF inversion is a complex problem because the amplitude and the arrival time of the different phases depend in a non-linear way on the depth of the interfaces and the characteristics of the velocity structure. The solution we envisage for the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. 
Our first focus of application is the Central Alps, where a 20-year long dataset of

  3. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform, and taking the current international reliability research model, human cognition reliability (HCR), as a reference, part of the model is modified according to the actual status of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators is obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators were carried out using the two-parameter Weibull distribution research model. Compared with results from elsewhere in the world, the same results were achieved. The research should be beneficial to the operational safety of nuclear power plants.
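A two-parameter Weibull model of operator response time, of the general kind referred to above, can be sketched in a few lines. The characteristic time and shape parameter below are invented for illustration and are not the values fitted in the study:

```python
import math

def weibull_nonresponse(t, eta, beta):
    """P(operator has not yet responded by time t) under a two-parameter
    Weibull response-time model: R(t) = exp(-(t/eta)**beta),
    eta = characteristic (63.2%) response time, beta = shape."""
    return math.exp(-((t / eta) ** beta))

def weibull_response_prob(t, eta, beta):
    """P(operator has responded by time t)."""
    return 1.0 - weibull_nonresponse(t, eta, beta)

# Hypothetical parameters: characteristic response time 120 s, shape 1.8
p_180 = weibull_response_prob(180.0, eta=120.0, beta=1.8)
```

A shape parameter beta > 1 encodes that the hazard of responding grows with elapsed time, which is the usual qualitative behavior for diagnosis-and-action tasks.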

  4. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shocks modeling, burn-in and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in the unified stochastic framework and presents numerous practical examples that illustrate recent theoretical findings of the authors.  The populations of manufactured items in industry are usually heterogeneous. However, the conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of elimination of ‘weak’ items from heterogeneous populations. The real life objects are operating in a changing environment. One of the ways to model an impact of this environment is via the external shocks occurring in accordance with some stocha...

  5. Research on cognitive reliability model for main control room considering human factors in nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Jianjun; Zhang Li; Wang Yiqun; Zhang Kun; Peng Yuyuan; Zhou Cheng

    2012-01-01

    Addressing the shortcomings of traditional cognitive factors and cognitive models, this paper presents a Bayesian network cognitive reliability model, taking the main control room as the reference background and human factors as the key points. The model mainly analyzes how cognitive reliability is affected by human factors, and for each cognitive node and its influencing factors, a series of methods and formulas to compute the node's cognitive reliability is proposed. The model and corresponding methods can be applied to the evaluation of the cognitive process of nuclear power plant operators and have a certain significance for the prevention of safety accidents in nuclear power plants. (authors)

  6. Lower Mantle S-wave Velocity Model under the Western United States

    Science.gov (United States)

    Nelson, P.; Grand, S. P.

    2016-12-01

    Deep mantle plumes created by thermal instabilities at the core-mantle boundary have been an explanation for intraplate volcanism since the 1970s. Recently, broad slow-velocity conduits in the lower mantle underneath some hotspots have been observed (French and Romanowicz, 2015); however, the direct detection of a classical thin mantle plume using seismic tomography has remained elusive. Herein, we present a seismic tomography technique designed to image a deep mantle plume under the Yellowstone Hotspot in the western United States, utilizing SKS and SKKS waves in conjunction with finite-frequency tomography. Synthetic resolution tests show the technique can resolve a 235 km diameter lower-mantle plume with a 1.5% Gaussian velocity perturbation even if a realistic amount of random noise is added to the data. The Yellowstone Hotspot presents a unique opportunity to image a thin plume because it is the only hotspot with a purported deep origin that has a large enough aperture and density of seismometers to accurately sample the lower mantle at the length scales required to image a plume. Previous regional tomography studies, largely based on S-wave data, have imaged a cylindrically shaped slow anomaly extending down to 900 km under the hotspot, but they could not resolve it any deeper (Schmandt et al., 2010; Obrebski et al., 2010). To test whether the anomaly extends deeper, we measured and inverted over 40,000 SKS and SKKS travel times in two frequency bands recorded at 2400+ stations deployed during 2006-2012. Our preliminary model shows narrow slow-velocity anomalies in the lower mantle with no fast anomalies. The slow anomalies are offset from the Yellowstone hotspot and may be diapirs rising from the base of the mantle.

  7. Temperature Field-Wind Velocity Field Optimum Control of Greenhouse Environment Based on CFD Model

    Directory of Open Access Journals (Sweden)

    Yongbo Li

    2014-01-01

    Full Text Available Computational fluid dynamics (CFD) is applied as the environmental control model, which can represent the whole greenhouse space. Basic environmental factors are set as the control objects, the field information is obtained via a division into layers by height, and the numerical characteristics of each layer are used to describe the field information. Under natural ventilation conditions, real-time requirements, energy consumption, and distribution difference are selected as index functions. The adaptive simulated annealing optimization algorithm is used to obtain optimal control outputs. A comparison with fully open ventilation shows that the overall index can be reduced by 44.21%; a certain mutual exclusiveness was also found between the temperature and velocity fields in the optimization process. All the results indicate that the application of the CFD model has great advantages in improving the control accuracy of the greenhouse.

  8. Accurate calibration of the velocity-dependent one-scale model for domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Leite, A.M.M., E-mail: up080322016@alunos.fc.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Ecole Polytechnique, 91128 Palaiseau Cedex (France); Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Shellard, E.P.S., E-mail: E.P.S.Shellard@damtp.cam.ac.uk [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2013-01-08

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048^3, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.
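The velocity-dependent one-scale model for walls is a pair of ODEs, dL/dt = (1 + 3v^2)HL + c_w v and dv/dt = (1 - v^2)(k_w/L - 3Hv), whose attractor in a power-law era a ∝ t^m is the scale-invariant solution L ∝ t. The sketch below integrates these equations with the calibrated parameters quoted above; the Euler scheme, step counts, and initial conditions are arbitrary illustrative choices:

```python
import math

def evolve_vos(cw=0.34, kw=0.98, m=0.5, t0=1.0, t1=1.0e6, n=200000):
    """Euler-integrate the wall VOS equations in an era a ~ t**m (H = m/t),
    using log-spaced time steps to cover many decades of expansion."""
    L, v = t0, 0.1                      # arbitrary initial conditions
    lt0, lt1 = math.log(t0), math.log(t1)
    t = t0
    for i in range(n):
        t_next = math.exp(lt0 + (lt1 - lt0) * (i + 1) / n)
        dt = t_next - t
        H = m / t
        dL = ((1.0 + 3.0 * v * v) * H * L + cw * v) * dt
        dv = (1.0 - v * v) * (kw / L - 3.0 * H * v) * dt
        L, v, t = L + dL, v + dv, t_next
    return L / t, v                     # scaling ratio eps = L/t and velocity

# Radiation era (m = 1/2): the network should settle to L/t = const, v = const
eps, v_scal = evolve_vos()
```

Setting dL/dt = L/t and dv/dt = 0 in the equations gives the attractor algebraically (here eps ≈ 1.31, v ≈ 0.50), which the numerical integration should reproduce.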

  9. Accurate calibration of the velocity-dependent one-scale model for domain walls

    International Nuclear Information System (INIS)

    Leite, A.M.M.; Martins, C.J.A.P.; Shellard, E.P.S.

    2013-01-01

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048^3, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.

  10. Two-dimensional velocity models for paths from Pahute Mesa and Yucca Flat to Yucca Mountain

    International Nuclear Information System (INIS)

    Walck, M.C.; Phillips, J.S.

    1990-11-01

    Vertical acceleration recordings of 21 underground nuclear explosions recorded at stations at Yucca Mountain provide the data for the development of three two-dimensional crustal velocity profiles for portions of the Nevada Test Site. Paths from Area 19, Area 20 (both Pahute Mesa), and Yucca Flat to Yucca Mountain have been modeled using asymptotic ray theory travel time and synthetic seismogram techniques. Significant travel time differences exist between the Yucca Flat and Pahute Mesa source areas; relative amplitude patterns at Yucca Mountain also shift with changing source azimuth. The three models, UNEPM1, UNEPM2, and UNEYF1, successfully predict the travel time and amplitude data for all three paths. 24 refs., 34 figs., 8 tabs

  11. Validating Material Modelling of OFHC Copper Using Dynamic Tensile Extrusion (DTE) Test at Different Impact Velocity

    Science.gov (United States)

    Bonora, Nicola; Testa, Gabriel; Ruggiero, Andrew; Iannitti, Gianluca; Hörnqvist, Magnus; Mortazavi, Nooshin

    2015-06-01

    In the Dynamic Tensile Extrusion (DTE) test, the material is subjected to very large strain, high strain rate, and elevated temperature. Numerical simulation, validated by comparison with measurements obtained on soft-recovered extruded fragments, can be used to probe the material response under such extreme conditions and to assess constitutive models. In this work, the results of a parametric investigation on the simulation of the DTE test of annealed OFHC copper, at impact velocities ranging from 350 up to 420 m/s, using phenomenological and physically based models (Johnson-Cook, Zerilli-Armstrong and Rusinek-Klepaczko), are presented. Preliminary simulation of the microstructure evolution was performed using the crystal plasticity package CPFEM, providing, as input, the strain history obtained with FEM at selected locations along the extruded fragments. Results were compared with EBSD investigation.

  12. CFD model of thermal and velocity conditions in a particular indoor environment

    Energy Technology Data Exchange (ETDEWEB)

    Mora Perez, Miguel; Lopez Patino, Gonzalo; Lopez Jimenez, P. Amparo [Hydraulic and Environmental Engineering Department, Universitat Politecnica de Valencia (Spain); Guillen Guillamon, Ignacio [Applied Physics Department, Universitat Politecnica de Valencia (Spain)

    2013-07-01

    The demand for maintaining high indoor environmental quality (IEQ) with minimum energy consumption is rapidly increasing. In recent years, several studies have investigated the impact of indoor environment factors on human comfort, health, and energy efficiency. The design of the thermal environment in any sort of room, especially offices, therefore has huge economic consequences. In this paper, the air temperature in a particular multi-task room environment is modeled in order to represent the velocities and temperatures inside the room using Computational Fluid Dynamics (CFD) techniques. This model will help designers analyze the thermal comfort regions inside the studied air volume and visualize the temperatures throughout the room, determining the effect of fresh external incoming air on the internal air temperature.

  13. Model case IRS-RWE for the determination of reliability data in practical operation

    Energy Technology Data Exchange (ETDEWEB)

    Hoemke, P; Krause, H

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The first part of the paper deals with the accuracy requirements for the input data of such analyses, and the second part with the prototype collection of reliability data, 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results.

  14. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...

  15. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions of reliability optimization using redundancy allocation, one structure that has attracted the attention of many researchers is the series-parallel structure. Models previously presented for reliability optimization of series-parallel systems carry a restricting assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • Previous models carry a restricting assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems' components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
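
    The core reliability computation behind such models is simple to state: each subsystem works if at least one of its (possibly non-homogeneous) redundant components works, and the system works if every subsystem works. A minimal sketch with made-up component reliabilities (the GA search itself is omitted):

```python
from math import prod

def subsystem_reliability(rs):
    """Parallel subsystem fails only if every component fails;
    components may have different reliabilities (non-homogeneous)."""
    return 1.0 - prod(1.0 - r for r in rs)

def system_reliability(subsystems):
    """Series arrangement: every subsystem must work."""
    return prod(subsystem_reliability(s) for s in subsystems)

# Hypothetical design: three subsystems mixing component types.
design = [
    [0.90, 0.85],        # two different component types in parallel
    [0.95],              # single component
    [0.80, 0.80, 0.70],  # two of one type plus one of another
]
r_sys = system_reliability(design)  # 0.985 * 0.95 * 0.988 ≈ 0.9245
```

    A GA for the RAP would search over such `design` lists, trading this reliability off against component costs.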

  16. Influence of the pore fluid on the phase velocity in bovine trabecular bone in vitro: prediction of the Biot model

    Science.gov (United States)

    Lee, Kang Il

    2013-01-01

    The present study aims to investigate the influence of the pore fluid on the phase velocity in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 20 marrow-filled and water-filled bovine femoral trabecular bone samples. The mean phase velocities at frequencies between 0.6 and 1.2 MHz exhibited significant negative dispersions for both the marrow-filled and the water-filled samples. The magnitudes of the dispersions showed no significant differences between the marrow-filled and the water-filled samples. In contrast, replacement of marrow by water led to a mean increase in the phase velocity of 27 m/s at frequencies from 0.6 to 1.2 MHz. The theoretical phase velocities of the fast wave predicted by using the Biot model for elastic wave propagation in fluid-saturated porous media showed good agreement with the measurements.
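
    The measurement principle — extracting a frequency-dependent phase velocity from the unwrapped phase difference between two received signals — can be illustrated on synthetic data. The sketch below uses an ideal, non-dispersive time-shifted pulse (so the recovered velocity is flat at 1500 m/s); the sampling rate, pulse shape, and sample thickness are illustrative assumptions, not the bone-measurement protocol of the study.

```python
import numpy as np

fs = 50e6                     # sampling rate [Hz]
n = 4096
t = np.arange(n) / fs
d = 0.02                      # propagation distance through the sample [m]
v_true = 1500.0               # assumed (non-dispersive) phase velocity [m/s]
dt = d / v_true               # transit-time difference between the two signals

def tone_burst(t0):
    """Gaussian-windowed 1 MHz tone burst centred at time t0."""
    return np.exp(-((t - t0) ** 2) / (2 * (0.5e-6) ** 2)) * np.cos(2e6 * np.pi * (t - t0))

ref = tone_burst(10e-6)       # reference received signal
sig = tone_burst(10e-6 + dt)  # signal delayed by propagation through the sample

# Phase spectroscopy: unwrapped phase difference -> phase velocity v(f).
freqs = np.fft.rfftfreq(n, 1 / fs)
dphi = np.unwrap(np.angle(np.fft.rfft(ref)) - np.angle(np.fft.rfft(sig)))
band = (freqs > 0.6e6) & (freqs < 1.2e6)   # the study's 0.6-1.2 MHz band
v = 2 * np.pi * freqs[band] * d / dphi[band]
```

    With real trabecular-bone data, v(f) would decrease across the band (negative dispersion) rather than stay flat.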

  17. Governing equations for a seriated continuum: an unequal velocity model for two-phase flow

    International Nuclear Information System (INIS)

    Solbrig, C.W.; Hughes, E.D.

    1975-05-01

    The description of the flow of two-phase fluids is important in many engineering devices. Unexpected transient conditions which occur in these devices cannot, in general, be treated with single-component momentum equations. Instead, the use of momentum equations for each phase is necessary in order to describe the varied transient situations which can occur. These transient conditions can include phases moving in opposite directions, such as steam moving upward and liquid moving downward, as well as phases moving in the same direction. Continuity and momentum equations for each phase and an overall energy equation for the mixture are derived. Terms describing interphase forces are described. A seriated (series of) continuum is distinguished from an interpenetrating medium by the representation of interphase friction with velocity differences in the former and velocity gradients in the latter. The seriated continuum also considers imbedded stationary solid surfaces such as occur in nuclear reactor cores. These stationary surfaces are taken into account with source terms. Sufficient constitutive equations are presented to form a complete set of equations. Methods are presented to show that all the constitutive coefficients are determinable from microscopic models and well known experimental results. Comparison of the present derivation with previous work is also given. The equations derived here may also be employed in certain multiphase, multicomponent flow applications. (U.S.)

  18. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for the wind power converters with more cost-effective solution, the target of this paper is to establish a new reliability-cost model which can connect the relationship between reliability performances and corresponding semiconductor cost...... temperature mean value Tm and fluctuation amplitude ΔTj of power devices, are presented. With the proposed reliability-cost model, it is possible to enable future reliability-oriented design of the power switching devices for wind power converters, and also an evaluation benchmark for different wind power...... for power switching devices. First the conduction loss, switching loss as well as thermal impedance models of power switching devices (IGBT module) are related to the semiconductor chip number information respectively. Afterwards simplified analytical solutions, which can directly extract the junction...

  19. Investigation of the velocity field in a full-scale model of a cerebral aneurysm

    International Nuclear Information System (INIS)

    Roloff, Christoph; Bordás, Róbert; Nickl, Rosa; Mátrai, Zsolt; Szaszák, Norbert; Szilárd, Szabó; Thévenin, Dominique

    2013-01-01

    Highlights: • We investigate flow fields inside a phantom model of a full-scale cerebral aneurysm. • An artificial blood fluid is used matching viscosity and density of real blood. • We present Particle Tracking results of fluorescent tracer particles. • Instantaneous model inlet velocity profiles and volume flow rates are derived. • Trajectory fields at three of six measurement planes are presented. -- Abstract: Due to improved and now widely used imaging methods in clinical surgery practice, detection of unruptured cerebral aneurysms becomes more and more frequent. For the selection and development of a low-risk and highly effective treatment option, the understanding of the involved hemodynamic mechanisms is of great importance. Computational Fluid Dynamics (CFD), in vivo angiographic imaging and in situ experimental investigations of flow behaviour are powerful tools which could deliver the needed information. Hence, the aim of this contribution is to experimentally characterise the flow in a full-scale phantom model of a realistic cerebral aneurysm. The acquired experimental data will then be used for a quantitative validation of companion numerical simulations. The experimental methodology relies on the large-field velocimetry technique PTV (Particle Tracking Velocimetry), processing high speed images of fluorescent tracer particles added to the flow of a blood-mimicking fluid. First, time-resolved planar PTV images were recorded at 4500 fps and processed by a complex, in-house algorithm. The resulting trajectories are used to identify Lagrangian flow structures, vortices and recirculation zones in two-dimensional measurement slices within the aneurysm sac. The instantaneous inlet velocity distribution, needed as boundary condition for the numerical simulations, has been measured with the same technique but using a higher frame rate of 20,000 fps in order to avoid ambiguous particle assignment. From this velocity distribution, the time

  20. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  1. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, and relevant design...... variables are identified and an optimal system reliability formulation is presented. An illustrative example is given....

  2. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  3. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  4. Shallow Crustal Structure in the Northern Salton Trough, California: Insights from a Detailed 3-D Velocity Model

    Science.gov (United States)

    Ajala, R.; Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2017-12-01

    The Coachella Valley is the northern extent of the Gulf of California-Salton Trough. It contains the southernmost segment of the San Andreas Fault (SAF) for which a magnitude 7.8 earthquake rupture was modeled to help produce earthquake planning scenarios. However, discrepancies in ground motion and travel-time estimates from the current Southern California Earthquake Center (SCEC) velocity model of the Salton Trough highlight inaccuracies in its shallow velocity structure. An improved 3-D velocity model that better defines the shallow basin structure and enables the more accurate location of earthquakes and identification of faults is therefore essential for seismic hazard studies in this area. We used recordings of 126 explosive shots from the 2011 Salton Seismic Imaging Project (SSIP) to SSIP receivers and Southern California Seismic Network (SCSN) stations. A set of 48,105 P-wave travel time picks constituted the highest-quality input to a 3-D tomographic velocity inversion. To improve the ray coverage, we added network-determined first arrivals at SCSN stations from 39,998 recently relocated local earthquakes, selected to a maximum focal depth of 10 km, to develop a detailed 3-D P-wave velocity model for the Coachella Valley with 1-km grid spacing. Our velocity model shows good resolution ( 50 rays/cubic km) down to a minimum depth of 7 km. Depth slices from the velocity model reveal several interesting features. At shallow depths ( 3 km), we observe an elongated trough of low velocity, attributed to sediments, located subparallel to and a few km SW of the SAF, and a general velocity structure that mimics the surface geology of the area. The persistence of the low-velocity sediments to 5-km depth just north of the Salton Sea suggests that the underlying basement surface, shallower to the NW, dips SE, consistent with interpretation from gravity studies (Langenheim et al., 2005). 
On the western side of the Coachella Valley, we detect depth-restricted regions of

  5. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    El-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applicable distribution depends on the nature of the data being analyzed. The present paper analyzes some statistical distributions used in reliability in order to identify the best-fitting distribution. The calculations rely on circuit quantity parameters obtained using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
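
    The distribution-fitting comparison can be reproduced in miniature without Relex: fit an Exponential and a Weibull model to a set of failure times by maximum likelihood and compare log-likelihoods. The data below are synthetic (drawn from a Weibull with shape 2), and the bisection-based Weibull MLE is a generic textbook method, not the program's algorithm:

```python
import math
import random

def fit_exponential(times):
    """Exponential MLE: mean time to failure; returns (mean, log-likelihood)."""
    mean = sum(times) / len(times)
    ll = sum(-math.log(mean) - x / mean for x in times)
    return mean, ll

def fit_weibull(times, lo=0.05, hi=20.0):
    """Weibull MLE: bisection on the shape score equation, closed-form scale."""
    n = len(times)
    logs = [math.log(x) for x in times]
    mean_log = sum(logs) / n

    def score(k):
        # d(log-likelihood)/dk = 0  <=>  score(k) = 0; score increases with k.
        tk = [x ** k for x in times]
        return sum(w * l for w, l in zip(tk, logs)) / sum(tk) - mean_log - 1.0 / k

    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if score(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in times) / n) ** (1.0 / k)
    ll = sum(math.log(k / lam) + (k - 1.0) * math.log(x / lam) - (x / lam) ** k
             for x in times)
    return k, lam, ll

# Synthetic failure times from a Weibull with shape 2, scale 1000 (wear-out).
rng = random.Random(7)
times = [1000.0 * (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(200)]
shape, scale, ll_weibull = fit_weibull(times)
_, ll_exponential = fit_exponential(times)  # exponential = Weibull with shape 1
```

    Because the Exponential is the shape-1 special case of the Weibull, the Weibull log-likelihood can never be worse; model-selection criteria such as AIC penalize the extra parameter.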

  6. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

    Full Text Available To establish a more flexible and accurate reliability model, this work uses reliability modeling and a solving algorithm based on the meta-action chain concept. Instead of estimating the reliability of the whole system only in the standard operating mode, it adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information for each component are integrated into the model to overcome the fixed assumptions applied in traditional modeling. In industrial applications, there may be different operating modes for a multicomponent system. The meta-action chain methodology can estimate the system reliability under different operating modes by modeling the components with a variety of failure sensitivities. This approach has been verified by computing several electromechanical system cases. The results indicate that the process improves the system reliability estimation. It is an effective tool for solving the reliability estimation problem in systems under various operating modes.

  7. Horizontal and Vertical Velocities Derived from the IDS Contribution to ITRF2014, and Comparisons with Geophysical Models

    Science.gov (United States)

    Moreaux, G.; Lemoine, F. G.; Argus, D. F.; Santamaria-Gomez, A.; Willis, P.; Soudarin, L.; Gravelle, M.; Ferrage, P.

    2016-01-01

    In the context of the 2014 realization of the International Terrestrial Reference Frame (ITRF2014), the International DORIS Service (IDS) has delivered to the IERS a set of 1140 weekly SINEX files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. From this set of weekly SINEX files, the IDS Combination Center estimated a cumulative DORIS position and velocity solution to obtain mean horizontal and vertical motion of 160 stations at 71 DORIS sites. The main objective of this study is to validate the velocities of the DORIS sites by comparison with external models or time series. Horizontal velocities are compared with two recent global plate models (GEODVEL 2010 and NNR-MORVEL56). Prior to the comparisons, DORIS horizontal velocities were corrected for Glacial Isostatic Adjustment (GIA) using the ICE-6G (VM5a) model. For more than half of the sites, the DORIS horizontal velocities differ from the global plate models by less than 2-3 mm/yr. For five of the sites (Arequipa, Dionysos/Gavdos, Manila, Santiago) with horizontal velocity differences with respect to these models larger than 10 mm/yr, comparisons with GNSS estimates show the veracity of the DORIS motions. Vertical motions from the DORIS cumulative solution are compared with the vertical velocities derived from the latest GPS cumulative solution over the time span 1995.0-2014.0 from the University of La Rochelle (ULR6) solution at 31 co-located DORIS-GPS sites. These two sets of vertical velocities show a correlation coefficient of 0.83. Vertical differences are larger than 2 mm/yr at 23 percent of the sites. At Thule the disagreement is explained by fine-tuned DORIS discontinuities in line with the mass variations of outlet glaciers. Furthermore, the time evolution of the vertical time series from the DORIS station in Thule shows similar trends to the GRACE equivalent water height.

  8. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... Hardware can be repaired by spare modules, which is not the case for software... Preventive maintenance is very important

  9. Fatigue reliability and effective turbulence models in wind farms

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Frandsen, Sten Tronæs; Tarp-Johansen, N.J.

    2007-01-01

    behind wind turbines can imply a significant reduction in the fatigue lifetime of wind turbines placed in wakes. In this paper the design code model in the wind turbine code IEC 61400-1 (2005) is evaluated from a probabilistic point of view, including the importance of modeling the SN-curve by linear...

  10. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software make it possible to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...

  11. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model entails an even worse combinatorial explosion of node states than the calculation on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition of the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our OBDD_Multicast construction avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing’s OBDD (ordered binary decision diagram)-based algorithm.
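
    For intuition about why compact representations such as OBDDs matter, here is the brute-force alternative: two-terminal reliability computed by enumerating all 2^m link states, which is exact but exponential in the number of links m. The three-link topology and link reliabilities are made-up values; this is a baseline illustration, not the paper's OBDD_Multicast algorithm:

```python
from itertools import product

def two_terminal_reliability(links, s, t):
    """Exact s-t reliability by enumerating every combination of link states.

    links: list of ((u, v), p) pairs, where p is the probability that the
    bidirectional link u-v is operational.  Cost is O(2^m) in the number of
    links m, which is the combinatorial explosion OBDDs are built to tame.
    """
    total = 0.0
    for alive in product([False, True], repeat=len(links)):
        prob = 1.0
        up = []
        for ((u, v), p), a in zip(links, alive):
            prob *= p if a else 1.0 - p
            if a:
                up.append((u, v))
        # Simple reachability search over the surviving links.
        reached, stack = {s}, [s]
        while stack:
            x = stack.pop()
            for u, v in up:
                for a_node, b_node in ((u, v), (v, u)):
                    if a_node == x and b_node not in reached:
                        reached.add(b_node)
                        stack.append(b_node)
        if t in reached:
            total += prob
    return total

# Hypothetical 3-node network: direct s-t link plus a 2-hop path via node a.
links = [(("s", "a"), 0.9), (("a", "t"), 0.9), (("s", "t"), 0.8)]
rel = two_terminal_reliability(links, "s", "t")  # 0.8 + 0.2 * 0.81 = 0.962
```

    The multicast case is harder still: the source must reach every node in a destination set simultaneously, which multiplies the states to consider.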

  12. Kinematic Modeling of Normal Voluntary Mandibular Opening and Closing Velocity-Initial Study.

    Science.gov (United States)

    Gawriołek, Krzysztof; Gawriołek, Maria; Komosa, Marek; Piotrowski, Paweł R; Azer, Shereen S

    2015-06-01

    Voluntary mandibular movement velocity has not been a thoroughly studied parameter of masticatory movement. This study attempted to objectively define the kinematics of mandibular movement based on numerical (digital) analysis of the relations and interactions of velocity diagram records in healthy female individuals. Using a computerized mandibular scanner (K7 Evaluation Software), 72 diagrams of voluntary mandibular velocity movements (36 for opening, 36 for closing) for women with clinically normal motor and functional activities of the masticatory system were recorded. Multiple measurements were analyzed focusing on the curve for maximum velocity records. For each movement, the loop of temporary velocities was determined. The diagram was then entered into AutoCad calculation software where movement analysis was performed. The real maximum velocity values on opening (Vmax), closing (V0), and average velocity values (Vav) as well as movement accelerations (a) were recorded. Additionally, functional (A1-A2) and geometric (P1-P4) analyses of loop constituent phases were performed, and the relations between the obtained areas were defined. Velocity means and correlation coefficient values for various velocity phases were calculated. The Wilcoxon test produced the following maximum and average velocity results: Vmax = 394 ± 102 and Vav = 222 ± 61 mm/s for opening, and Vmax = 409 ± 94 and Vav = 225 ± 55 mm/s for closing. Both mandibular movement range and velocity change showed significant variability, achieving the highest velocity in the P2 phase. Voluntary mandibular velocity presents significant variations between healthy individuals. Maximum velocity is obtained when incisal separation is between 12.8 and 13.5 mm. An improved understanding of the patterns of normal mandibular movements may provide an invaluable diagnostic aid to pathological changes within the masticatory system. © 2014 by the American College of Prosthodontists.

  13. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

    Full Text Available Technical reliability plays an important role among factors affecting the power output of a wind farm. The reliability is determined by an internal collection grid topology and reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A wind farm reliability’s quantitative measure can be the probability distribution of combinations of operating and failed states of the farm’s wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to an external power grid, which means the availability of the wind turbine and other equipment necessary for the power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and the reliability of individual farm components on the farm reliability, and for determining the expected farm output power with consideration of the reliability. This knowledge may be useful in an analysis of power generation reliability in power systems. The paper presents probabilistic models that quantify the wind farm reliability taking into account the above-mentioned technical factors. To formulate the reliability models, Bayesian networks and semi-Markov processes were used. Using Bayesian networks, the wind farm structural reliability was mapped, as well as quantitative characteristics describing equipment reliability. To determine the characteristics, semi-Markov processes were used. The paper presents an example calculation of: (i) probability distribution of the combination of both operating and failed states of four wind turbines included in the wind farm, and (ii) expected wind farm output power with consideration of its reliability.
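
    The paper's quantitative measure — the probability distribution over combinations of operating and failed turbine states, and the resulting expected output — reduces to a small enumeration if, unlike the paper's Bayesian-network and semi-Markov treatment, the turbines are assumed independent. The availabilities and rated powers below are hypothetical:

```python
from itertools import product

# Hypothetical per-turbine availabilities and rated powers [MW].
availability = [0.96, 0.96, 0.93, 0.93]
rated_mw = [2.0, 2.0, 2.0, 2.0]

def state_distribution(avail):
    """Probability of every operating(1)/failed(0) combination of turbines,
    assuming statistically independent turbines."""
    dist = {}
    for states in product([0, 1], repeat=len(avail)):
        p = 1.0
        for a, s in zip(avail, states):
            p *= a if s else 1.0 - a
        dist[states] = p
    return dist

dist = state_distribution(availability)
expected_mw = sum(p * sum(s * r for s, r in zip(states, rated_mw))
                  for states, p in dist.items())  # 7.56 MW for these numbers
```

    The Bayesian-network formulation exists precisely to drop the independence assumption and encode how the collection grid topology couples turbine availabilities.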

  14. Velocity-based movement modeling for individual and population level inference.

    Directory of Open Access Journals (Sweden)

    Ephraim M Hanks

    Full Text Available Understanding animal movement and resource selection provides important information about the ecology of the animal, but an animal's movement and behavior are not typically constant in time. We present a velocity-based approach for modeling animal movement in space and time that allows for temporal heterogeneity in an animal's response to the environment, allows for temporal irregularity in telemetry data, and accounts for the uncertainty in the location information. Population-level inference on movement patterns and resource selection can then be made through cluster analysis of the parameters related to movement and behavior. We illustrate this approach through a study of northern fur seal (Callorhinus ursinus) movement in the Bering Sea, Alaska, USA. Results show sex differentiation, with female northern fur seals exhibiting stronger response to environmental variables.

  15. Cognitive modelling: a basic complement of human reliability analysis

    International Nuclear Information System (INIS)

    Bersini, U.; Cacciabue, P.C.; Mancini, G.

    1988-01-01

    In this paper the issues identified in modelling humans and machines are discussed in the perspective of the consideration of human errors managing complex plants during incidental as well as normal conditions. The dichotomy between the use of a cognitive versus a behaviouristic model approach is discussed and the complementarity aspects rather than the differences of the two methods are identified. A cognitive model based on a hierarchical goal-oriented approach and driven by fuzzy logic methodology is presented as the counterpart to the 'classical' THERP methodology for studying human errors. Such a cognitive model is discussed at length and its fundamental components, i.e. the High Level Decision Making and the Low Level Decision Making models, are reviewed. Finally, the inadequacy of the 'classical' THERP methodology to deal with cognitive errors is discussed on the basis of a simple test case. For the same case the cognitive model is then applied showing the flexibility and adequacy of the model to dynamic configuration with time-dependent failures of components and with consequent need for changing of strategy during the transient itself. (author)

  16. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbations components of a velocity model

    KAUST Repository

    Wu, Zedong

    2017-07-04

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of the standard full-waveform inversion (FWI) by inverting for the background velocity model using the wave-path of a single scattered wavefield to an image. However, current RWI implementations usually neglect the multi-scattered energy, which will cause some artifacts in the image and the update of the background. To improve existing RWI implementations in taking multi-scattered energy into consideration, we split the velocity model into background and perturbation components, integrate them directly in the wave equation, and formulate a new optimization problem for both components. In this case, the perturbed model is no longer a single-scattering model, but includes all scattering. Through introducing a new cheap implementation of scattering angle enrichment, the separation of the background and perturbation components can be implemented efficiently. We optimize both components simultaneously to produce updates to the velocity model that is nonlinear with respect to both the background and the perturbation. The newly introduced perturbation model can absorb the non-smooth update of the background in a more consistent way. We apply the proposed approach on the Marmousi model with data that contain frequencies starting from 5 Hz to show that this method can converge to an accurate velocity starting from a linearly increasing initial velocity. Also, our proposed method works well when applied to a field data set.

  17. Modelling seasonal meltwater forcing of the velocity of land-terminating margins of the Greenland Ice Sheet

    Science.gov (United States)

    Koziol, Conrad P.; Arnold, Neil

    2018-03-01

    Surface runoff at the margin of the Greenland Ice Sheet (GrIS) drains to the ice-sheet bed, leading to enhanced summer ice flow. Ice velocities show a pattern of early summer acceleration followed by mid-summer deceleration due to evolution of the subglacial hydrology system in response to meltwater forcing. Modelling the integrated hydrological-ice dynamics system to reproduce measured velocities at the ice margin remains a key challenge for validating the present understanding of the system and constraining the impact of increasing surface runoff rates on dynamic ice mass loss from the GrIS. Here we show that a multi-component model incorporating supraglacial, subglacial, and ice dynamic components applied to a land-terminating catchment in western Greenland produces modelled velocities which are in reasonable agreement with those observed in GPS records for three melt seasons of varying melt intensities. This provides numerical support for the hypothesis that the subglacial system develops analogously to alpine glaciers and supports recent model formulations capturing the transition between distributed and channelized states. The model shows the growth of efficient conduit-based drainage up-glacier from the ice sheet margin, which develops more extensively, and further inland, as melt intensity increases. This suggests current trends of decadal-timescale slowdown of ice velocities in the ablation zone may continue in the near future. The model results also show a strong scaling between average summer velocities and melt season intensity, particularly in the upper ablation area. Assuming winter velocities are not impacted by channelization, our model suggests an upper bound of a 25 % increase in annual surface velocities as surface melt increases to 4 × present levels.

  18. Construction of a reliable model pyranometer for irradiance ...

    African Journals Online (AJOL)


    2010-03-22

    Mar 22, 2010 ... hour, latitude and cloud cover are the most widely or commonly used ... models in the Nigerian environment include that of Burari and Sambo .... influence the stability of the assembly (reducing its phase ... earth's surface.

  19. Effects of Adaptation on Discrimination of Whisker Deflection Velocity and Angular Direction in a Model of the Barrel Cortex

    Directory of Open Access Journals (Sweden)

    Mainak J. Patel

    2018-06-01

    Full Text Available Two important stimulus features represented within the rodent barrel cortex are velocity and angular direction of whisker deflection. Each cortical barrel receives information from thalamocortical (TC) cells that relay information from a single whisker, and TC input is decoded by barrel regular-spiking (RS) cells through a feedforward inhibitory architecture (with inhibition delivered by cortical fast-spiking or FS cells). TC cells encode deflection velocity through population synchrony, while deflection direction is encoded through the distribution of spike counts across the TC population. Barrel RS cells encode both deflection direction and velocity with spike rate, and are divided into functional domains by direction preference. Following repetitive whisker stimulation, system adaptation causes a weakening of synaptic inputs to RS cells and diminishes RS cell spike responses, though evidence suggests that stimulus discrimination may improve following adaptation. In this work, I construct a model of the TC, FS, and RS cells comprising a single barrel system—the model incorporates realistic synaptic connectivity and dynamics and simulates both angular direction (through the spatial pattern of TC activation) and velocity (through synchrony of the TC population spikes) of a deflection of the primary whisker, and I use the model to examine direction and velocity selectivity of barrel RS cells before and after adaptation. I find that velocity and direction selectivity of individual RS cells (measured over multiple trials) sharpens following adaptation, but stimulus discrimination using a simple linear classifier by the RS population response during a single trial (a more biologically meaningful measure than single cell discrimination over multiple trials) exhibits strikingly different behavior—velocity discrimination is similar both before and after adaptation, while direction classification improves substantially following adaptation. This is the

  20. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a better statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, so the precision of the mixed-distribution reliability model is greatly improved. All of this favours wider use of the Weibull distribution model in engineering applications.
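    As a minimal sketch of the mixed-distribution idea described above, the reliability function of a k-mode mixed Weibull model is a weighted sum of Weibull survival functions, one per failure mode. The weights and shape/scale parameters below are hypothetical illustrations, not values from the paper:

```python
import math

def mixed_weibull_reliability(t, components):
    """Reliability R(t) of a mixed Weibull model: a weighted sum of
    individual Weibull survival functions exp(-(t/eta)^beta), one per
    failure mode. components: list of (weight, beta, eta) tuples whose
    weights sum to 1."""
    return sum(w * math.exp(-(t / eta) ** beta) for w, beta, eta in components)

# Hypothetical two-mode engine example: early failures (beta < 1)
# mixed with wear-out failures (beta > 1).
modes = [(0.3, 0.8, 500.0), (0.7, 2.5, 2000.0)]
print(mixed_weibull_reliability(0.0, modes))     # R(0) = 1 by definition
print(mixed_weibull_reliability(1000.0, modes))  # reliability after 1000 h
```

In a full analysis the weights would be estimated from failure data (e.g. by the dynamic weight coefficient the abstract mentions); here they are fixed constants for illustration.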

  1. Probing dark energy models with extreme pairwise velocities of galaxy clusters from the DEUS-FUR simulations

    Science.gov (United States)

    Bouillot, Vincent R.; Alimi, Jean-Michel; Corasaniti, Pier-Stefano; Rasera, Yann

    2015-06-01

    Observations of colliding galaxy clusters with high relative velocity probe the tail of the halo pairwise velocity distribution, with the potential of providing a powerful test of cosmology. As an example, it has been argued that the discovery of the Bullet Cluster challenges standard Λ cold dark matter (ΛCDM) model predictions. Halo catalogues from N-body simulations have been used to estimate the probability of Bullet-like clusters. However, due to simulation volume effects, previous studies had to rely on a Gaussian extrapolation of the pairwise velocity distribution to high velocities. Here, we perform a detailed analysis using the halo catalogues from the Dark Energy Universe Simulation Full Universe Runs (DEUS-FUR), which enables us to resolve the high-velocity tail of the distribution and study its dependence on the halo mass definition, redshift and cosmology. Building upon these results, we estimate the probability of Bullet-like systems in the framework of Extreme Value Statistics. We show that the tail of extreme pairwise velocities significantly deviates from that of a Gaussian; moreover, it carries an imprint of the underlying cosmology. We find the Bullet Cluster probability to be two orders of magnitude larger than previous estimates, thus easing the tension with the ΛCDM model. Finally, the comparison of the inferred probabilities for the different DEUS-FUR cosmologies suggests that observations of extreme interacting clusters can provide constraints on dark energy models complementary to standard cosmological tests.
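    Extreme Value Statistics of the kind invoked above is typically phrased in terms of the Generalized Extreme Value (GEV) family. As a minimal sketch, the exceedance probability of a maximum pairwise velocity under a GEV law can be written as follows; the location, scale and shape parameters used in the example are illustrative assumptions, not DEUS-FUR fits:

```python
import math

def gev_exceedance(v, mu, sigma, xi):
    """P(V_max > v) for a Generalized Extreme Value distribution with
    location mu, scale sigma > 0 and shape xi (xi = 0 is the Gumbel
    limit). A positive xi gives a heavier-than-Gumbel tail."""
    if abs(xi) < 1e-12:  # Gumbel case
        return 1.0 - math.exp(-math.exp(-(v - mu) / sigma))
    s = 1.0 + xi * (v - mu) / sigma
    if s <= 0.0:
        # outside the distribution's support:
        # above the upper endpoint (xi < 0) or below the lower endpoint (xi > 0)
        return 0.0 if xi < 0 else 1.0
    return 1.0 - math.exp(-s ** (-1.0 / xi))

# Illustrative comparison: a heavy tail (xi > 0) assigns far more
# probability to extreme relative velocities than the Gumbel tail.
print(gev_exceedance(3.0, 0.0, 1.0, 0.0))  # Gumbel tail
print(gev_exceedance(3.0, 0.0, 1.0, 0.3))  # heavier tail
```

The qualitative point matches the abstract: assuming a Gaussian-like (light) tail can underestimate the probability of Bullet-like systems by orders of magnitude.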

  2. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. The book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion of failure rate in software reliability growth models.

  3. Charge transport models for reliability engineering of semiconductor devices

    International Nuclear Information System (INIS)

    Bina, M.

    2014-01-01

    The simulation of semiconductor devices is important for the assessment of device lifetimes before production. In this context, this work investigates the influence of the charge carrier transport model on the accuracy of bias temperature instability and hot-carrier degradation models in MOS devices. For this purpose, a four-state defect model based on non-radiative multi-phonon (NMP) theory is implemented to study the bias temperature instability. However, the doping concentrations typically used in nano-scale devices correspond to only a small number of dopants in the channel, leading to fluctuations of the electrostatic potential. Thus, the granularity of the doping cannot be ignored in these devices. To study the bias temperature instability in the presence of fluctuations of the electrostatic potential, the advanced drift-diffusion device simulator Minimos-NT is employed. In a first effort to understand the bias temperature instability in p-channel MOSFETs at elevated temperatures, data from direct-current current-voltage measurements is successfully reproduced using a four-state defect model. Differences between the four-state defect model and the commonly employed trapping model of Shockley, Read and Hall (SRH) have been investigated, showing that the SRH model is incapable of reproducing the measurement data. This is in good agreement with the literature, where it has been extensively shown that a model based on SRH theory cannot reproduce the characteristic time constants found in BTI recovery traces. Upon inspection of recovery traces recorded after bias temperature stress in n-channel MOSFETs, it is found that the gate current is strongly correlated with the drain current (recovery trace). Using a random discrete dopant model and non-equilibrium Green's functions, it is shown that direct tunnelling cannot explain the magnitude of the gate current reduction. Instead, it is found that trap-assisted tunnelling, modelled using NMP theory, is the cause of this reduction.

  4. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to track changes in unavailability as diverse factors vary. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT's contribution to reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.

  5. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  6. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    … configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between … was applied for the estimation of the system failure functions. It is desired to compare the results with the true system failure function, which it is possible to estimate using simulation techniques. Theoretical model development should be pursued in further research. One direction for it might be modeling the system based on Sequential Order Statistics, by considering the failure of the minimum (weakest) component at each loading level. The proposed idea of representing the system by independent components could also be used for modeling reliability by Sequential Order Statistics.

  7. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  8. Microstructural Modeling of Brittle Materials for Enhanced Performance and Reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Teague, Melissa Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Rodgers, Theron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Grutzik, Scott Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2017-08-01

    Brittle failure is often influenced by difficult-to-measure and variable microstructure-scale stresses. Recent advances in photoluminescence spectroscopy (PLS), including improved confocal laser measurement and rapid spectroscopic data collection, have established the potential to map stresses with microscale spatial resolution (<2 microns). Advanced PLS was successfully used to investigate both residual and externally applied stresses in polycrystalline alumina at the microstructure scale. The measured average stresses matched those estimated from beam theory to within one standard deviation, validating the technique. Modeling the residual stresses within the microstructure produced general agreement with the experimentally measured results. Microstructure-scale modeling is primed to take advantage of advanced PLS to enable its refinement and validation, eventually enabling microstructure modeling to become a predictive tool for brittle materials.

  9. Modeling human intention formation for human reliability assessment

    International Nuclear Information System (INIS)

    Woods, D.D.; Roth, E.M.; Pople, H. Jr.

    1988-01-01

    This paper describes a dynamic simulation capability for modeling how people form intentions to act in nuclear power plant emergency situations. This modeling tool, the Cognitive Environment Simulation (CES), was developed using techniques from artificial intelligence. It simulates the cognitive processes that determine situation assessment and intention formation. It can be used to investigate analytically what situations and factors lead to intention failures, what actions follow from intention failures (e.g. errors of omission, errors of commission, common-mode errors), the ability to recover from errors or additional machine failures, and the effects of changes in the NPP person-machine system. One application of the CES modeling environment is to enhance the measurement of the human contribution to risk in probabilistic risk assessment studies. (author)

  10. Modelling Reliability of Supply and Infrastructural Dependency in Energy Distribution Systems

    OpenAIRE

    Helseth, Arild

    2008-01-01

    This thesis presents methods and models for assessing reliability of supply and infrastructural dependency in energy distribution systems with multiple energy carriers. The three energy carriers of electric power, natural gas and district heating are considered. Models and methods for assessing reliability of supply in electric power systems are well documented, frequently applied in the industry and continuously being subject to research and improvement. On the contrary, there are compar...

  11. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is demanded of waste management facilities within the fuel cycle of nuclear power stations; it can be achieved by providing intermediate storage facilities and reserve capacities. This report describes a model, based on the theory of Markov processes, that allows the computation of reliability characteristics of waste management facilities containing intermediate storage facilities. The application of the model is demonstrated by an example. (orig.) [de]
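    The simplest instance of the Markov-process approach described above is a single repairable unit with constant failure and repair rates, whose point availability has a closed form. The sketch below uses hypothetical rates; the paper's model of storages and reserve capacities has many more states than this:

```python
import math

def availability(t, lam, mu):
    """Point availability A(t) of a two-state Markov repairable unit with
    constant failure rate lam and repair rate mu:
        A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t).
    A(t) decays from 1 at t=0 toward the steady-state value mu/(lam+mu)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# Hypothetical rates: one failure per 1000 h, repairs at 0.1 per hour.
lam, mu = 1e-3, 0.1
print(availability(0.0, lam, mu))     # 1.0: the unit starts up
print(availability(100.0, lam, mu))   # close to the steady-state value
```

Intermediate storages enter such models as additional states that let downstream processing continue while a facility is under repair; those states are omitted in this two-state sketch.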

  12. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  13. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model, which is based on the stress-crack opening displacement relationship and describes the behaviour of randomly oriented steel-fibre composites subjected to uniaxial tension, has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

  14. Multi-state reliability for coolant pump based on dependent competitive failure model

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2013-01-01

    By taking into account the effect of degradation due to internal vibration and external shocks, and based on the service environment and degradation mechanism of the nuclear power plant coolant pump, a multi-state reliability model of the coolant pump is proposed for a system that involves a competitive failure process between shocks and degradation. Using this model, the degradation-state probabilities and system reliability are obtained for the degraded coolant pump under consideration of internal vibration and external shocks. This provides an effective method for the reliability analysis of coolant pumps in nuclear power plants based on the operating environment. The results can provide a decision-making basis for design changes and maintenance optimization. (authors)

  15. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Fuzziness and randomness are integrated by using digital characteristics such as expected value, entropy, and hyper-entropy. A cloud model adapted to reliability evaluation is put forward for the surface-to-air missile weapon. A cloud scale for the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. A practical calculation shows that analysing the reliability of the surface-to-air missile weapon in this way is more effective; it also shows that the model expressed by cloud theory is more consistent with the human style of reasoning under uncertainty.
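    The three digital characteristics named above (expected value Ex, entropy En, hyper-entropy He) drive the standard forward normal cloud generator, which produces "cloud drops" mixing fuzziness and randomness. A minimal sketch follows; the parameter values are illustrative, and the paper's evaluation scales are not reproduced:

```python
import math
import random

def forward_cloud(ex, en, he, n):
    """Forward normal cloud generator: produce n cloud drops (x, membership)
    from expected value ex, entropy en and hyper-entropy he. Each drop
    samples a perturbed entropy en' ~ N(en, he^2), a value x ~ N(ex, en'^2),
    and a membership degree exp(-(x-ex)^2 / (2*en'^2))."""
    drops = []
    for _ in range(n):
        en_prime = random.gauss(en, he)
        x = random.gauss(ex, abs(en_prime))
        if en_prime == 0.0:
            mu = 1.0  # degenerate drop sits exactly at ex
        else:
            mu = math.exp(-(x - ex) ** 2 / (2.0 * en_prime ** 2))
        drops.append((x, mu))
    return drops

# Illustrative qualitative grade "high reliability" on a 0-1 scale.
random.seed(1)
drops = forward_cloud(ex=0.9, en=0.03, he=0.005, n=1000)
```

Hyper-entropy He controls the thickness of the cloud: He = 0 reduces the generator to a plain normal membership function, while larger He expresses greater uncertainty about the entropy itself.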

  16. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a combined software-hardware system also depends on the rules, i.e., the procedures, that govern its use. If a procedure or program can be reliably characterized by involving the concepts of graphs, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by rules of use issued by the relevant agencies. The enumeration model is obtained from a software reliability calculation.

  17. Scalar and joint velocity-scalar PDF modelling of near-wall turbulent heat transfer

    International Nuclear Information System (INIS)

    Pozorski, Jacek; Waclawczyk, Marta; Minier, Jean-Pierre

    2004-01-01

    The temperature field in a heated turbulent flow is considered as a dynamically passive scalar. The probability density function (PDF) method with integration down to the wall is explored, and new modelling proposals are put forward, including an explicit account of the molecular transport terms. Two variants of the approach are considered: first, the scalar PDF method with the use of externally provided turbulence statistics; and second, the joint (stand-alone) velocity-scalar PDF method, where a near-wall model for the dynamical variables is coupled with a model for temperature. The closure proposals are formulated in the Lagrangian setting, and the resulting stochastic evolution equations are solved with a Monte Carlo method. The near-wall region of a heated channel flow is taken as a validation case; the second-order thermal statistics are of particular interest. The PDF computation results agree reasonably with available DNS data. The sensitivity of the results to the molecular Prandtl number and to the thermal wall boundary condition is also examined.

  18. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, the resulting reliability figures can be subjective under a particular set of circumstances, and therefore are not easy to quantify. Among the reliability prediction methods are statistical-analysis-based methods, similarity analysis methods based on an external failure rate database, and methods based on physics-of-failure models. In this study, we developed a system by which the reliability of electronic components can be predicted, implementing the statistical-analysis approach as the one most easily applied. The failure rate models applied are MIL-HDBK-217F Notice 2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Since it can predict the reliability of electronic components from the design stage, the system we have developed is expected to contribute to enhancing the reliability of electronic components.

  19. Site-response Estimation by 1D Heterogeneous Velocity Model using Borehole Log and its Relationship to Damping Factor

    International Nuclear Information System (INIS)

    Sato, Hiroaki

    2014-01-01

    In the Niigata area, which has suffered several large earthquakes such as the 2007 Chuetsu-oki earthquake, geophysical observation that elucidates the S-wave structure of the subsurface is advancing. Modeling of the subsurface S-wave velocity structure is underway to enable simulation of long-period ground motion. A one-dimensional velocity model obtained by inverse analysis of microtremors is sufficiently appropriate for the long-period site response, but not for the short-period response, which is important for ground-motion evaluation at NPP sites. The high-frequency site response may be controlled by the strength of heterogeneity of the underground structure, because the heterogeneity of the 1D model plays an important role in estimating high-frequency site responses and is strongly related to the damping factor of the 1D layered velocity model. (author)

  20. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  1. Velocity Models of the Upper Mantle Beneath the MER, Somali Platform, and Ethiopian Highlands from Body Wave Tomography

    Science.gov (United States)

    Hariharan, A.; Keranen, K. M.; Alemayehu, S.; Ayele, A.; Bastow, I. D.; Eilon, Z.

    2016-12-01

    The Main Ethiopian Rift (MER) presents a unique opportunity to improve our understanding of an active continental rift. Here we use body wave tomography to generate compressional and shear wave velocity models of the region beneath the rift. The models help us understand the rifting process over the broader region around the MER, extending the geographic coverage beyond that of past studies. We use differential arrival times of body waves from teleseismic earthquakes and multi-channel cross-correlation to generate travel-time residuals relative to the global IASP91 1-D velocity model. The events used for the tomographic velocity model include 200 teleseismic earthquakes with moment magnitudes greater than 5.5 from our recent 2014-2016 deployment, in combination with 200 earthquakes from the earlier EBSE and EAGLE deployments (Bastow et al. 2008). We use the finite-frequency tomography analysis of Schmandt et al. (2010), which applies a first-Fresnel-zone paraxial approximation to the Born theoretical kernel, with spatial smoothing and model-norm damping in an iterative LSQR algorithm. Results show a broad, slow region beneath the rift with a distinct low-velocity anomaly beneath the northwest shoulder. This robust and well-resolved low-velocity anomaly is visible over a range of depths beneath the Ethiopian plateau, within the footprint of the Oligocene flood basalts and near surface expressions of diking. We interpret this anomaly as a possible plume conduit, or a low-velocity finger rising from a deeper, larger plume. Within the rift, results are consistent with previous work, exhibiting rift segmentation and low velocities beneath the rift valley.

  2. Quantification of Wave Model Uncertainties Used for Probabilistic Reliability Assessments of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2015-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on the determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, this paper presents how the quantified uncertainties can be implemented in probabilistic reliability assessments.
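    The three validation measures named above have standard definitions; a minimal sketch follows, with purely hypothetical model/observation pairs standing in for a real significant-wave-height validation set:

```python
import math

def wave_model_stats(model, obs):
    """Bias, root-mean-square error (RMSE) and scatter index (SI) of
    modelled vs observed values (e.g. significant wave height Hs).
    SI is the RMSE normalised by the mean of the observations."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    si = rmse / (sum(obs) / n)
    return bias, rmse, si

# Hypothetical Hs values in metres: model output vs buoy observations.
modelled = [1.9, 2.4, 3.1, 1.5]
observed = [2.0, 2.2, 3.0, 1.6]
print(wave_model_stats(modelled, observed))
```

In a reliability assessment these statistics would typically be turned into a model-uncertainty random variable (e.g. a multiplicative factor with the given bias and scatter) applied to the predicted wave loads.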

  3. Determination of Wave Model Uncertainties used for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on the determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, it is shown how the estimated uncertainties can be implemented in probabilistic reliability assessments.

  4. On new cautious structural reliability models in the framework of imprecise probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev; Kozine, Igor

    2010-01-01

    New imprecise structural reliability models are described in this paper. They are developed based on imprecise Bayesian inference and are the imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability measures when the number of events of interest or observations is very small. The main feature of the models is that prior ignorance is not modelled by a fixed single prior distribution, but by a class of priors defined by upper and lower probabilities that can converge as statistical data accumulate.

  5. A Structural Reliability Business Process Modelling with System Dynamics Simulation

    OpenAIRE

    Lam, C. Y.; Chan, S. L.; Ip, W. H.

    2010-01-01

    Business activity flow analysis enables organizations to manage structured business processes, and can thus help them to improve performance. The six types of business activities identified here (i.e., SOA, SEA, MEA, SPA, MSA and FIA) are correlated and interact with one another, and the decisions from any business activity form feedback loops with previous and succeeding activities, thus allowing the business process to be modelled and simulated. For instance, for any company that is eager t...

  6. P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model

    Directory of Open Access Journals (Sweden)

    D. Draebing

    2012-10-01

    P-wave refraction seismics is a key method in permafrost research, but its applicability to the low-porosity rocks that constitute alpine rock walls has been denied in prior studies. These studies explain p-wave velocity changes in freezing rock exclusively by the changing velocities of the pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but that freezing in the confined space of low-porosity bedrock also alters the physical properties of the rock matrix. In the laboratory, we measured p-wave velocities of 22 decimetre-large low-porosity (<10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (>100 micro-fissures), from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, p-wave velocity increases by 11–166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11–200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted, and ice pressure will increase the matrix velocity and decrease anisotropy, while the changing velocities of the pore infill are insignificant. Here, we present a modified Timur two-phase equation implementing changes in matrix velocity dependent on lithology, and demonstrate the general applicability of refraction seismics to differentiate frozen and unfrozen low-porosity bedrock.
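    The time-average family of equations that Timur's relation builds on combines matrix and pore-infill slownesses in proportion to porosity. A minimal two-phase sketch follows; the velocities and porosity are illustrative values, and the paper's modification (a lithology-dependent, freezing-induced change in matrix velocity) is deliberately not reproduced here:

```python
def time_average_velocity(phi, v_matrix, v_pore):
    """Two-phase time-average (Wyllie-type) p-wave velocity:
        1/V = (1 - phi)/v_matrix + phi/v_pore,
    i.e. travel time is split between rock matrix and pore infill in
    proportion to their volume fractions."""
    return 1.0 / ((1.0 - phi) / v_matrix + phi / v_pore)

# Illustrative low-porosity rock (phi = 5%, matrix 5500 m/s):
# swapping water (~1500 m/s) for ice (~3500 m/s) in the pores changes
# the bulk velocity only slightly, which is the classical argument that
# pore-infill mixing alone cannot explain the large observed increases.
v_unfrozen = time_average_velocity(0.05, 5500.0, 1500.0)
v_frozen = time_average_velocity(0.05, 5500.0, 3500.0)
print(v_unfrozen, v_frozen)
```

The paper's point is precisely that this pore-infill-only effect is too small in low-porosity rock, so v_matrix itself must be allowed to increase on freezing.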

  7. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    Science.gov (United States)

    Konishi, C.

    2014-12-01

A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). The well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and considers the rest of the volume to be that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of an additional large particle: only a few percent of gravel particles increases the permeability of a sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of considering a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation.
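
The final step of the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's calibrated model: the Kozeny-Carman form below is the common textbook version, the harmonic-mean effective diameter is one conventional choice, and all sample fractions, grain sizes, and the 25% effective porosity are invented values.

```python
def kozeny_carman(phi_eff, d_eff):
    """Permeability (m^2) from effective porosity and effective grain
    diameter (m), using the common Kozeny-Carman form
    k = d^2 * phi^3 / (180 * (1 - phi)^2)."""
    return d_eff ** 2 * phi_eff ** 3 / (180.0 * (1.0 - phi_eff) ** 2)

def effective_grain_size(fractions, sizes):
    """Volume-fraction-weighted harmonic mean of the component grain
    sizes: one common choice of effective diameter for mixtures."""
    return sum(fractions) / sum(f / d for f, d in zip(fractions, sizes))

# Illustrative three-component sample: gravel, sand, clay.
fractions = [0.05, 0.80, 0.15]   # volume fractions (sum to 1)
sizes = [10e-3, 0.5e-3, 2e-6]    # representative grain diameters (m)
d_eff = effective_grain_size(fractions, sizes)
k = kozeny_carman(0.25, d_eff)   # assuming 25% effective porosity
```

Because the harmonic mean is dominated by the finest component, even a small clay fraction drags the effective diameter, and hence the predicted permeability, down sharply.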

  8. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    Science.gov (United States)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. The experiments were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additions of up to 6% of the fuel mass. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase, presents the dependences of the turbulent flame-front propagation velocity in the second combustion phase on the mixture composition and operating modes, and shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  9. Effect of Low Co-flow Air Velocity on Hydrogen-air Non-premixed Turbulent Flame Model

    Directory of Open Access Journals (Sweden)

    Noor Mohsin Jasim

    2017-08-01

Full Text Available The aim of this paper is to provide information concerning the effect of low co-flow velocity on the turbulent diffusion flame in a simple type of combustor; numerically simulated cases of a turbulent hydrogen-air diffusion flame are performed. The combustion model used in this investigation is based on chemical equilibrium and kinetics to simplify the complexity of the chemical mechanism. The effects of increased co-flowing air velocity on temperature, velocity components (axial and radial), and reactants have been investigated and examined numerically. Numerical results for temperature are compared with the experimental data, and the comparison shows good agreement. All numerical simulations were performed using the commercial Computational Fluid Dynamics (CFD) code FLUENT. A comparison among the various co-flow air velocities and their effects on flame behavior and temperature fields is presented.

  10. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
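
The acceleration-function part of such a model can be illustrated with the Prokopowicz-Vasko-style voltage/temperature law that is commonly used for MLCC life testing; the abstract does not state which acceleration function the author adopts, so this form, the voltage exponent n, and the activation energy are illustrative assumptions only.

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K

def acceleration_factor(v_use, v_test, t_use, t_test, n=3.0, ea=1.1):
    """Prokopowicz-Vasko-style acceleration factor: life at use
    conditions divided by life at test conditions. Temperatures are in
    kelvin; the voltage exponent n and activation energy ea (eV) are
    illustrative values, not measured ones."""
    voltage_term = (v_test / v_use) ** n
    thermal_term = math.exp(ea / K_B * (1.0 / t_use - 1.0 / t_test))
    return voltage_term * thermal_term

# Scale a highly accelerated test (50 V, 125 C) back to use (25 V, 85 C).
af = acceleration_factor(v_use=25.0, v_test=50.0, t_use=358.15, t_test=398.15)
```

A test failure time multiplied by this factor gives an estimated life at use conditions, which is then fed into the statistical distribution part of the model.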

  11. Modelling and Simulation of Tensile Fracture in High Velocity Compacted Metal Powder

    International Nuclear Information System (INIS)

    Jonsen, P.; Haeggblad, H.-A.

    2007-01-01

In cold uniaxial powder compaction, powder is formed into a desired shape with rigid tools and a die. After pressing, but before sintering, the compacted powder is called a green body. A critical property in the metal powder pressing process is the mechanical behaviour of the green body. Beyond a green body free from defects, the desired properties are high strength and uniform density. High velocity compaction (HVC) using a hydraulically operated hammer is a production method that forms powder utilizing a shock wave. Pre-alloyed water-atomised iron powder has been HVC-formed into circular discs with high densities. The diametral compression test, also called the Brazilian disc test, is an established method to measure tensile strength in low-strength materials such as rock, concrete, polymers and ceramics. During the test a thin disc is compressed across the diameter to failure. The compression induces a tensile stress perpendicular to the compressed diameter. In this study the test has been used to study crack initiation and the tensile fracture process of HVC-formed metal powder discs with a relative density of 99%. A fictitious crack model controlled by a stress versus crack-width relationship is utilized to model green body cracking. Tensile strength is used as a failure condition and limits the stress in the fracture interface. The softening rate of the model is obtained from the corresponding rate of the dissipated energy. The deformation of the powder material is modelled with an elastic-plastic Cap model. The characteristics of the tensile fracture development of the central crack in a diametrically loaded specimen are numerically studied with a three-dimensional finite element simulation. Results from the finite element simulation of the diametral compression test show that it is possible to simulate fracturing of HVC-formed powder, and they agree reasonably with experiments.

  12. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    Science.gov (United States)

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

The purpose of this study was to analyze whether different combinations of trials, as well as different mathematical models, can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV, the aerobic parameter) and anaerobic paddling capacity (APC, the anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; Non-Linear = time-velocity). Linear 1 was chosen for the comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (R² range = 0.96-1.00). Repeated measures ANOVA revealed differences among the mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as for R² (p = 0.033). Estimates obtained from the first (150 m, shortest) and fourth (600 m, longest) predictive trials were similar to, and highly correlated with, the SC (r = 0.98 for CV and r = 0.96 for APC). In summary, methodological aspects must be considered in the application of critical velocity in canoe slalom, since different combinations of trials as well as different mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied in canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best fits of regression; furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one for calculating the estimates from the critical velocity protocol. Considering this, the abyss between science…
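
The Linear 1 (distance-time) model described above fits distance = APC + CV × time, so the slope of an ordinary least-squares line is the critical velocity and the intercept is the anaerobic paddling capacity. A minimal sketch follows; the trial times are hypothetical athlete data, not values from the study.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# The four predictive trials and hypothetical completion times (s).
distances = [150.0, 300.0, 450.0, 600.0]   # m
times = [44.0, 92.0, 142.0, 193.0]         # s (illustrative)

apc, cv = linear_fit(times, distances)     # d = APC + CV * t
# cv  -> critical velocity (m/s), the aerobic parameter
# apc -> anaerobic paddling capacity (m), the intercept
```

Dropping trials from the fit (e.g. using only the 150 m and 600 m times) changes both estimates, which is exactly the methodological sensitivity the study quantifies.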

  13. Application of one-dimensional model to calculate water velocity distributions over elastic elements simulating Canadian waterweed plants (Elodea Canadensis)

    Science.gov (United States)

    Kubrak, Elżbieta; Kubrak, Janusz; Rowiński, Paweł

    2013-02-01

A one-dimensional model for vertical profiles of longitudinal velocity in open-channel flow is verified against laboratory data obtained in an open channel with artificial plants. The plants simulate Canadian waterweed, which in nature usually forms dense stands that reach all the way to the water surface. The model works particularly well for densely spaced plants.

  14. Tracking reliability for space cabin-borne equipment in development by Crow model.

    Science.gov (United States)

    Chen, J D; Jiao, S J; Sun, H L

    2001-12-01

Objective. To study and track the reliability growth of manned-spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, composed of the Crow model and the test data conversion (TDC) method, was used. Result. The estimated and predicted reliability growth values conformed to expectations. Conclusion. The method can dynamically estimate and predict the reliability of the equipment by making full use of the various test information generated in the course of development. It offers not only a possibility of tracking equipment reliability growth, but also a reference for quality control in the design and development process of manned-spaceflight cabin-borne equipment.
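
The Crow (AMSAA) model tracks reliability growth by fitting cumulative failures as N(t) = λt^β, with β < 1 indicating growth. A sketch of the standard time-truncated maximum-likelihood estimates follows; the failure times are invented to show lengthening gaps between failures, and the TDC step of the paper is not reproduced.

```python
import math

def crow_amsaa_mle(failure_times, total_time):
    """Time-truncated MLEs of the Crow (AMSAA) model N(t) = lam * t**beta:
    beta_hat = n / sum(ln(T / t_i)), lam_hat = n / T**beta_hat."""
    n = len(failure_times)
    beta = n / sum(math.log(total_time / t) for t in failure_times)
    lam = n / total_time ** beta
    return lam, beta

def instantaneous_mtbf(lam, beta, t):
    """Current MTBF 1 / (lam * beta * t**(beta - 1)); it grows with
    test time whenever beta < 1."""
    return 1.0 / (lam * beta * t ** (beta - 1.0))

# Hypothetical failure times (h) during a 400 h development test.
times = [8.0, 30.0, 70.0, 140.0, 260.0]
lam, beta = crow_amsaa_mle(times, total_time=400.0)
```

Here β < 1, so the estimated instantaneous MTBF at the end of the test exceeds its value earlier in the programme, which is the "tracking" signal the abstract refers to.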

  15. Two-phase modeling of DDT: Structure of the velocity-relaxation zone

    International Nuclear Information System (INIS)

    Kapila, A.K.; Son, S.F.; Bdzil, J.B.; Menikoff, R.; Stewart, D.S.

    1997-01-01

    The structure of the velocity relaxation zone in a hyperbolic, nonconservative, two-phase model is examined in the limit of large drag, and in the context of the problem of deflagration-to-detonation transition in a granular explosive. The primary motivation for the study is the desire to relate the end states across the relaxation zone, which can then be treated as a discontinuity in a reduced, equivelocity model, that is computationally more efficient than its parent. In contrast to a conservative system, where end states across thin zones of rapid variation are determined principally by algebraic statements of conservation, the nonconservative character of the present system requires an explicit consideration of the structure. Starting with the minimum admissible wave speed, the structure is mapped out as the wave speed increases. Several critical wave speeds corresponding to changes in the structure are identified. The archetypal structure is partly dispersed, monotonic, and involves conventional hydrodynamic shocks in one or both phases. The picture is reminiscent of, but more complex than, what is observed in such (simpler) two-phase media as a dusty gas. copyright 1997 American Institute of Physics

  16. A fast iterative model for discrete velocity calculations on triangular grids

    International Nuclear Information System (INIS)

    Szalmas, Lajos; Valougeorgis, Dimitris

    2010-01-01

A fast synthetic-type iterative model is proposed to speed up the slow convergence of discrete velocity algorithms for solving linear kinetic equations on triangular lattices. The efficiency of the scheme is verified both theoretically, by a discrete Fourier stability analysis, and computationally, by solving a rarefied gas flow problem. The stability analysis of the discrete kinetic equations yields the spectral radii of the typical and the proposed iterative algorithms and reveals the drastically improved performance of the latter for any grid resolution. This is the first time that a stability analysis of the full discrete kinetic equations related to rarefied gas theory has been formulated, providing the detailed dependency of the iteration scheme on the discretization parameters in phase space. The corresponding characteristics of the model, deduced by numerically solving the rarefied gas flow through a duct with triangular cross section, are in complete agreement with the theoretical findings. The proposed approach may open a way for fast computation of rarefied gas flows in complex geometries over the whole range of gas rarefaction, including the hydrodynamic regime.
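
The spectral radius mentioned above governs the asymptotic convergence rate of any stationary iteration: the smaller it is, the faster the error decays. As a generic illustration (not the paper's Fourier analysis of the kinetic scheme), it can be estimated numerically by power iteration on the iteration matrix; the 2×2 Jacobi-style matrix below is a toy example.

```python
def spectral_radius(A, iters=200):
    """Estimate the spectral radius (largest |eigenvalue|) of a small
    square matrix, given as nested lists, by power iteration."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)
        if lam == 0.0:
            return 0.0
        x = [v / lam for v in y]   # renormalise to avoid over/underflow
    return lam

# Jacobi iteration matrix of a simple diagonally dominant 2x2 system:
# rho < 1 means the iteration converges; smaller rho converges faster.
M = [[0.0, 0.25], [0.25, 0.0]]
rho = spectral_radius(M)   # -> 0.25
```

Comparing the spectral radii of two competing schemes in this way mirrors, in miniature, how the paper demonstrates the superiority of its synthetic iteration.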

  17. A P-wave velocity model of the upper crust of the Sannio region (Southern Apennines, Italy

    Directory of Open Access Journals (Sweden)

    M. Cocco

    1998-06-01

Full Text Available This paper describes the results of a seismic refraction profile conducted in October 1992 in the Sannio region, Southern Italy, to obtain a detailed P-wave velocity model of the upper crust. The profile, 75 km long, extended parallel to the Apenninic chain in a region frequently damaged in historical times by strong earthquakes. Six shots were fired at five sites and recorded by a number of seismic stations ranging from 41 to 71, with a spacing of 1-2 km along the recording line. We used a two-dimensional ray-tracing technique to model travel times and amplitudes of first and second arrivals. The obtained P-wave velocity model has a shallow structure with strong lateral variations in the southern portion of the profile. Near-surface sediments of Tertiary age are characterized by seismic velocities in the 3.0-4.1 km/s range. In the northern part of the profile these deposits overlie a layer with a velocity of 4.8 km/s that has been interpreted as a Mesozoic sedimentary succession. A high-velocity body, corresponding to the limestones of the Western Carbonate Platform with a velocity of 6 km/s, characterizes the southernmost part of the profile at shallow depths. At a depth of about 4 km the model becomes laterally homogeneous, showing a continuous layer with a thickness in the 3-4 km range and a velocity of 6 km/s corresponding to the Meso-Cenozoic limestone succession of the Apulia Carbonate Platform. This platform appears to be layered, as indicated by an increase in seismic velocity from 6 to 6.7 km/s at depths in the 6-8 km range, which has been interpreted as a lithological transition from limestones to Triassic dolomites and anhydrites of the Burano formation. A lower P-wave velocity of about 5.0-5.5 km/s is hypothesized at the bottom of the Apulia Platform at depths ranging from 10 km down to 12.5 km; these low velocities could be related to Permo-Triassic siliciclastic deposits of the Verrucano sequence drilled at the bottom of the Apulia…

  18. Modeling Manufacturing Impacts on Aging and Reliability of Polyurethane Foams

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R.; Roberts, Christine Cardinal; Mondy, Lisa Ann; Soehnel, Melissa Marie; Johnson, Kyle; Lorenzo, Henry T.

    2016-10-01

    Polyurethane is a complex multiphase material that evolves from a viscous liquid to a system of percolating bubbles, which are created via a CO2 generating reaction. The continuous phase polymerizes to a solid during the foaming process generating heat. Foams introduced into a mold increase their volume up to tenfold, and the dynamics of the expansion process may lead to voids and will produce gradients in density and degree of polymerization. These inhomogeneities can lead to structural stability issues upon aging. For instance, structural components in weapon systems have been shown to change shape as they age depending on their molding history, which can threaten critical tolerances. The purpose of this project is to develop a Cradle-to-Grave multiphysics model, which allows us to predict the material properties of foam from its birth through aging in the stockpile, where its dimensional stability is important.

  19. Assessing Reliability of Cellulose Hydrolysis Models to Support Biofuel Process Design – Identifiability and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Meyer, Anne S.; Gernaey, Krist

    2010-01-01

The reliability of cellulose hydrolysis models is studied using the NREL model. An identifiability analysis revealed that only 6 out of 26 parameters are identifiable from the available data (typical hydrolysis experiments). Attempting to identify a higher number of parameters (as done in the ori…

  20. Determination of anisotropic velocity model by reflection tomography of compression and shear modes; Determination de modele de vitesse anisotrope par tomographie de reflexion des modes de compression et de cisaillement

    Energy Technology Data Exchange (ETDEWEB)

    Stopin, A.

    2001-12-01

As with the jump from 2D to 3D, seismic exploration is undergoing a new revolution through the use of converted PS waves. Indeed, PS converted waves are proving their potential as a tool for imaging through gas, lithology discrimination, structural confirmation, and more. Nevertheless, processing converted shear data, and in particular determining accurate P and S velocity models for depth imaging of these data, is still a challenging problem, especially when the subsurface is anisotropic. To solve this velocity model determination problem we propose to use reflection travel-time tomography. In a first step, we derive a new approximation of the exact phase velocity equation of the SV wave in anisotropic (TI) media. This new approximation is valid for non-weak anisotropy and is mathematically simpler to handle than the exact equation. Then, starting from an isotropic reflection tomography tool developed at Lt-'P, we extend the isotropic bending ray-tracing method to the anisotropic case and implement the quantities necessary for the determination of the anisotropy parameters from the travel-time data. Using synthetic data we then study the influence of the different anisotropy parameters on the travel times. From this analysis we propose a methodology to determine a complete anisotropic subsurface model (P and S layer velocities, interface geometries, anisotropy parameters). Finally, on a real data set from the Gulf of Mexico we demonstrate that this new anisotropic reflection tomography tool allows us to obtain a reliable subsurface model yielding kinematically correct and mutually coherent PP and PS depth images; such a result could not be obtained with an isotropic velocity model. Similar results are obtained on a North Sea data set. (author)
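
The thesis's own non-weak approximation is not given in the abstract, but the standard weak-anisotropy (Thomsen) approximation of the SV phase velocity in a TI medium, which it improves upon, can be sketched for context. All parameter values below are illustrative.

```python
import math

def vsv_weak_anisotropy(theta, vp0, vs0, epsilon, delta):
    """Thomsen's weak-anisotropy approximation of SV phase velocity in
    a TI medium (shown for illustration; the thesis derives a different
    approximation valid for stronger anisotropy):
        v_SV(theta) ~ vs0 * (1 + sigma * sin^2(theta) * cos^2(theta)),
    with sigma = (vp0/vs0)**2 * (epsilon - delta).
    theta is the phase angle from the symmetry axis, in radians."""
    sigma = (vp0 / vs0) ** 2 * (epsilon - delta)
    s, c = math.sin(theta), math.cos(theta)
    return vs0 * (1.0 + sigma * s * s * c * c)

# Along the symmetry axis (0) and normal to it (pi/2) the SV velocity
# reduces to vs0; the anisotropic perturbation peaks near 45 degrees.
v0 = vsv_weak_anisotropy(0.0, 3000.0, 1500.0, 0.2, 0.1)
v45 = vsv_weak_anisotropy(math.pi / 4, 3000.0, 1500.0, 0.2, 0.1)
```

The sin²θcos²θ shape explains why SV travel times at intermediate reflection angles carry most of the information about ε − δ in the tomographic inversion.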


  2. 3-D Velocity Model of the Coachella Valley, Southern California Based on Explosive Shots from the Salton Seismic Imaging Project

    Science.gov (United States)

    Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2014-12-01

We have analyzed explosive shot data from the 2011 Salton Seismic Imaging Project (SSIP) across a 2-D seismic array and 5 profiles in the Coachella Valley to produce a 3-D P-wave velocity model that will be used in calculations of strong ground shaking. Accurate maps of seismicity and active faults rely both on detailed geological field mapping and on a suitable velocity model to accurately locate earthquakes. Adjoint tomography of an older version of the SCEC 3-D velocity model shows that crustal heterogeneities strongly influence seismic wave propagation from moderate earthquakes (Tape et al., 2010). These authors improve the crustal model and subsequently simulate the details of ground motion at periods of 2 s and longer for hundreds of ray paths. Even with improvements such as the above, the current SCEC velocity model for the Salton Trough does not provide a match of the timing or waveforms of the horizontal S-wave motions, which Wei et al. (2013) interpret as caused by inaccuracies in the shallow velocity structure. They effectively demonstrate that the inclusion of shallow basin structure improves the fit in both travel times and waveforms. Our velocity model benefits from the inclusion of the known locations and times of a subset of 126 shots detonated over a 3-week period during the SSIP. This results in an improved velocity model, particularly in the shallow crust. In addition, one of the main challenges in developing 3-D velocity models is an uneven station-source distribution. To better overcome this challenge, we also include the first-arrival times of the SSIP shots at the more widely spaced Southern California Seismic Network (SCSN) in our inversion, since the layout of the SSIP is complementary to the SCSN. References: Tape, C., et al., 2010, Seismic tomography of the Southern California crust based on spectral-element and adjoint methods: Geophysical Journal International, v. 180, no. 1, p. 433-462. Wei, S., et al., 2013, Complementary slip distributions…

  3. Modeling skin temperature to assess the effect of air velocity to mitigate heat stress among growing pigs

    DEFF Research Database (Denmark)

    Bjerg, Bjarne; Pedersen, Poul; Morsing, Svend

    2017-01-01

It is generally accepted that increased air velocity can help to mitigate heat stress in livestock housing; however, it is not fully clear how much it helps, and significant uncertainties exist when the air temperature approaches the animal body temperature. This study aims to develop a skin temperature model to generate data for determining the potential effect of air velocity to mitigate heat stress among growing pigs housed in a warm environment. The model calculates the skin temperature as a function of body temperature, air temperature and the resistances to heat transfer from the body…

  4. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. The objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2, Model Development.

  5. On reliability and maintenance modelling of ageing equipment in electric power systems

    International Nuclear Information System (INIS)

    Lindquist, Tommie

    2008-04-01

    Maintenance optimisation is essential to achieve cost-efficiency, availability and reliability of supply in electric power systems. The process of maintenance optimisation requires information about the costs of preventive and corrective maintenance, as well as the costs of failures borne by both electricity suppliers and customers. To calculate expected costs, information is needed about equipment reliability characteristics and the way in which maintenance affects equipment reliability. The aim of this Ph.D. work has been to develop equipment reliability models taking the effect of maintenance into account. The research has focussed on the interrelated areas of condition estimation, reliability modelling and maintenance modelling, which have been investigated in a number of case studies. In the area of condition estimation two methods to quantitatively estimate the condition of disconnector contacts have been developed, which utilise results from infrared thermography inspections and contact resistance measurements. The accuracy of these methods were investigated in two case studies. Reliability models have been developed and implemented for SF6 circuit-breakers, disconnector contacts and XLPE cables in three separate case studies. These models were formulated using both empirical and physical modelling approaches. To improve confidence in such models a Bayesian statistical method incorporating information from the equipment design process was also developed. This method was illustrated in a case study of SF6 circuit-breaker operating rods. Methods for quantifying the effect of maintenance on equipment condition and reliability have been investigated in case studies on disconnector contacts and SF6 circuit-breakers. The input required by these methods are condition measurements and historical failure and maintenance data, respectively. This research has demonstrated that the effect of maintenance on power system equipment may be quantified using available data

  6. Damage Model for Reliability Assessment of Solder Joints in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

…environmental factors. Reliability assessment for such types of products is conventionally performed by classical reliability techniques based on test data. Conventional reliability approaches are usually time- and resource-consuming activities; thus, in this paper we choose a physics-of-failure approach to define a damage model based on Miner's rule. Our attention is focused on crack propagation in solder joints of electrical components due to temperature loadings. Based on the proposed method it is described how to find the damage level for a given temperature loading profile. The proposed method is discussed…
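
Miner's rule, mentioned above, accumulates damage linearly across load levels: D = Σ nᵢ/Nᵢ, with failure predicted at D = 1. A minimal sketch follows; the cycle counts and cycles-to-failure values for the solder joint are invented for illustration.

```python
def miners_damage(cycle_counts, cycles_to_failure):
    """Linear damage accumulation (Miner's rule): D = sum(n_i / N_i).
    Failure is predicted when D reaches 1."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# Hypothetical temperature-cycling profile for a solder joint:
# n_i cycles applied at each load level, N_i cycles to failure there.
applied = [2.0e4, 5.0e3, 1.0e3]
capacity = [1.0e5, 4.0e4, 1.0e4]

damage = miners_damage(applied, capacity)   # 0.2 + 0.125 + 0.1 = 0.425
remaining = 1.0 - damage                    # remaining damage budget
```

In the physics-of-failure setting, each Nᵢ would come from a crack-propagation model evaluated at the corresponding temperature range, rather than from a generic S-N curve.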

  7. The model case IRS-RWE for the determination of reliability data in practical operation

    International Nuclear Information System (INIS)

    Hoemke, P.; Krause, H.

    1975-11-01

Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The first part of this paper deals with the accuracy requirements for the input data of such analyses, and the second part with the prototype collection of reliability data, 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results. (orig.)

  8. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    Science.gov (United States)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

As a result of studying the functioning algorithm of the early-detection module for excessive losses, it is proven that the module can be modeled using absorbing Markov chains. Of particular interest is the study of the probabilistic characteristics of the module's algorithm, in order to identify the relationship between the reliability indicators of individual elements, or the probabilities of occurrence of certain events, and the likelihood of transmission of reliable information. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.
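
The absorbing-chain machinery can be sketched with the standard fundamental-matrix computation, where Q holds transient-to-transient transitions and R transient-to-absorbing ones. The two transient states, the two absorbing outcomes, and every probability below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative detection-module chain: transient states s0 (idle) and
# s1 (processing); absorbing outcomes a0 (reliable report delivered)
# and a1 (information lost). Each row of [Q | R] sums to 1.
Q = np.array([[0.0, 0.9],     # transient -> transient
              [0.2, 0.0]])
R = np.array([[0.1, 0.0],     # transient -> absorbing
              [0.7, 0.1]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities
steps = N.sum(axis=1)              # expected steps before absorption

p_reliable = B[0, 0]   # P(reliable report | start in idle)
```

Sweeping the element-level probabilities in Q and R and watching p_reliable is exactly the kind of threshold-setting analysis the abstract describes.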

  9. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  10. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

This work addresses the specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields: reliability engineers build software reliability models, safety engineers focus on prevention of the potential harmful effects of systems on the environment, and software/hardware correctness engineers focus on the production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. The authors alleviate this as follows. First, on the basis of the requirements, they build a model of the system's interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are present in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black box model possesses no scenarios of kinds (b) and (c) above. Second, the authors will build a chain of several increasingly detailed models, where the first model is the black box model and the last model serves to automatically generate proven executable code. The behavior of each model will be proved to conform to the behavior of the previous one. Each model is built as a cluster of interactive concurrent objects, thus allowing both top-down and bottom-up development.

  11. Analytical and Mathematical Modeling and Optimization of Fiber Metal Laminates (FMLs subjected to low-velocity impact via combined response surface regression and zero-One programming

    Directory of Open Access Journals (Sweden)

    Faramarz Ashenai Ghasemi

    Full Text Available This paper presents analytical and mathematical modeling and optimization of the dynamic behavior of fiber metal laminates (FMLs) subjected to low-velocity impact. The deflection-to-thickness (w/h) ratio has been identified through the governing equations of the plate, which are solved using the first-order shear deformation theory together with the Fourier series method. The interaction between the impactor and the plate is modeled with a two-degrees-of-freedom spring-mass system and Choi's linearized Hertzian contact model. Thirty-one experiments were conducted on samples with different layer sequences and volume fractions of Al plies in the composite structures. A reliable fitness function in the form of a strict linear mathematical function was constructed. Using an ordinary least squares method, the response regression coefficients were estimated, and a zero-one programming technique was proposed to optimize the FML plate behavior subject to any technological or cost restrictions. The results indicated that FML plate behavior is strongly affected by the layer sequence and the volume fraction of Al plies. They also showed that embedding Al plies in the outer layers of the structure yields a significantly better response under low-velocity impact than embedding them in the middle, or in the middle and outer, layers of the structure.
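The fitting-plus-optimization pipeline described above can be sketched in miniature: an ordinary least squares fit of a linear response model followed by exhaustive zero-one enumeration. All numbers (the sample data, the per-position Al volume fractions, and the two-ply restriction) are illustrative assumptions, not values from the paper.

```python
from itertools import product

# Hypothetical sample data: Al volume fraction x and measured w/h response y.
xs = [0.1, 0.2, 0.3, 0.4, 0.5]
ys = [1.9, 1.7, 1.6, 1.4, 1.3]

# Ordinary least squares for y = b0 + b1 * x (closed form for one regressor).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx

# Zero-one programming by exhaustive enumeration: z[i] = 1 places an Al ply in
# position i (outer, middle, inner); each position contributes an assumed volume
# fraction, and at most two plies are allowed (a stand-in cost restriction).
fractions = [0.25, 0.15, 0.10]
best = min(
    (z for z in product((0, 1), repeat=3) if sum(z) <= 2),
    key=lambda z: b0 + b1 * sum(f * zi for f, zi in zip(fractions, z)),
)
```

With these invented data the fitted slope is negative (more Al lowers w/h), so the enumeration selects the two highest-fraction positions.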

  12. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present reliability analysis and prediction under mixed-mode loading using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure, in order to increase the design life, eliminate or reduce the likelihood of failures, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsional stress concentrations arising from high-cycle rotating bending and torsional stresses. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations have revealed that bending stress is much more severe than torsional stress, so the probability criterion for the bending state would be higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model generates data very similar to the field data with a minimal percentage of error and that, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading
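A minimal sketch of the two ingredients named above: a two-state Markov chain (bending vs. torsion) and the Weibull reliability and hazard curves. The transition probabilities and Weibull parameters are illustrative, not the paper's fitted values.

```python
import math

# Two-state Markov chain: state B = bending-dominated, state T = torsion-dominated.
# p = P(B -> T), q = P(T -> B); bending is assumed more severe, so the chain is
# parameterized to spend more time in the bending state (illustrative values).
p, q = 0.2, 0.6
pi_bending = q / (p + q)   # stationary probability of the bending state
pi_torsion = p / (p + q)

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta); the basis of the reliability and bathtub curves."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """h(t) = (beta/eta) * (t/eta)^(beta-1); increasing for beta > 1 (wear-out)."""
    return (beta / eta) * (t / eta) ** (beta - 1)
```

With p &lt; q the stationary distribution puts most of the probability mass on the bending state, matching the paper's observation that bending dominates.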

  13. Visualisation of the velocity field in a scaled water model for validation of numerical calculations for a powder fuelled boiler

    Energy Technology Data Exchange (ETDEWEB)

    Dumortier, Laurent [Luleaa Univ. of Technology (Sweden)

    2001-01-01

    Validation of numerical predictions of the flow field in a powder-fired industrial boiler by flow visualisation in a water model has been studied, with the bark-powder-fired boiler at AssiDomaen Kraftliner in Piteaa as a case study. A literature study covering the modelling of combusting flows by water models and different flow visualisation techniques has been carried out. The main conclusion regarding the use of water models is that only qualitative information can be expected. As long as turbulent flow is assured in the model as well as in the real furnace, the same Reynolds number is not required. Geometrical similarity is important, but modelling of burner jets requires adaptation of the jet diameters in the model; guidelines for this are available and are presented in the report. The review of visualisation techniques shows that a number of methods have been used successfully for validation of flow field predictions. The conclusion is that the Particle Image Velocimetry and Particle Tracking Velocimetry methods could be very suitable for validation purposes provided that optical access is possible. The numerical predictions include flow fields in a 1130 scale model of the AssiDomaen furnace with water flow, as well as flow and temperature fields in the actual furnace. Two burner arrangements were considered for both the model and the actual furnace, namely the present configuration with four front burners and a proposed modification in which an additional burner is positioned on a side wall below the other burners. There are many similarities between the predicted flow fields in the model and the full-scale furnace, but there are also some differences, in particular in the region above the burners and in the effects of the lower-region recirculation on the lower burner jets. The experiments with the water model have only included the arrangement with four front burners. There were problems determining the velocities in the jets, and the comparisons with predictions are

  14. Developing regionalized models of lithospheric thickness and velocity structure across Eurasia and the Middle East from jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities

    Energy Technology Data Exchange (ETDEWEB)

    Julia, J; Nyblade, A; Hansen, S; Rodgers, A; Matzel, E

    2009-07-06

    In this project, we are developing models of lithospheric structure for a wide variety of tectonic regions throughout Eurasia and the Middle East by regionalizing 1D velocity models obtained by jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities. We expect the regionalized velocity models will improve our ability to predict travel-times for local and regional phases, such as Pg, Pn, Sn and Lg, as well as travel-times for body-waves at upper mantle triplication distances in both seismic and aseismic regions of Eurasia and the Middle East. We anticipate the models will help inform and strengthen ongoing and future efforts within the NNSA labs to develop 3D velocity models for Eurasia and the Middle East, and will assist in obtaining model-based predictions where no empirical data are available and in improving locations from sparse networks using kriging. The codes needed to conduct the joint inversion of P-wave receiver functions (PRFs), S-wave receiver functions (SRFs), and dispersion velocities have already been assembled as part of ongoing research on lithospheric structure in Africa. The methodology has been tested with synthetic 'data', and case studies have been investigated with data collected at open broadband stations in South Africa. PRFs constrain the size and S-P travel-time of seismic discontinuities in the crust and uppermost mantle, SRFs constrain the size and P-S travel-time of the lithosphere-asthenosphere boundary, and dispersion velocities constrain average S-wave velocity within frequency-dependent depth-ranges. Preliminary results show that the combination yields integrated 1D velocity models local to the recording station, where the discontinuities constrained by the receiver functions are superimposed on a background velocity model constrained by the dispersion velocities. In our first year of this project we will (i) generate 1D velocity models for open broadband seismic stations

  15. On modeling human reliability in space flights - Redundancy and recovery operations

    Science.gov (United States)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution and, in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has been practically neglected in most reliability analyses, and, when included, humans have been modeled as components and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too-simple analogies from the technical domain are used in modeling human behavior. In this paper, redundancy in a man-machine system is addressed. It is shown how simplifications from the technical domain, when applied to the human components of a system, may give non-conservative estimates of system reliability.
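The standby-redundancy point can be made concrete with the classic exponential-unit formulas, extended by a takeover probability c for the human backup. This is a deliberately crude stand-in for the psychological models discussed above; all parameter values are illustrative.

```python
import math

def r_single(lam, t):
    """Reliability of one exponential unit with failure rate lam at time t."""
    return math.exp(-lam * t)

def r_standby(lam, t, c=1.0):
    """Cold-standby pair of identical exponential units:
    R(t) = exp(-lam*t) * (1 + c*lam*t), where c is the probability that the
    backup operator successfully takes over. c = 1 is perfect standby; c = 0
    means the backup contributes nothing beyond the primary unit."""
    return math.exp(-lam * t) * (1.0 + c * lam * t)
```

With c well below 1 the "redundant" human adds far less than the perfect-standby formula suggests, which is exactly the kind of non-conservative estimate the abstract warns about.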

  16. Access to the kinematic information for the velocity model determination by 3-D reflexion tomography; Acces a l'information cinematique pour la determination du modele de vitesse par tomographie de reflexion 3D

    Energy Technology Data Exchange (ETDEWEB)

    Broto, K.

    1999-04-01

    The access to a reliable image of the subsurface requires a kinematically correct velocity-depth model. Reflection tomography meets this requirement provided a complete and coherent pre-stack kinematic database is available. However, in the case of complex subsurfaces, wave propagation may lead to hardly interpretable seismic events in the time data. The SMART method is a sequential method that relies on reflection tomography for updating the velocity model and on the pre-stack depth-migrated domain for extracting kinematic information that is not readily accessible in the time domain. For determining 3-D subsurface velocity models in the case of complex structures, we propose the seriated SMART 2-D method as an alternative to the currently inconceivable SMART 3-D method. In order to extract kinematic information from a 3-D pre-stack data set, we combine detours through the 2-D pre-stack depth domain, for a number of selected lines of the studied 3-D survey, with 3-D reflection tomography for updating the velocity model. Since the travel-times from the SMART method are independent of the velocity model used for passing through the pre-stack depth-migrated domain, access to 3-D travel-times is ensured even if they have been obtained via a 2-D domain. In addition, we propose to build a kinematic guide to ensure the coherency of the seriated 2-D pre-stack depth interpretations and access to a complete 3-D pre-stack kinematic database when dealing with structures associated with 3-D wave propagation. We opt for a blocky representation of the velocity model in order to cope with complex structures. This representation leads us to define specific methodological rules for carrying out the different steps of the seriated SMART 2-D method. We also define strategies, built from the analysis of first inversion results, for an efficient application of reflection tomography. Finally, we discuss the problem of the uncertainties to be assigned to travel-times obtained

  17. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
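A toy version of the two components named above: a breakdown probability that grows with flow rate (here a logistic link, an assumed functional form) and a breakdown duration drawn from a Weibull hazard model by inverse-transform sampling. All coefficients are invented for illustration, not the paper's estimates.

```python
import math
import random

def breakdown_prob(flow, a=-10.0, b=0.005):
    """Hypothetical logistic link: probability of flow breakdown as an
    increasing function of flow rate (veh/h)."""
    return 1.0 / (1.0 + math.exp(-(a + b * flow)))

def sample_duration(beta=1.5, eta=10.0, u=None):
    """Breakdown duration (min) from a Weibull hazard model, sampled by
    inverse transform; the hazard is increasing for beta > 1."""
    u = random.random() if u is None else u
    return eta * (-math.log(1.0 - u)) ** (1.0 / beta)
```

In the simulation loop, each time step would draw breakdown onset with `breakdown_prob(flow)` and, on breakdown, hold the bottleneck at reduced capacity for `sample_duration()` minutes.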

  18. Defect evolution in cosmology and condensed matter quantitative analysis with the velocity-dependent one-scale model

    CERN Document Server

    Martins, C J A P

    2016-01-01

    This book sheds new light on topological defects in widely differing systems, using the Velocity-Dependent One-Scale Model to better understand their evolution. Topological defects (cosmic strings, monopoles, domain walls, and others) necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.

  19. Structural reliability analysis under evidence theory using the active learning kriging model

    Science.gov (United States)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
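The sign-only requirement can be illustrated without any kriging machinery: for evidence-theory bounds, each interval focal element contributes its mass to the belief of failure if the performance function is negative everywhere in the box, and to the plausibility if it is negative somewhere. The sketch below checks signs at the box corners, which suffices only because the assumed g is monotone; the function g, the boxes, and the masses are all hypothetical.

```python
from itertools import product

def g(x1, x2):
    """Hypothetical monotone performance function; failure when g < 0."""
    return x1 + x2 - 5.0

# Focal elements: interval boxes with basic probability assignments (BPA masses).
focal = [(((0.0, 2.0), (0.0, 2.0)), 0.5),
         (((2.0, 3.0), (1.0, 3.0)), 0.3),
         (((3.0, 4.0), (3.0, 4.0)), 0.2)]

bel = pl = 0.0
for box, mass in focal:
    signs = [g(*corner) < 0 for corner in product(*box)]
    if all(signs):   # box entirely inside the failure region -> belief
        bel += mass
    if any(signs):   # box intersects the failure region -> plausibility
        pl += mass
# [bel, pl] brackets the true failure probability
```

In the paper's method, the corner checks are replaced by interval Monte Carlo and a KKT-based optimization over each box, with the kriging surrogate standing in for g; only the sign of the prediction matters, so the surrogate can be much cheaper than a fully accurate one.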

  20. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters, which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data, classified as either the time type or the demand type of failure data. Failures of components and systems in a nuclear power plant are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure-rate component data; 2) failure-rate inference and reliability prediction from time type of failure data; 3) analyses of demand type of failure data; 4) a common mode failure model applicable to time type of failure data; 5) estimation of common mode failures from 'near-miss' demand type of failure data

  1. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based descriptions of component failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process are highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to drastically ease the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability databases such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► This results in a need for a reliability database able to deal with model-based descriptions of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability databases such as FIDES.

  2. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: Application of a stratified model

    Science.gov (United States)

    Lee, Kang Il

    2012-08-01

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.

  3. Study of the velocity distribution influence upon the pressure pulsations in draft tube model of hydro-turbine

    Science.gov (United States)

    Sonin, V.; Ustimenko, A.; Kuibin, P.; Litvinov, I.; Shtork, S.

    2016-11-01

    One of the mechanisms generating powerful pressure pulsations in the circuit of a hydraulic turbine is the precessing vortex core formed behind the runner at operating points with partial or forced loads, when the flow has significant residual swirl. To study the periodic pressure pulsations behind the runner, the authors use experimental modeling together with methods of computational fluid dynamics. The influence of the velocity distribution at the runner outlet on pressure pulsations was studied based on an analysis of the existing and possible velocity distributions in hydraulic turbines and a selection of distributions over an extended range. Preliminary numerical calculations showed that the velocity distribution can be modeled without reproducing the entire geometry of the circuit, using a combination of two blade cascades for the rotor and stator. Experimental verification of the numerical results was carried out on an air bench, using 3D printing to fabricate the blade cascades and the draft tube geometry of the hydraulic turbine. Measurements of the velocity field at the inlet to the draft tube cone and registration of the pressure pulsations due to the precessing vortex core allowed correlations to be built between the character of the velocity distribution and the amplitude-frequency characteristics of the pulsations.

  4. Proof of Concept: Model Based Bionic Muscle with Hyperbolic Force-Velocity Relation

    Directory of Open Access Journals (Sweden)

    D. F. B. Haeufle

    2012-01-01

    Full Text Available Recently, the hyperbolic Hill-type force-velocity relation was derived from basic physical components. It was shown that a contractile element (CE) consisting of a mechanical energy source (active element, AE), a parallel damper element (PDE), and a serial element (SE) exhibits operating points with hyperbolic force-velocity dependency. In this paper, a technical proof of this concept is presented. The AE and PDE were implemented as electric motors, the SE as a mechanical spring. The force-velocity relation of this artificial CE was determined in quick-release experiments. The CE exhibited hyperbolic force-velocity dependency. This proof of concept can be seen as a well-founded starting point for the development of Hill-type artificial muscles.
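The hyperbolic relation referenced above has the classic Hill form (F + a)(v + b) = (F0 + a)b. A one-line implementation with illustrative constants (not the paper's motor parameters):

```python
def hill_force(v, f0=1.0, a=0.25, b=0.25):
    """Concentric Hill relation (F + a)(v + b) = (f0 + a) * b, solved for F.
    f0: isometric force; a, b: Hill constants (illustrative, normalized units).
    Force falls hyperbolically from f0 at v = 0 to zero at v_max = b * f0 / a."""
    return (f0 + a) * b / (v + b) - a
```

In a quick-release experiment like the one described, each release yields one (v, F) operating point; fitting those points to this curve recovers a and b.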

  5. Impact of Assimilating Surface Velocity Observations on the Model Sea Surface Height Using the NCOM-4DVAR

    Science.gov (United States)

    2016-09-26

    Surface velocity observations are assimilated alongside the standard set of temperature observations, giving a sense of the added value from the inclusion of velocity observations.

  6. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  7. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  8. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to estimate the reliability of new products in specific stages separately, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits that separately include a “reliability improvement factor” and an “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristic of the BMUA that information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown
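A conjugate beta-binomial sketch of the two toolkits named above: within a stage, test evidence updates a Beta prior on reliability; between stages, a hypothetical "reliability improvement factor" shifts the mean and an "information fusion factor" down-weights the carried-over evidence. The factor values and test data are invented for illustration and are not the paper's formulation.

```python
def update(alpha, beta_, successes, failures):
    """Within-stage conjugate update of a Beta(alpha, beta_) reliability prior."""
    return alpha + successes, beta_ + failures

def carry_over(alpha, beta_, improvement=1.2, fusion=0.5):
    """Stage transition: scale the posterior mean by the reliability improvement
    factor (capped below 1) and shrink the equivalent sample size by the
    information fusion factor, yielding the next stage's prior."""
    mean = min(0.999, improvement * alpha / (alpha + beta_))
    n = fusion * (alpha + beta_)
    return mean * n, (1.0 - mean) * n

# Stage 1: vague Beta(1, 1) prior plus 8 successes / 2 failures in testing.
a1, b1 = update(1.0, 1.0, 8, 2)   # Beta(9, 3), posterior mean 0.75
a2, b2 = carry_over(a1, b1)       # next-stage prior with shifted mean, less weight
```

Chaining `update` and `carry_over` across design, manufacturing, and field stages gives the kind of coherent end-to-end integration the abstract describes.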

  9. Horizontal and vertical velocities derived from the IDS contribution to ITRF2014, and comparisons with geophysical models

    Science.gov (United States)

    Moreaux, G.; Lemoine, F. G.; Argus, D. F.; Santamaría-Gómez, A.; Willis, P.; Soudarin, L.; Gravelle, M.; Ferrage, P.

    2016-10-01

    In the context of the 2014 realization of the International Terrestrial Reference Frame, the International DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) Service (IDS) delivered to the IERS a set of 1140 weekly SINEX files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. From this set of weekly SINEX files, the IDS combination centre estimated a cumulative DORIS position and velocity solution to obtain the mean horizontal and vertical motion of 160 stations at 71 DORIS sites. The main objective of this study is to validate the velocities of the DORIS sites by comparison with external models or time-series. Horizontal velocities are compared with two recent global plate models (GEODVEL 2010 and NNR-MORVEL56). Prior to the comparisons, the DORIS horizontal velocities were corrected for Glacial Isostatic Adjustment using the ICE-6G (VM5a) model. For more than half of the sites, the DORIS horizontal velocities differ from the global plate models by less than 2-3 mm yr-1. For five of the sites (Arequipa, Dionysos/Gavdos, Manila and Santiago) with horizontal velocity differences with respect to these models larger than 10 mm yr-1, comparisons with GNSS estimates show the veracity of the DORIS motions. Vertical motions from the DORIS cumulative solution are compared with the vertical velocities derived from the latest GPS cumulative solution over the time span 1995.0-2014.0 from the University of La Rochelle solution at 31 co-located DORIS-GPS sites. These two sets of vertical velocities show a correlation coefficient of 0.83. Vertical differences are larger than 2 mm yr-1 at 23 per cent of the sites. At Thule, the disagreement is explained by fine-tuned DORIS discontinuities in line with the mass variations of outlet glaciers. Furthermore, the time evolution of the vertical time-series from the DORIS station in Thule shows trends similar to the GRACE equivalent water height.

  10. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    Science.gov (United States)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many other benefits. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate the costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as the two primary drivers of market growth: providing energy services (similar to an electric utility) and providing reliability service to the customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability, as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. 
Results indicate that there are, at present, co-benefits for emissions reductions when customers

  11. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbations components of a velocity model

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2017-01-01

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of the standard full-waveform inversion (FWI) by inverting for the background velocity model using the wave-path of a single scattered wavefield to an image. However, current

  12. Estimation of urinary flow velocity in models of obstructed and unobstructed urethras by decorrelation of ultrasound radiofrequency signals

    NARCIS (Netherlands)

    Arif, M.; Idzenga, T.; Mastrigt, R. van; Korte, C.L. de

    2014-01-01

    The feasibility of estimating urinary flow velocity from the decorrelation of radiofrequency (RF) signals was investigated in soft tissue-mimicking models of obstructed and unobstructed urethras. The decorrelation was studied in the near field, focal zone and far field of the ultrasound beam.

  13. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs) are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes, not on individual data. Unlike in conventional reliability, retransmission is inapplicable in a BAWSN: it would only delay data arrival, which is not acceptable for a time-critical application. Time-driven applications require high data reliability to maintain detection and response. Hence, the transmission reliability for a BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability based on the critical time. The proposed model is evaluated through simulation and then compared with experimental results obtained in our existing Active Care Loop Framework (ACLF). We further show the effect of the sink buffer on transmission reliability after a detailed study of various other co-existing parameters.
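The critical-time notion can be captured in a few lines: the loop succeeds only if every source node's data reaches the sink before the critical time, so with independent delivery delays (here exponential, an assumed distribution, not the paper's model) the transmission reliability is a product over nodes:

```python
import math

def bawsn_reliability(rates, t_critical):
    """Probability that every source node's data reaches the sink before the
    critical time, assuming independent exponential delivery delays with the
    given rates. Because the sink needs the collective data, a single late
    node fails the whole loop."""
    r = 1.0
    for lam in rates:
        r *= 1.0 - math.exp(-lam * t_critical)
    return r
```

Adding nodes, or tightening the critical time, lowers reliability multiplicatively, which is why a retransmission-based definition of reliability does not transfer to this setting.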

  14. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has remained challenging despite the huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is recognized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of the software would provide better grounds for reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world, because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of the I and C systems in NPPs introduces new difficulties and uncertainty into the reliability analysis of digital systems and components, because software failure mechanisms are still unclear.
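
As a hedged illustration of how a fault-prediction prior and random black-box testing can combine, here is a minimal Beta-Binomial update of a per-demand software failure probability. The prior parameters are invented; in the approach sketched in the abstract, a BBN-based SDLC prediction would supply the prior, which failure-free random tests then sharpen.

```python
def posterior_failure_prob(prior_a, prior_b, tests, failures):
    """Beta-Binomial (conjugate) update of a per-demand failure probability.

    Prior: Beta(prior_a, prior_b); after `tests` random demands with
    `failures` observed failures, the posterior is Beta(a, b) below.
    Returns the posterior mean failure probability.
    """
    a = prior_a + failures
    b = prior_b + tests - failures
    return a / (a + b)

p_prior = posterior_failure_prob(1, 99, 0, 0)      # prior mean: 0.01
p_post = posterior_failure_prob(1, 99, 1000, 0)    # after 1000 failure-free tests
```

Each failure-free test shrinks the estimated failure probability, which is the quantitative bridge between black-box testing evidence and a reliability claim.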

  15. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

The appearance of new service types and the convergence tendency of communication networks have endowed networks with more and more P2P (peer-to-peer) properties. These networks can be more robust and tolerant of a series of non-perfect operational states due to their non-deterministic server-client distributions. Thus, a reliability model taking into account the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of such networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating traditional binary-state network reliability parameters, but is also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiency, especially for highly reliable networks. Furthermore, the model is versatile for reliability and maintainability analyses in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.

  16. Modelling of nuclear power plant control and instrumentation elements for automatic disturbance and reliability analysis

    International Nuclear Information System (INIS)

    Hollo, E.

    1985-08-01

The present Final Report summarizes results of R/D work done within the IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the 3-year period 01.08.1982 - 31.08.1985. Chapter 1 lists the main research objectives of the project. The main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available ''failure mode and effect analysis'' methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; setup of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; definition of functional requirements for a reliability data bank (RDB) in WWER-440 units; and determination of RDB input/output data structure and data manipulation services. The methods used are a priori failure mode and effect analysis, a combined fault tree/event tree modelling technique, structural computer programming, and application of probability theory to the nuclear field.

  17. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost incurred during operation. To address this, based on the assumption that vibration directly relates to operation reliability and the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of vibration as a function of the operating point was studied, and it can be concluded that an idealized flow-versus-vibration plot has a distinct bathtub shape. There is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and, in the absence of resonance phenomena, vibration also scales approximately with the square of the rotation speed. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedules produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
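
The bathtub-shaped flow-vibration relationship and the speed-squared scaling described above can be sketched as follows. The quadratic curve, its constants, and the linear normalization of vibration into a reliability score are assumptions chosen for illustration, not the paper's fitted model.

```python
def vibration(q_frac_bep, speed_frac, v_min=1.0, k=20.0):
    """Illustrative bathtub-shaped vibration curve: minimal in the 80-100%
    BEP sweet spot, rising quadratically outside it, and scaling with the
    square of the rotation speed (all constants assumed)."""
    lo, hi = 0.8, 1.0
    if q_frac_bep < lo:
        base = v_min + k * (lo - q_frac_bep) ** 2
    elif q_frac_bep > hi:
        base = v_min + k * (q_frac_bep - hi) ** 2
    else:
        base = v_min
    return base * speed_frac ** 2

def operation_reliability(q_frac_bep, speed_frac, v_max=10.0):
    """Normalize the vibration level into a [0, 1] reliability score
    (the normalization is this sketch's assumption)."""
    v = min(vibration(q_frac_bep, speed_frac), v_max)
    return 1.0 - v / v_max

r_sweet = operation_reliability(0.9, 1.0)   # inside the sweet spot
r_off = operation_reliability(0.5, 1.0)     # far off-BEP: lower reliability
```

A scheduler can then add this reliability term to the traditional energy-cost objective, penalizing operating points far from BEP.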

  18. Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model

    Science.gov (United States)

    Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.

    2018-05-01

The sealing performance of an aircraft electromechanical system has a great influence on flight safety, and the reliability of its typical seal structures has been analyzed by researchers. In this paper, we take a reciprocating seal structure as the research object to study structural reliability. Based on the finite element numerical simulation method, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is built, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model obtained by the EFF learning mechanism is used to describe the failure probability of the seal ring, so as to evaluate the reliability of the sealing structure. This article proposes a new idea of numerical evaluation for the reliability analysis of sealing structures, and also provides a theoretical basis for their optimal design.

  19. [Reliability study in the measurement of the cusp inclination angle of a chairside digital model].

    Science.gov (United States)

    Xinggang, Liu; Xiaoxian, Chen

    2018-02-01

    This study aims to evaluate the reliability of the software Picpick in the measurement of the cusp inclination angle of a digital model. Twenty-one trimmed models were used as experimental objects. The chairside digital impression was then used for the acquisition of 3D digital models, and the software Picpick was employed for the measurement of the cusp inclination of these models. The measurements were repeated three times, and the results were compared with a gold standard, which was a manually measured experimental model cusp angle. The intraclass correlation coefficient (ICC) was calculated. The paired t test value of the two measurement methods was 0.91. The ICCs between the two measurement methods and three repeated measurements were greater than 0.9. The digital model achieved a smaller coefficient of variation (9.9%). The software Picpick is reliable in measuring the cusp inclination of a digital model.
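
The intraclass correlation coefficient used above to compare repeated measurements can be computed with a short one-way random-effects formula. The cusp-angle numbers below are invented for illustration; they are not the study's data.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) from repeated measurements.

    `ratings` is a list of subjects, each a list of k repeated measures.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB/MSW are the
    between- and within-subject mean squares.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three repeated cusp-angle measurements (degrees) per model -- made-up data:
icc = icc_oneway([[30.1, 30.3, 30.2],
                  [25.0, 24.8, 25.1],
                  [33.4, 33.5, 33.2]])
```

Values above 0.9, as reported in the study, indicate that the between-model variation dwarfs the repeat-measurement noise.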

  20. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

Full Text Available This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference in SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
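
A drastically simplified sketch of MCMC-based reliability inference for a plain log-normal failure-time model (no scale mixture, no hierarchical MaxEnt prior; known log-scale sigma and a flat prior on the log-scale mean are assumptions of this illustration only):

```python
import math
import random

def reliability_posterior(times, t_star, sigma=0.5, n_iter=4000, seed=7):
    """Random-walk Metropolis on the log-scale mean mu of a log-normal
    failure-time model; returns the posterior mean of R(t*) = P(T > t*)."""
    rng = random.Random(seed)
    logs = [math.log(t) for t in times]

    def log_post(mu):  # flat prior: log-posterior = log-likelihood + const
        return -sum((x - mu) ** 2 for x in logs) / (2.0 * sigma ** 2)

    mu, samples = sum(logs) / len(logs), []
    for i in range(n_iter):
        prop = mu + rng.gauss(0.0, 0.2)
        if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(mu))):
            mu = prop
        if i >= n_iter // 2:          # keep only post-burn-in samples
            samples.append(mu)

    def survival(m):                  # 1 - lognormal CDF at t_star
        z = (math.log(t_star) - m) / sigma
        return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

    return sum(survival(m) for m in samples) / len(samples)

# Invented failure times; posterior reliability at t* = 80:
r = reliability_posterior([95, 110, 130, 150, 170], t_star=80)
```

Replacing the log-normal likelihood with a scale mixture (e.g., log-t) is what buys the robustness to heavy-tailed observations emphasized in the paper.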

  1. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

Many highly reliable products have a complex structure, with their reliability being evaluated through two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics is always positive and strictly increasing. In such a case, the gamma process is usually adopted as the degradation process, owing to its independent, non-negative increments. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. An inferential method for the corresponding model parameters is then developed. Finally, to illustrate the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
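
A Monte Carlo sketch of the setting described: two monotonically increasing gamma degradation paths, with dependence induced here by a shared gamma component. The shared-component construction and all numbers are assumptions of this illustration; the paper instead approximates the reliability function with a bivariate Birnbaum-Saunders distribution.

```python
import random

def reliability_two_pc(t, shape_rate=(0.5, 1.0), thresholds=(8.0, 8.0),
                       shared_shape=0.2, trials=5000, seed=3):
    """P(both degradation paths remain below their thresholds at time t).

    Each path is a gamma process (shape grows linearly in t, fixed scale);
    a shared gamma increment makes the two paths positively dependent.
    """
    rng = random.Random(seed)
    a, rate = shape_rate
    ok = 0
    for _ in range(trials):
        shared = rng.gammavariate(shared_shape * t, 1.0 / rate)
        x1 = rng.gammavariate(a * t, 1.0 / rate) + shared
        x2 = rng.gammavariate(a * t, 1.0 / rate) + shared
        if x1 < thresholds[0] and x2 < thresholds[1]:
            ok += 1
    return ok / trials

# Reliability is monotonically decreasing in time for increasing degradation:
r_early = reliability_two_pc(5.0)
r_late = reliability_two_pc(15.0)
```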

  2. Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.

    Science.gov (United States)

    Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti

    2017-06-28

    The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
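
The Bland-Altman analysis used in this study reduces to a bias (mean difference between methods) and its 95% limits of agreement. A minimal version, on invented inter-arch measurements rather than the study's data:

```python
import statistics

def bland_altman(method_a, method_b, z=1.96):
    """Bland-Altman bias and limits of agreement between two methods
    measuring the same items. Returns (bias, (lower, upper))."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - z * sd, bias + z * sd)

# Hypothetical manual (caliper) vs. digital (3D model) values in mm:
manual = [6.10, 5.95, 6.40, 6.02, 5.88, 6.25]
digital = [6.15, 6.00, 6.38, 6.10, 5.95, 6.30]
bias, (lo, hi) = bland_altman(manual, digital)
```

Good agreement means a bias near zero and limits of agreement narrow enough to be clinically irrelevant.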

  3. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.

  4. Modeling a Propagating Sawtooth Flare Ribbon Structure as a Tearing Mode in the Presence of Velocity Shear

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Jacob; Longcope, Dana [Department of Physics, Montana State University, Bozeman, MT 59717 (United States)

    2017-09-20

On 2014 April 18 (SOL2014-04-18T13:03), an M-class flare was observed by IRIS. The associated flare ribbon contained a quasi-periodic sawtooth pattern that was observed to propagate along the ribbon, perpendicular to the IRIS spectral slit, with a phase velocity of ∼15 km s⁻¹. This motion resulted in periodicities in both intensity and Doppler velocity along the slit. These periodicities were reported by Brannon et al. to be approximately ±0.″5 in position and ±20 km s⁻¹ in velocity and were measured to be ∼180° out of phase with one another. This quasi-periodic behavior has been attributed by others to bursty or patchy reconnection and slipping occurring during three-dimensional magnetic reconnection. Though able to account for periodicities in both intensity and Doppler velocity, these suggestions do not explicitly account for the phase velocity of the entire sawtooth structure or the relative phasing of the oscillations. Here we propose that the observations can be explained by a tearing mode (TM) instability occurring at a current sheet across which there is also a velocity shear. Using a linear model of this instability, we reproduce the relative phase of the oscillations, as well as the phase velocity of the sawtooth structure. We suggest a geometry and local plasma parameters for the April 18 flare that would support our hypothesis. Under this proposal, the combined spectral and spatial IRIS observations of this flare may provide the most compelling evidence to date of a TM occurring in the solar magnetic field.

  5. 2.5D S-wave velocity model of the TESZ area in northern Poland from receiver function analysis

    Science.gov (United States)

    Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek

    2016-04-01

Receiver function (RF) locally provides the signature of sharp seismic discontinuities and information about the shear wave (S-wave) velocity distribution beneath the seismic station. The data recorded by "13 BB Star" broadband seismic stations (Grad et al., 2015) and by a few PASSEQ broadband seismic stations (Wilde-Piórko et al., 2008) are analysed to investigate the crustal and upper mantle structure in the Trans-European Suture Zone (TESZ) in northern Poland. The TESZ is one of the most prominent suture zones in Europe, separating the young Palaeozoic platform from the much older Precambrian East European craton. Compilation of over thirty deep seismic refraction and wide-angle reflection profiles, vertical seismic profiling in over one hundred thousand boreholes, and magnetic, gravity, magnetotelluric and thermal methods allowed for the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Grad et al. 2016). On the other hand, receiver function methods give an opportunity to create an S-wave velocity model. A modified ray-tracing method (Langston, 1977) is used to calculate the response of a structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. The 3D P-wave velocity model is interpolated to a 2.5D P-wave velocity model beneath each seismic station, and synthetic back-azimuthal sections of receiver functions are calculated for different Vp/Vs ratios. Densities are calculated with the combined formulas of Berteussen (1977) and Gardner et al. (1974). Next, the synthetic back-azimuthal sections of RF are compared with observed back-azimuthal sections of RF for the "13 BB Star" and PASSEQ seismic stations to find the best 2.5D S-wave models down to 60 km depth. National Science Centre Poland provided financial support for this work by NCN grant DEC-2011/02/A/ST10/00284.

  6. Hypocenter relocation along the Sunda arc in Indonesia, using a 3D seismic velocity model

    Science.gov (United States)

    Nugraha, Andri Dian; Shiddiqi, Hasbi A.; Widiyantoro, Sri; Thurber, Clifford H.; Pesicek, Jeremy D.; Zhang, Haijiang; Wiyono, Samsul H.; Ramadhan, Mohamad; Wandano,; Irsyam, Mahsyur

    2018-01-01

The tectonics of the Sunda arc region is characterized by the junction of the Eurasian and Indo‐Australian tectonic plates, causing complex dynamics to take place. High‐seismicity rates in the Indonesian region occur due to the interaction between these tectonic plates. The availability of a denser network of seismometers after the earthquakes of Mw 9.1 in 2004 and Mw 8.6 in 2005 supports various seismic studies, one of which regards the precise relocation of the hypocenters. In this study, hypocenter relocation was performed using a teleseismic double‐difference (DD) relocation method (teletomoDD) combining arrival times of P and S waves from stations at local, regional, and teleseismic distances. The catalog data were taken from the Agency of Meteorology, Climatology, and Geophysics (BMKG) of Indonesia, and the International Seismological Centre (ISC) for the time period of April 2009 to May 2015. The 3D seismic‐wave velocity model with a grid size of 1°×1° was used in the travel‐time calculations. Relocation results show a reduction in travel‐time residuals compared with the initial locations. The relocation results better illuminate subducted slabs and active faults in the region such as the Mentawai back thrust and the outer rise in the subduction zone south of Java. Focal mechanisms from the Global Centroid Moment Tensor catalog are analyzed in conjunction with the relocation results, and our synthesis of the results provides further insight into seismogenesis in the region.

  7. Estimation of S-wave Velocity Structures by Using Microtremor Array Measurements for Subsurface Modeling in Jakarta

    Directory of Open Access Journals (Sweden)

    Mohamad Ridwan

    2014-12-01

Full Text Available Jakarta is located on a thick sedimentary layer that potentially has a very high seismic wave amplification. However, the available information concerning the subsurface model and bedrock depth is insufficient for a seismic hazard analysis. In this study, a microtremor array method was applied to estimate the geometry and S-wave velocity of the sedimentary layer. The spatial autocorrelation (SPAC) method was applied to estimate the dispersion curve, while the S-wave velocity was estimated using a genetic algorithm approach. The analysis of the 1D and 2D S-wave velocity profiles shows that, along a north-south line, the sedimentary layer thickens towards the north. It has a positive correlation with a geological cross section derived from a borehole down to a depth of about 300 m. The SPT data from the BMKG site were used to verify the 1D S-wave velocity profile; they show good agreement. The microtremor analysis reached the engineering bedrock at depths ranging from 359 to 608 m, as depicted by a cross section in the north-south direction. The site class was also estimated at each site, based on the average S-wave velocity down to 30 m depth. The sites UI to ISTN belong to class D (medium soil), while BMKG and ANCL belong to class E (soft soil).
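
The site classification step at the end reduces to the time-averaged S-wave velocity over the top 30 m (Vs30), i.e., 30 m divided by the vertical travel time through the layer stack. The layer numbers and the NEHRP-style class boundaries (180 and 360 m/s) below are assumptions for illustration, not values from the study.

```python
def vs30(layers):
    """Time-averaged shear-wave velocity over the top 30 m.

    `layers` is a list of (thickness_m, vs_m_per_s) pairs from the surface
    down; Vs30 = 30 / sum(h_i / v_i) over the portion within 30 m.
    """
    depth, travel_time = 0.0, 0.0
    for h, v in layers:
        use = min(h, 30.0 - depth)
        travel_time += use / v
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def site_class(v):
    """Simplified NEHRP-style boundaries (assumed): E below 180 m/s,
    D between 180 and 360 m/s."""
    return "E (soft soil)" if v < 180 else \
           "D (medium soil)" if v < 360 else "C or stiffer"

v = vs30([(5, 120), (10, 200), (20, 350)])   # hypothetical layer stack
cls = site_class(v)
```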

  8. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-02-01

The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important pieces of equipment in mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems like the EDG in a nuclear power plant. This method, however, has limitations in modeling the dynamic features of safety systems exactly. We have, hence, developed a Markov model to represent the stochastic process of dynamic systems whose states change as time moves on. The Markov model enables us to develop a dynamic reliability model of the EDG. This model can represent all possible states of the EDG, in contrast to the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems. To assess the regulation policy for test intervals, we performed two simulations based on the generic data and the plant-specific data of YGN 3, respectively, using the developed model. We also estimate the effects of various repair rates and of the fraction of starting failures due to demand shock on the reliability of the EDG. Finally, the aging effect is analyzed. (author). 23 refs., 19 figs., 9 tabs.
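
The simplest possible Markov reduction of an EDG, two states (available/failed) with constant failure and repair rates, already shows the time-dependent behavior a static fault tree cannot capture. The rates below are invented, and the model described in the abstract has many more states (standby, test, demand, etc.); this is only a sketch of the formalism.

```python
import math

def edg_availability(t, fail_rate=1e-3, repair_rate=0.1):
    """P(available at time t) for a two-state Markov process
    'available' <-> 'failed', starting from the available state.

    Closed form: A(t) = A_inf + (1 - A_inf) * exp(-(lambda + mu) * t),
    with steady-state availability A_inf = mu / (lambda + mu).
    """
    s = fail_rate + repair_rate
    a_inf = repair_rate / s
    return a_inf + (1.0 - a_inf) * math.exp(-s * t)

a0 = edg_availability(0.0)       # starts available
a_ss = edg_availability(1e6)     # converges to mu / (lambda + mu)
```

Test-interval policies enter such models as extra states and transitions, which is exactly where the Markov formulation outgrows the fault tree.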

  9. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems, and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution to modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.
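
The discrete-time idea, component failure times binned into intervals and system logic evaluated on the binned distributions, can be sketched for a two-component parallel (AND) system. The per-step failure probabilities are invented, and the components are independent here; a full BN would also encode dependencies between them.

```python
def step_fail_dist(p_step, n_steps):
    """P(component fails during interval i), i = 0..n_steps-1, assuming a
    constant per-interval failure probability -- the discretization step
    of a discrete-time BN node."""
    dist, surv = [], 1.0
    for _ in range(n_steps):
        dist.append(surv * p_step)
        surv *= 1.0 - p_step
    return dist

def and_gate_unreliability(p1, p2, n_steps):
    """P(a two-component parallel/AND system has failed by mission end):
    both independent components must fail within the n_steps intervals."""
    f1 = sum(step_fail_dist(p1, n_steps))
    f2 = sum(step_fail_dist(p2, n_steps))
    return f1 * f2

q = and_gate_unreliability(0.01, 0.02, 100)
```

Finer discretization trades computation for accuracy, which is the central knob of the discrete-time formalism.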

  10. Simultaneous travel time tomography for updating both velocity and reflector geometry in triangular/tetrahedral cell model

    Science.gov (United States)

    Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu

    2018-05-01

To conduct forward modeling and simultaneous inversion in a complex geological model, including irregular topography (or an irregular reflector or velocity anomaly), in this paper we combined our previous multiphase arrival tracking method in triangular (2D) or tetrahedral (3D) cell models (referred to as the triangular shortest-path method, TSPM) with a linearized inversion solver (a damped minimum-norm, constrained least-squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous travel time inversion method for updating both velocity and reflector geometry using multiphase arrival times. In the triangular/tetrahedral cells, we deduced the partial derivative of the velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the computational accuracy can be tuned to high precision in forward modeling, and that irregular velocity anomalies and reflector geometry can be accurately captured in the simultaneous inversion, because the triangular/tetrahedral cells can easily stitch the irregular topography or subsurface interfaces.
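
The shortest-path idea behind travel-time tracking methods such as TSPM is that first arrivals are shortest paths through a graph of nodes on cell boundaries, with edge weights equal to segment length divided by local velocity. A plain Dijkstra sketch on a toy three-node graph (geometry and velocities invented):

```python
import heapq

def first_arrival(nodes, edges, source):
    """Dijkstra first-arrival travel times from `source`.

    `edges` is a list of (node_a, node_b, length, velocity); the edge
    weight is the traversal time length / velocity.
    """
    graph = {n: [] for n in nodes}
    for a, b, length, velocity in edges:
        w = length / velocity
        graph[a].append((b, w))
        graph[b].append((a, w))
    times = {n: float("inf") for n in nodes}
    times[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > times[u]:
            continue                      # stale queue entry
        for v, w in graph[u]:
            if t + w < times[v]:
                times[v] = t + w
                heapq.heappush(pq, (t + w, v))
    return times

# A slow direct leg vs. a faster two-leg path through node 'B':
t = first_arrival(["S", "B", "R"],
                  [("S", "R", 2.0, 1.0),   # 2 km at 1 km/s -> 2.0 s
                   ("S", "B", 1.5, 3.0),   # 0.5 s
                   ("B", "R", 1.5, 3.0)],  # 0.5 s
                  "S")
```

Denser node placement on the cell boundaries is what lets the graph solution converge to the true first-arrival time.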

  11. Effects of Intraluminal Thrombus on Patient-Specific Abdominal Aortic Aneurysm Hemodynamics via Stereoscopic Particle Image Velocity and Computational Fluid Dynamics Modeling

    Science.gov (United States)

    Chen, Chia-Yuan; Antón, Raúl; Hung, Ming-yang; Menon, Prahlad; Finol, Ender A.; Pekkan, Kerem

    2014-01-01

    The pathology of the human abdominal aortic aneurysm (AAA) and its relationship to the later complication of intraluminal thrombus (ILT) formation remains unclear. The hemodynamics in the diseased abdominal aorta are hypothesized to be a key contributor to the formation and growth of ILT. The objective of this investigation is to establish a reliable 3D flow visualization method with corresponding validation tests with high confidence in order to provide insight into the basic hemodynamic features for a better understanding of hemodynamics in AAA pathology and seek potential treatment for AAA diseases. A stereoscopic particle image velocity (PIV) experiment was conducted using transparent patient-specific experimental AAA models (with and without ILT) at three axial planes. Results show that before ILT formation, a 3D vortex was generated in the AAA phantom. This geometry-related vortex was not observed after the formation of ILT, indicating its possible role in the subsequent appearance of ILT in this patient. It may indicate that a longer residence time of recirculated blood flow in the aortic lumen due to this vortex caused sufficient shear-induced platelet activation to develop ILT and maintain uniform flow conditions. Additionally, two computational fluid dynamics (CFD) modeling codes (Fluent and an in-house cardiovascular CFD code) were compared with the two-dimensional, three-component velocity stereoscopic PIV data. Results showed that correlation coefficients of the out-of-plane velocity data between PIV and both CFD methods are greater than 0.85, demonstrating good quantitative agreement. The stereoscopic PIV study can be utilized as test case templates for ongoing efforts in cardiovascular CFD solver development. Likewise, it is envisaged that the patient-specific data may provide a benchmark for further studying hemodynamics of actual AAA, ILT, and their convolution effects under physiological conditions for clinical applications. PMID:24316984

  12. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of the estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the better the prediction, sometimes a "slight" worsening of the estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about the increase of uncertainty with the extension of the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.

  13. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is being able to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. Parameters, namely shape and scale for the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks.

  14. An overview of erosion corrosion models and reliability assessment for corrosion defects in piping system

    International Nuclear Information System (INIS)

    Srividya, A.; Suresh, H.N.; Verma, A.K.; Gopika, V.; Santosh

    2006-01-01

Piping systems are part of the passive structural elements in power plants. The analysis of piping systems and their quantification in terms of failure probability is of utmost importance. Piping systems may fail due to various degradation mechanisms, such as thermal fatigue, erosion-corrosion, stress corrosion cracking, and vibration fatigue. Examination of previous results shows that erosion-corrosion is the more prevalent mechanism, and the wall thinning it causes is a time-dependent phenomenon. This paper is intended to consolidate the work done by various investigators on erosion-corrosion in estimating the erosion-corrosion rate and predicting reliability. A comparison of various erosion-corrosion models is made. Reliability predictions based on the remaining strength of pipelines corroded by wall thinning are also attempted. Variables in the limit state functions are modelled using normal distributions, and reliability assessment is carried out using some of the existing failure pressure models. A steady-state corrosion rate is assumed to estimate the corrosion defect, and the First Order Reliability Method (FORM) is used to find the probability of failure associated with corrosion defects over time, using the software for Component Reliability evaluation (COMREL). (author)
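
For a flavor of how a corrosion-defect limit state is evaluated, here is a crude Monte Carlo stand-in for the FORM/COMREL computation described above: g = burst pressure minus operating pressure, with a simplified B31G-style capacity 2·σ_flow·t/D·(1 − d/t) that omits the Folias bulging factor. All distributions and numbers are assumptions of this sketch.

```python
import random

def failure_probability(p_op=10.0, trials=50000, seed=11):
    """Monte Carlo estimate of P(burst pressure < operating pressure)
    for a single corrosion defect. Units: m and MPa (illustrative)."""
    rng = random.Random(seed)
    D, t = 0.3, 0.01                      # pipe diameter, wall thickness [m]
    fails = 0
    for _ in range(trials):
        s_flow = rng.gauss(450.0, 30.0)   # flow stress [MPa], assumed normal
        d = rng.gauss(0.004, 0.001)       # defect depth [m], assumed normal
        d = min(max(d, 0.0), t)           # physically bounded in [0, t]
        p_burst = 2.0 * s_flow * t / D * (1.0 - d / t)
        if p_burst < p_op:
            fails += 1
    return fails / trials

pf = failure_probability()
```

FORM replaces this sampling loop with a search for the most probable failure point, which is far cheaper when the failure probability is small.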

  15. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
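
Before building surrogates, it helps to see the brute-force baseline they are meant to accelerate: direct Monte Carlo on a series system whose failure is the worst case over several limit states. The two limit-state functions below are generic stand-ins, not the component responses from the paper; a Gaussian process surrogate would replace each expensive g_i with a cheap approximation that only needs to be accurate near g_i = 0.

```python
import random

random.seed(1)

# Two hypothetical component limit states; the system is in series,
# so it fails as soon as either mode is violated: g_sys = min(g1, g2).
def g1(x1, x2):
    return 3.0 - x1 - x2

def g2(x1, x2):
    return 3.0 - x1 + x2

def system_pf(n=200_000):
    """Direct Monte Carlo estimate of the system failure probability."""
    fails = 0
    for _ in range(n):
        x1, x2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
        if min(g1(x1, x2), g2(x1, x2)) < 0.0:
            fails += 1
    return fails / n

print(system_pf())   # analytic value is about 0.034 for these two modes
```

Each sample here costs two limit-state evaluations; when each evaluation is an expensive simulation, the surrogate-based approach pays for itself quickly.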

  16. Intra-observer reliability and agreement of manual and digital orthodontic model analysis.

    Science.gov (United States)

    Koretsi, Vasiliki; Tingelhoff, Linda; Proff, Peter; Kirschneck, Christian

    2018-01-23

    Digital orthodontic model analysis is gaining acceptance in orthodontics, but its reliability is dependent on the digitalisation hardware and software used. We thus investigated intra-observer reliability and agreement/conformity of a particular digital model analysis work-flow in relation to traditional manual plaster model analysis. Forty-eight plaster casts of the upper/lower dentition were collected. Virtual models were obtained with orthoX®scan (Dentaurum) and analysed with ivoris®analyze3D (Computer konkret). Manual model analyses were done with a dial caliper (0.1 mm). Common parameters were measured on each plaster cast and its virtual counterpart five times each by an experienced observer. We assessed intra-observer reliability within method (ICC), agreement/conformity between methods (Bland-Altman analyses and Lin's concordance correlation), and changing bias (regression analyses). Intra-observer reliability was substantial within each method (ICC ≥ 0.7), except for five manual outcomes (12.8 per cent). Bias between methods was statistically significant, but less than 0.5 mm for 87.2 per cent of the outcomes. In general, larger tooth sizes were measured digitally. The total-difference parameters for the maxilla and mandible had wide limits of agreement (-3.25/6.15 and -2.31/4.57 mm), but bias between methods was mostly smaller than intra-observer variation within each method, with substantial conformity of manual and digital measurements in general. No changing bias was detected. Although both work-flows were reliable, the investigated digital work-flow proved to be more reliable and yielded on average larger tooth sizes. Averaged differences between methods were within 0.5 mm for directly measured outcomes, but wide ranges are expected for some computed space parameters due to cumulative error.
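
The agreement statistics used here are easy to reproduce. The sketch below computes the Bland-Altman bias and 95% limits of agreement for paired manual/digital measurements; the tooth-width numbers are invented for illustration, not study data.

```python
import statistics as st

def bland_altman(manual, digital):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, per Bland and Altman."""
    diffs = [d - m for m, d in zip(manual, digital)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired tooth-width measurements, mm.
manual  = [8.1, 7.9, 10.2, 6.5, 9.0, 7.2, 8.8, 10.0]
digital = [8.3, 8.0, 10.4, 6.6, 9.3, 7.3, 9.0, 10.3]
bias, lo, hi = bland_altman(manual, digital)
print(bias, lo, hi)   # positive bias: digital reads larger, as in the study
```

A positive bias with limits of agreement well inside the clinical tolerance is the pattern the study reports for directly measured outcomes.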

  17. Comparison of Large Eddy Simulations and κ-ε Modelling of Fluid Velocity and Tracer Concentration in Impinging Jet Mixers

    Directory of Open Access Journals (Sweden)

    Wojtas Krzysztof

    2015-06-01

    Simulations of turbulent mixing in two types of jet mixers were carried out using two CFD models, large eddy simulation and the κ-ε model. Modelling approaches were compared with experimental data obtained by the application of particle image velocimetry and planar laser-induced fluorescence methods. Measured local microstructures of fluid velocity and inert tracer concentration can be used for direct validation of numerical simulations. The presented results show that for the higher tested values of jet Reynolds number both models are in good agreement with the experiments. Differences between the models were observed for lower Reynolds numbers, when the effects of large-scale inhomogeneity are important.

  18. Model of the seismic velocity distribution in the upper lithosphere of the Vrancea seismogenic zone and within the adjacent areas

    International Nuclear Information System (INIS)

    Raileanu, Victor; Bala, Andrei

    2002-01-01

    The task of this project is to perform a detailed seismic velocity model of the P waves in the crust and upper mantle crossed by the VRANCEA 2001 seismic line and to interpret it in structural terms. The velocity model aims to contribute to a new geodynamical model of the evolution of the Eastern Carpathians and to a better understanding of the causes of the Vrancea earthquakes. It is performed in cooperation with the University of Karlsruhe, Germany, and the University of Bucharest. The project will be completed in 5 working stages. Vrancea 2001 is the name of the seismic line recorded with about 780 seismic instruments deployed over more than 600 km from the eastern part of Romania (east of Tulcea) through the Vrancea area to Aiud and south of Oradea. 10 big shots with charges from 300 kg to 1500 kg of dynamite were detonated along the seismic line. Field data quality is good to very good, and the data provide information down to upper mantle levels. Processing of the data was performed in the first stage of the present project and consisted of merging all individual field records into seismograms for each shotpoint. Almost 800 individual records for each of the 10 shots were merged into 10 seismograms with about 800 channels each. A seismogram of shotpoint S (25 km NE of Ramnicu Sarat) is given; it shows the high energy generated by shotpoint S. The Pn wave can be traced to the western end of the seismic line, about 25 km from the source. In the second stage of the project, an interpretation of the seismic data is achieved for the first 5 seismograms from the eastern half of the seismic line, from Tulcea to Ramnicu Sarat, using a forward modeling procedure. 5 one-dimensional (1-D) velocity-depth function models are obtained; P wave velocity-depth function models for shotpoints O to T are presented. Velocity-depth information extends down to 40 km for shot R and 80 km for shot S. Of note are the unusually high velocities at shallow levels for the Dobrogea area (O and P shots) and the

  19. A simulation model for reliability evaluation of Space Station power systems

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kumar, Mudit; Wagner, H.

    1988-01-01

    A detailed simulation model for the hybrid Space Station power system is presented which allows photovoltaic and solar dynamic power sources to be mixed in varying proportions. The model considers the dependence of reliability and storage characteristics during the sun and eclipse periods, and makes it possible to model the charging and discharging of the energy storage modules in a relatively accurate manner on a continuous basis.
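
The charge/discharge bookkeeping described above can be sketched as a simple orbit-by-orbit loop: surplus array power recharges storage during the sun period, and the load runs entirely from storage during eclipse. All numbers below (orbit split, load, array power, capacity) are illustrative placeholders, not actual Space Station figures.

```python
def simulate_orbits(n_orbits, sun_min=57, eclipse_min=35,
                    load_kw=75.0, array_kw=160.0, capacity_kwh=100.0):
    """Continuous charge/discharge bookkeeping over sun/eclipse periods.
    Returns the minimum state of charge seen, the margin that matters
    for reliability. Parameters are hypothetical."""
    soc = capacity_kwh            # state of charge, kWh (start full)
    min_soc = soc
    for _ in range(n_orbits):
        # Sun: surplus array power recharges storage (capped at capacity).
        surplus = (array_kw - load_kw) * sun_min / 60.0
        soc = min(capacity_kwh, soc + surplus)
        # Eclipse: the load is carried entirely by storage.
        soc -= load_kw * eclipse_min / 60.0
        min_soc = min(min_soc, soc)
    return min_soc

print(simulate_orbits(16))   # roughly one day of ~90-minute orbits
```

In a reliability simulation, component failures would randomly derate `array_kw` or `capacity_kwh`, and a loss-of-load event would be recorded whenever the state of charge hits zero during eclipse.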

  20. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

    In this paper, the stochastic automata network (SAN) formalism is applied to model the reliability of power system substations. The proposed strategy allows reducing the size of the state space of the Markov chain model and simplifying system specification. Two case studies of standard substation configurations are considered in detail. SAN models with different assumptions were created. The SAN approach is compared with exact reliability calculation using a minimal path set method. Modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment. In this case, the implementation of a Markov chain model by using the SAN method is a relatively easy task. - Highlights: • We present a methodology to apply the stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results of systems with independent automata and functional transition rates are similar. • The conditions when total independence of automata can be assumed are addressed.
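
The minimal path set benchmark used for the exact calculation is straightforward to sketch: with independent components, inclusion-exclusion over the path sets gives the exact system reliability. The two-incoming-lines-one-busbar topology and the component reliabilities below are hypothetical, not one of the paper's case studies.

```python
from itertools import combinations

def system_reliability(path_sets, p):
    """Exact reliability from minimal path sets by inclusion-exclusion.
    path_sets: list of sets of component names; p: dict of component
    reliabilities (components assumed independent)."""
    total = 0.0
    for k in range(1, len(path_sets) + 1):
        for combo in combinations(path_sets, k):
            union = set().union(*combo)
            prob = 1.0
            for c in union:
                prob *= p[c]
            total += (-1) ** (k + 1) * prob
    return total

# Hypothetical substation sketch: redundant incoming lines A and B
# feeding a single busbar C.
paths = [{"A", "C"}, {"B", "C"}]
p = {"A": 0.95, "B": 0.95, "C": 0.99}
print(system_reliability(paths, p))
```

Inclusion-exclusion is exponential in the number of path sets, which is precisely why state-space methods such as SAN become attractive for larger configurations.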

  1. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  2. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)
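
Once an MFM has been mapped to a fault tree, quantification reduces to evaluating AND/OR gates over basic-event probabilities. A minimal sketch with a made-up two-train fragment, not the paper's safety injection model:

```python
def AND(*probs):
    """AND gate: the gate fails only if all inputs fail (independent)."""
    out = 1.0
    for q in probs:
        out *= q
    return out

def OR(*probs):
    """OR gate: the gate fails if any input fails (independent events)."""
    out = 1.0
    for q in probs:
        out *= (1.0 - q)
    return 1.0 - out

# Hypothetical fragment: the top event occurs if the common supply
# valve fails OR both redundant pump trains fail.
q_valve, q_pump_a, q_pump_b = 1e-3, 5e-2, 5e-2
top = OR(q_valve, AND(q_pump_a, q_pump_b))
print(top)
```

The same gate structure falls out of the MFM mapping: flow functions in series become OR gates on failure, redundant flow paths become AND gates.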

  3. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies that did not consider source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
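
The reliability screen is simple to state in code: for each map cell, compare the spread of ΔCFS over the candidate source models with its mean. The sample values below are invented; in the study each list would hold 2500 values, one per posterior source model.

```python
import statistics as st

def reliable_mean(samples, cv_max=0.5):
    """Mean stress change with the CV-based reliability screen: a cell is
    deemed reliable only if |mean| is at least twice the standard
    deviation, i.e. CV = sd/|mean| <= 0.5."""
    mean = st.mean(samples)
    sd = st.pstdev(samples)
    cv = sd / abs(mean) if mean != 0 else float("inf")
    return mean, cv <= cv_max

# Two hypothetical grid cells sampled over candidate source models (bar).
lobe_centre = [0.30, 0.35, 0.28, 0.33, 0.31]    # consistent sign
near_fault  = [0.20, -0.15, 0.05, -0.22, 0.10]  # sign flips model to model
print(reliable_mean(lobe_centre), reliable_mean(near_fault))
```

The second cell mimics the near-fault behaviour described above: small changes in fault strike flip the sign of ΔCFS, the mean collapses toward zero, and the CV screen rejects the value.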

  4. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  5. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate the efficiency and correctness. The proposed method is proved to be more efficient due to its use of a small number of sample points, according to the comparison results.
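
A bare-bones particle swarm optimiser of the kind used to tune Kriging parameters can be sketched in a few lines. Here it minimises a stand-in 1-D objective rather than a Kriging likelihood, and the hyper-parameters (inertia `w`, acceleration constants `c1`, `c2`) are conventional defaults, not the paper's settings.

```python
import random

random.seed(7)

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimiser: each particle is pulled
    toward its personal best and the swarm's global best."""
    lo, hi = bounds
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # keep in bounds
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

# Stand-in objective with its minimum of 1.0 at t = 2.
x, fx = pso(lambda t: (t - 2.0) ** 2 + 1.0, bounds=(-5.0, 5.0))
print(x, fx)
```

In the hybrid algorithm, `f` would be the Kriging model-fit criterion over the correlation parameters, and the returned optimum would parameterise the surrogate used for importance sampling.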

  6. Regional travel-time residual studies and station correction from 1-D velocity models for some stations around Peninsular Malaysia and Singapore

    Science.gov (United States)

    Osagie, Abel U.; Nawawi, Mohd.; Khalil, Amin Esmail; Abdullah, Khiruddin

    2017-06-01

    can compensate for heterogeneous velocity structure near individual stations. The computed average travel-time residuals can reduce errors attributable to station correction in the inversion of hypocentral parameters around the Peninsula. Due to the heterogeneity occasioned by the numerous fault systems, a better 1-D velocity model for the Peninsula is desired for more reliable hypocentral inversion and other seismic investigations.
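
The station-correction step described above amounts to averaging the travel-time residuals recorded at each station; the average is then applied as an additive correction in subsequent hypocentral inversions. A minimal sketch (station codes and residual values are made up):

```python
import statistics as st
from collections import defaultdict

def station_corrections(residuals):
    """Average travel-time residual per station, used as an additive
    station correction in hypocentre inversion."""
    by_station = defaultdict(list)
    for station, res in residuals:
        by_station[station].append(res)
    return {s: st.mean(r) for s, r in by_station.items()}

# Hypothetical (station, observed-minus-predicted time) pairs, seconds.
obs = [("KUM", 0.8), ("KUM", 1.0), ("KUM", 0.9),
       ("IPM", -0.4), ("IPM", -0.6)]
print(station_corrections(obs))
```

A station underlain by slow crust accumulates positive residuals and hence a positive correction; subtracting it removes the bias that a single 1-D velocity model cannot capture.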

  7. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system.

    Science.gov (United States)

    Janson, Natalia B; Marsden, Christopher J

    2017-12-05

    It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type.

  8. A one-dimensional model to describe flow localization in viscoplastic slender bars subjected to super critical impact velocities

    Science.gov (United States)

    Vaz-Romero, A.; Rodríguez-Martínez, J. A.

    2018-01-01

    In this paper we investigate flow localization in viscoplastic slender bars subjected to dynamic tension. We explore loading rates above the critical impact velocity: the wave initiated at the impacted end by the applied velocity is the trigger for the localization of plastic deformation. The problem has been addressed using two kinds of numerical simulations: (1) one-dimensional finite difference calculations and (2) axisymmetric finite element computations. The latter calculations have been used to validate the capacity of the finite difference model to describe plastic flow localization at high impact velocities. The finite difference model, notable for its simplicity, allows insights to be obtained into the role played by the strain rate and temperature sensitivities of the material in the process of dynamic flow localization. Specifically, we have shown that viscosity can stabilize the material behavior to the point of preventing the appearance of the critical impact velocity. This is a key outcome of our investigation which, to the best of the authors' knowledge, has not been previously reported in the literature.

  9. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
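
The Fussell-Vesely check used for validation amounts to asking: of the top-event probability (in its rare-event cut-set approximation), what fraction flows through minimal cut sets containing the component? A sketch with made-up cut sets and basic-event probabilities:

```python
def fussell_vesely(cut_sets, q, component):
    """Fussell-Vesely importance: fraction of the top-event probability
    (rare-event approximation: sum of cut-set probabilities) carried by
    cut sets containing the given component."""
    def prob(cs):
        out = 1.0
        for c in cs:
            out *= q[c]
        return out
    total = sum(prob(cs) for cs in cut_sets)
    with_c = sum(prob(cs) for cs in cut_sets if component in cs)
    return with_c / total

# Hypothetical minimal cut sets and basic-event probabilities.
cuts = [{"PUMP_A", "PUMP_B"}, {"VALVE"}, {"PUMP_A", "BREAKER"}]
q = {"PUMP_A": 0.02, "PUMP_B": 0.03, "VALVE": 0.001, "BREAKER": 0.05}
print(fussell_vesely(cuts, q, "PUMP_A"))
```

Comparing this hand calculation against the database algorithm's output for a few components is exactly the kind of validation the abstract describes.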

  10. A Review of the Progress with Statistical Models of Passive Component Reliability

    Directory of Open Access Journals (Sweden)

    Bengt O.Y. Lydell

    2017-03-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  11. A review of the progress with statistical models of passive component reliability

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, Bengt O. Y. [Sigma-Phase Inc., Vail (United States)

    2017-03-15

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.
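
A core step in such statistical piping reliability frameworks is turning a failure count and an exposure time into a frequency with quantified uncertainty. A minimal conjugate Bayesian sketch follows, using an improper Jeffreys-type gamma prior; the count and exposure are invented, not values from the database discussed above.

```python
def posterior_rate(n_failures, exposure_years, a=0.5, b=0.0):
    """Conjugate gamma-Poisson update for a failure frequency.
    Prior: gamma(a, b) with a=0.5, b->0 (improper Jeffreys-type prior).
    Data: n_failures observed over exposure_years of service.
    Returns the posterior mean and standard deviation (per year)."""
    shape = a + n_failures
    rate = b + exposure_years
    mean = shape / rate
    sd = shape ** 0.5 / rate
    return mean, sd

# Hypothetical service data: 3 leaks in 12,500 pipe-segment-years.
mean, sd = posterior_rate(3, 12_500)
print(mean, sd)
```

The same update naturally accommodates sparse data (zero failures still yield a usable, uncertainty-laden estimate), which is why conjugate Bayesian estimation is a common choice in service-experience-based frameworks of this kind.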

  12. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  13. Reliability model for helicopter main gearbox lubrication system using influence diagrams

    International Nuclear Information System (INIS)

    Rashid, H.S.J.; Place, C.S.; Mba, D.; Keong, R.L.C.; Healey, A.; Kleine-Beek, W.; Romano, M.

    2015-01-01

    The loss of oil from a helicopter main gearbox (MGB) leads to increased friction between components, a rise in component surface temperatures, and subsequent mechanical failure of gearbox components. A number of significant helicopter accidents have been caused by such loss of lubrication. This paper presents a model to assess the reliability of helicopter MGB lubricating systems. Safety risk modeling was conducted for MGB oil system related accidents in order to analyse key failure mechanisms and the contributory factors. Thus, the dominant failure modes for lubrication systems and key contributing components were identified. The Influence Diagram (ID) approach was then employed to investigate reliability issues of the MGB lubrication systems at the level of primary causal factors, thus systematically investigating a complex context of events, conditions, and influences that are direct triggers of helicopter MGB lubrication system failures. The interrelationships between MGB lubrication system failure types were thus identified, and the influence of each of these factors on the overall MGB lubrication system reliability was assessed. This paper highlights parts of the HELMGOP project, sponsored by the European Aviation Safety Agency to improve helicopter main gearbox reliability. - Highlights: • We investigated methods to optimize helicopter MGB oil system run-dry capability. • Used Influence Diagrams to assess design and maintenance factors of the MGB oil system. • Factors influencing overall MGB lubrication system reliability were identified. • This globally influences current and future helicopter MGB designs.

  14. A review of the progress with statistical models of passive component reliability

    International Nuclear Information System (INIS)

    Lydell, Bengt O. Y.

    2017-01-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models

  15. A novel model and behavior analysis for a swarm of multi-agent systems with finite velocity

    International Nuclear Information System (INIS)

    Wang Liang-Shun; Wu Zhi-Hai

    2014-01-01

    Inspired by the fact that in most existing swarm models of multi-agent systems the velocity of an agent can be infinite, which is not in accordance with real applications, we propose a novel swarm model of multi-agent systems where the velocity of an agent is finite. The Lyapunov function method and LaSalle's invariance principle are employed to show that with the proposed model all of the agents eventually enter a bounded region around the swarm center and finally tend to a stationary state. Numerical simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
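
The qualitative behaviour established by the Lyapunov argument is easy to reproduce numerically. The toy simulation below is not the paper's model: it simply attracts each agent to the swarm centre while saturating its speed (the finite-velocity ingredient) and checks that the agents collapse into a bounded cluster.

```python
import math
import random

random.seed(3)

def step(positions, gain=1.0, v_max=0.5, dt=0.1):
    """One Euler step: each agent moves toward the swarm centre, but its
    speed is clamped at v_max, so velocities stay finite."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    new = []
    for x, y in positions:
        vx, vy = gain * (cx - x), gain * (cy - y)
        speed = math.hypot(vx, vy)
        if speed > v_max:                     # clamp to finite velocity
            vx, vy = vx * v_max / speed, vy * v_max / speed
        new.append((x + vx * dt, y + vy * dt))
    return new

pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
for _ in range(500):
    pts = step(pts)
spread = max(math.hypot(x - pts[0][0], y - pts[0][1]) for x, y in pts)
print(spread)   # agents end up in a tight cluster around the centre
```

Far from the centre an agent closes in at the constant capped speed; once inside the unclamped region the dynamics become linear and the spread contracts geometrically, mirroring the bounded-region-then-stationary-state behaviour proved in the paper.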

  16. Shear velocity model for the westernmost Mediterranean from ambient noise and ballistic finite-frequency Rayleigh wave tomography

    Science.gov (United States)

    Palomeras, I.; Villasenor, A.; Thurner, S.; Levander, A.; Gallart, J.; Harnafi, M.

    2014-12-01

    The westernmost Mediterranean comprises the Iberian Peninsula and Morocco, separated by the Alboran Sea and the Algerian Basin. From north to south this region consists of the Pyrenees, resulting from Iberia-Eurasia collision; the Iberian Massif, which has been undeformed since the end of the Paleozoic; the Central System and Iberian Chain, regions with intracontinental Oligocene-Miocene deformation; the Gibraltar Arc (Betics, Rif and Alboran terranes), resulting from post-Oligocene subduction roll-back; and the Atlas Mountains. We analyzed data from recent broad-band array deployments and permanent stations in the area (the IberArray and Siberia arrays, the PICASSO array, the University of Munster array, and the Spanish, Portuguese and Moroccan National Networks) to characterize its lithospheric structure. The combined array of 350 stations has an average interstation spacing of ~60 km. We calculated Rayleigh wave phase velocities from ambient noise (periods 4 to 40 s) and teleseismic events (periods 20 to 167 s), and inverted the phase velocities to obtain a shear velocity model for the lithosphere to ~200 km depth. Our results correlate well with the surface expression of the main structural units, with higher crustal velocity for the Iberian Massif than for Alpine Iberia and the Atlas Mountains. The Gibraltar Arc has lower crustal shear velocities than the regional average at all crustal depths. It also shows an arc-shaped anomaly with high upper mantle velocities (>4.6 km/s) at shallow depths (volcanic fields in Iberia and Morocco, indicative of high temperatures at relatively shallow depths, and suggesting that the lithosphere has been removed beneath these areas.

  17. Tomography of core-mantle boundary and lowermost mantle coupled by geodynamics: joint models of shear and compressional velocity

    Directory of Open Access Journals (Sweden)

    Gaia Soldati

    2015-03-01

    We conduct joint tomographic inversions of P and S travel time observations to obtain models of δv_P and δv_S in the entire mantle. We adopt a recently published method which takes into account the geodynamic coupling between mantle heterogeneity and core-mantle boundary (CMB) topography by viscous flow, where the sensitivity of the seismic travel times to the CMB is accounted for implicitly in the inversion (i.e. the CMB topography is not explicitly inverted for). The seismic maps of the Earth's mantle and CMB topography that we derive can explain the inverted seismic data while being physically consistent with each other. The approach involved scaling P-wave velocity (more sensitive to the CMB) to density anomalies, on the assumption that mantle heterogeneity has a purely thermal origin, so that velocity and density heterogeneity are proportional to one another. On the other hand, it has sometimes been suggested that S-wave velocity might be more directly sensitive to temperature, while P heterogeneity is more strongly influenced by chemical composition. In the present study, we use only S-, and not P-velocity, to estimate density heterogeneity through linear scaling, and hence the sensitivity of core-reflected P phases to mantle structure. Regardless of whether density is more closely related to P- or S-velocity, we think it is worthwhile to explore both scaling approaches in our efforts to explain seismic data. The similarity of the results presented in this study to those obtained by scaling P-velocity to density suggests that compositional anomaly has a limited impact on viscous flow in the deep mantle.
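The linear scaling of velocity heterogeneity to density heterogeneity mentioned in the record above can be written as d ln ρ = R · d ln v_S. A minimal sketch follows; the scaling factor `R` is purely illustrative and is not a value taken from the study.

```python
# d ln(rho) = R * d ln(vS): linear scaling of a relative S-velocity anomaly
# to a relative density anomaly. The factor R below is illustrative only.
R = 0.3

def density_anomaly(dlnvs, r=R):
    """Relative density anomaly implied by a relative S-velocity anomaly."""
    return r * dlnvs

print(density_anomaly(0.01))  # a 1% fast vS anomaly maps to a ~0.3% density excess
```

Under this scaling, fast (presumably cold) regions are denser, which is what drives the viscous-flow coupling between mantle heterogeneity and CMB topography.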

  18. Predicting the peak growth velocity in the individual child: validation of a new growth model.

    NARCIS (Netherlands)

    Busscher, I.; Kingma, I.; de Bruin, R.; Wapstra, F.H.; Verkerke, G.J.; Veldhuizen, A.G.

    2012-01-01

    Predicting the peak growth velocity in an individual patient with adolescent idiopathic scoliosis is essential for determining the prognosis of the disorder and the timing of the (surgical) treatment. Until the present time, no accurate method has been found to predict the timing and magnitude of the peak growth velocity.

  19. Predicting the peak growth velocity in the individual child : validation of a new growth model

    NARCIS (Netherlands)

    Busscher, Iris; Kingma, Idsart; de Bruin, Rob; Wapstra, Frits Hein; Verkerke, Gijsvertus J.; Veldhuizen, Albert G.

    Predicting the peak growth velocity in an individual patient with adolescent idiopathic scoliosis is essential for determining the prognosis of the disorder and the timing of the (surgical) treatment. Until the present time, no accurate method has been found to predict the timing and magnitude of the peak growth velocity.

  20. Predicting the peak growth velocity in the individual child: validation of a new growth model

    NARCIS (Netherlands)

    Busscher, I.; Kingma, I.; Bruin, R.; Wapstra, F.H.; Verkerke, Gijsbertus Jacob; Veldhuizen, A.G.

    2012-01-01

    Predicting the peak growth velocity in an individual patient with adolescent idiopathic scoliosis is essential for determining the prognosis of the disorder and the timing of the (surgical) treatment. Until the present time, no accurate method has been found to predict the timing and magnitude of the peak growth velocity.

  1. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Energy Technology Data Exchange (ETDEWEB)

    Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)

    2012-10-15

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  2. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    International Nuclear Information System (INIS)

    Pegg, E.C.; Mellon, S.J.; Salmon, G.; Alvand, A.; Pandit, H.; Murray, D.W.; Gill, H.S.

    2012-01-01

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  3. Reliability Analysis of an Extended Shock Model and Its Optimization Application in a Production Line

    Directory of Open Access Journals (Sweden)

    Renbin Liu

    2014-01-01

    some important reliability indices are derived, such as availability, failure frequency, mean vacation period, mean renewal cycle, mean startup period, and replacement frequency. Finally, a production line controlled by two cold-standby computers is modeled to present numerical illustration and its optimal part-time job policy at a maximum profit.

  4. Role of frameworks, models, data, and judgment in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hannaman, G W

    1986-05-01

    Many advancements in the methods for treating human interactions in PRA studies have occurred in the last decade. These advancements appear to increase the capability of PRAs to extend beyond just the assessment of the human's importance to safety. However, variations in the application of these advanced models, data, and judgments in recent PRAs make quantitative comparisons among studies extremely difficult. This uncertainty in the analysis diminishes the usefulness of the PRA study for upgrading procedures, enhancing training, simulator design, technical specification guidance, and for aid in designing the man-machine interface. Hence, there is a need for a framework to guide analysts in incorporating human interactions into the PRA systems analyses so that future users of a PRA study will have a clear understanding of the approaches, models, data, and assumptions which were employed in the initial study. This paper describes the role of the systematic human action reliability procedure (SHARP) in providing a road map through the complex terrain of human reliability that promises to improve the reproducibility of such analyses in the areas of selecting the models, data, representations, and assumptions. Also described is the role that a human cognitive reliability model can have in collecting data from simulators and helping analysts assign human reliability parameters in a PRA study. Use of these systematic approaches to perform or upgrade existing PRAs promises to make PRA studies more useful as risk management tools.

  5. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    Science.gov (United States)

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…

  6. A reliability model for interlayer dielectric cracking during fast thermal cycling

    NARCIS (Netherlands)

    Nguyen, Van Hieu; Salm, Cora; Krabbenborg, B.H.; Krabbenborg, B.H.; Bisschop, J.; Mouthaan, A.J.; Kuper, F.G.; Ray, Gary W.; Smy, Tom; Ohta, Tomohiro; Tsujimura, Manabu

    2003-01-01

    Interlayer dielectric (ILD) cracking can result in short circuits of multilevel interconnects. This paper presents a reliability model for ILD cracking induced by fast thermal cycling (FTC) stress. FTC tests have been performed under different temperature ranges (∆T) and minimum temperatures (Tmin).

  7. Electromechanical wave imaging and electromechanical wave velocity estimation in a large animal model of myocardial infarction

    Science.gov (United States)

    Costet, Alexandre; Melki, Lea; Sayseng, Vincent; Hamid, Nadira; Nakanishi, Koki; Wan, Elaine; Hahn, Rebecca; Homma, Shunichi; Konofagou, Elisa

    2017-12-01

    Echocardiography is often used in the clinic for detection and characterization of myocardial infarction. Electromechanical wave imaging (EWI) is a non-invasive ultrasound-based imaging technique based on time-domain incremental motion and strain estimation that can evaluate changes in contractility in the heart. In this study, electromechanical activation is assessed in the infarcted heart to determine whether EWI is capable of detecting and monitoring infarct formation. Additionally, methods for estimating electromechanical wave (EW) velocity are presented, and changes in the EW propagation velocity after infarct formation are studied. Five (n  =  5) adult mongrels were used in this study. Successful infarct formation was achieved in three animals by ligation of the left anterior descending (LAD) coronary artery. Dogs were survived for a few days after LAD ligation and monitored daily with EWI. At the end of the survival period, dogs were sacrificed and TTC (tetrazolium chloride) staining confirmed the formation and location of the infarct. In all three dogs, as early as day 1, EWI was capable of detecting late-activated and non-activated regions, which grew over the next few days. In final-day images, the extent of these regions corresponded to the location of infarct as confirmed by staining. EW velocities in border zones of infarct were significantly lower post-infarct formation when compared to baseline, whereas velocities in healthy tissues were not. These results indicate that EWI and EW velocity might help with the detection of infarcts and their border zones, which may be useful for characterizing arrhythmogenic substrate.

  8. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic as, at infinite testing time, test effort will be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model to describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model which will describe both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation more than existing TEFs, and can be used for software release time determination as well.
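An NHPP mean value function driven by a test effort function, as described in the record above, can be sketched as follows. The log-power form W(t) = α·ln(1+t)^β and all parameter values are assumptions for illustration, not the paper's fitted model.

```python
import math

def log_power_effort(t, alpha, beta):
    """Assumed log-power test effort function: W(t) = alpha * ln(1 + t) ** beta."""
    return alpha * math.log1p(t) ** beta

def nhpp_mean_failures(t, a, b, alpha, beta):
    """NHPP mean value function with effort W(t): m(t) = a * (1 - exp(-b * W(t)))."""
    return a * (1.0 - math.exp(-b * log_power_effort(t, alpha, beta)))

# Expected cumulative failures grow with testing effort and saturate at a,
# the (hypothetical) total fault content of the software.
for t in (1.0, 10.0, 100.0):
    print(t, nhpp_mean_failures(t, a=100.0, b=0.5, alpha=5.0, beta=1.2))
```

Because ln(1+t) is unbounded, the effort W(t) grows without limit as testing time increases, which is the "infinite test effort" property the paper argues for.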

  9. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → Semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that usage of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
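The Monte Carlo validation step mentioned in the record above can be illustrated with a minimal sketch: estimating the failure probability of a non-repairable two-unit parallel block with Weibull failure times and checking it against the closed-form answer F(t)². The layout and all parameters are hypothetical and far simpler than the NCCW system itself.

```python
import math
import random

def weibull_sample(rng, shape, scale):
    """Inverse-transform sample of a Weibull(shape, scale) failure time."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def parallel_failure_prob(t, shape, scale, n_trials=200_000, seed=1):
    """Monte Carlo probability that BOTH units of a parallel pair fail by time t."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        if weibull_sample(rng, shape, scale) < t and weibull_sample(rng, shape, scale) < t:
            failures += 1
    return failures / n_trials

shape, scale, t = 1.5, 1000.0, 500.0
analytic = (1.0 - math.exp(-((t / scale) ** shape))) ** 2  # F(t)^2 for two independent units
print(parallel_failure_prob(t, shape, scale), analytic)
```

The simulated estimate should agree with the analytic value to within sampling error, which is exactly the kind of cross-check the paper uses to validate the semi-Markov model result.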

  10. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  11. Reliability of a new biokinetic model of zirconium in internal dosimetry: part I, parameter uncertainty analysis.

    Science.gov (United States)

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a
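Parameter uncertainty propagation of the kind performed in the study above can be sketched as follows. The single-exponential retention function, the lognormal spread, and all numbers are illustrative stand-ins, not the ICRP or HMGU Zr model.

```python
import math
import random

def plasma_retention(t, k):
    """Illustrative single-exponential retention R(t) = exp(-k t); NOT the ICRP/HMGU model."""
    return math.exp(-k * t)

def prediction_interval(t, k_median=0.1, gsd=1.3, n=50_000, seed=2):
    """Propagate lognormal uncertainty in the clearance rate k into a 95% band on R(t)."""
    rng = random.Random(seed)
    sigma = math.log(gsd)  # lognormal shape parameter from the geometric standard deviation
    preds = sorted(plasma_retention(t, k_median * math.exp(rng.gauss(0.0, sigma)))
                   for _ in range(n))
    return preds[int(0.025 * n)], preds[int(0.975 * n)]

lo, hi = prediction_interval(t=10.0)
print(lo, hi)  # confidence band around the central prediction exp(-k_median * t)
```

Repeating this for each model parameter, and comparing the width of the resulting bands, is the basic mechanism by which one model (here, hypothetically, a narrower band) can be judged less uncertain than another.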

  12. A Combined Reliability Model of VSC-HVDC Connected Offshore Wind Farms Considering Wind Speed Correlation

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    and WTGs outage. The wind speed correlation between different WFs is included in the two-dimensional multistate WF model by using an improved k-means clustering method. Then, the entire system with two WFs and a threeterminal VSC-HVDC system is modeled as a multi-state generation unit. The proposed model...... is applied to the Roy Billinton test system (RBTS) for adequacy studies. Both the probability and frequency indices are calculated. The effectiveness and accuracy of the combined model is validated by comparing results with the sequential Monte Carlo simulation (MCS) method. The effects of the outage of VSC-HVDC...... system and wind speed correlation on the system reliability were analyzed. Sensitivity analyses were conducted to investigate the impact of repair time of the offshore VSC-HVDC system on system reliability....

  13. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    International Nuclear Information System (INIS)

    Knezevic, J.; Odoom, E.R.

    2001-01-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.
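In the crisp Lambda-Tau method, an OR gate sums the failure rates of its inputs; the fuzzy variant replaces each crisp rate with a fuzzy number. A minimal sketch using triangular fuzzy numbers follows (addition of triangular numbers is exact componentwise); the component names and numerical rates are hypothetical.

```python
def fuzzy_add(a, b):
    """Add two triangular fuzzy numbers given as (lower, modal, upper) triples."""
    return tuple(x + y for x, y in zip(a, b))

def or_gate_failure_rate(rates):
    """Lambda-Tau OR gate: the top-event failure rate is the sum of the input rates."""
    total = (0.0, 0.0, 0.0)
    for r in rates:
        total = fuzzy_add(total, r)
    return total

# Hypothetical fuzzy failure rates (per hour) for two components feeding an OR gate
lam_pump = (0.9e-3, 1.0e-3, 1.1e-3)
lam_valve = (1.8e-3, 2.0e-3, 2.2e-3)
print(or_gate_failure_rate([lam_pump, lam_valve]))
```

The resulting triple carries the experts' imprecision through to the system-level index instead of collapsing it to a single crisp value. (AND gates and repair times τ require interval arithmetic on α-cuts, which this sketch omits.)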

  14. Model reliability and software quality assurance in simulation of nuclear fuel waste management systems

    International Nuclear Information System (INIS)

    Oeren, T.I.; Elzas, M.S.; Sheng, G.; Wageningen Agricultural Univ., Netherlands; McMaster Univ., Hamilton, Ontario)

    1985-01-01

    As is the case with all scientific simulation studies, computerized simulation of nuclear fuel waste management systems can introduce and hide various types of errors. Frameworks to clarify issues of model reliability and software quality assurance are offered. Potential problems with reference to the main areas of concern for reliability and quality are discussed; e.g., experimental issues, decomposition, scope, fidelity, verification, requirements, testing, correctness, robustness are treated with reference to the experience gained in the past. A list comprising over 80 most common computerization errors is provided. Software tools and techniques used to detect and to correct computerization errors are discussed

  15. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  16. Phd study of reliability and validity: One step closer to a standardized music therapy assessment model

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl

    The paper will present a phd study concerning reliability and validity of music therapy assessment model “Assessment of Parenting Competences” (APC) in the area of families with emotionally neglected children. This study had a multiple strategy design with a philosophical base of critical realism...... and pragmatism. The fixed design for this study was a between and within groups design in testing the APCs reliability and validity. The two different groups were parents with neglected children and parents with non-neglected children. The flexible design had a multiple case study strategy specifically...

  17. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table Modeling Method (BTTM) in the analysis of qualitative data. It is widely used in certain fields, especially electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system's failure rates for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied.
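The truth-table evaluation described in the record above can be sketched directly: enumerate every combination of component states and sum the probabilities of the combinations in which the system functions. The three-component structure below is a hypothetical stand-in for the paper's power supply circuit.

```python
from itertools import product

def system_reliability(reliabilities, structure):
    """Enumerate all component-state combinations and sum P(state) over working states."""
    total = 0.0
    for state in product((True, False), repeat=len(reliabilities)):
        p = 1.0
        for works, r in zip(state, reliabilities):
            p *= r if works else 1.0 - r
        if structure(state):
            total += p
    return total

def structure(state):
    """Hypothetical layout: two parallel regulators in series with one filter."""
    reg1, reg2, filt = state
    return (reg1 or reg2) and filt

print(system_reliability([0.9, 0.9, 0.95], structure))  # equals (1 - 0.1**2) * 0.95
```

Exhaustive enumeration scales as 2^n, so it is practical only for circuits with a modest number of major components, which matches the case-study scale described above.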

  18. Lithospheric structure of the westernmost Mediterranean inferred from finite frequency Rayleigh wave tomography S-velocity model.

    Science.gov (United States)

    Palomeras, Imma; Villasenor, Antonio; Thurner, Sally; Levander, Alan; Gallart, Josep; Harnafi, Mimoun

    2016-04-01

    The Iberian Peninsula and Morocco, separated by the Alboran Sea and the Algerian Basin, constitute the westernmost Mediterranean. From north to south this region consists of the Pyrenees, the result of interaction between the Iberian and Eurasian plates; the Iberian Massif, a region that has been undeformed since the end of the Paleozoic; the Central System and Iberian Chain, regions with intracontinental Oligocene-Miocene deformation; the Gibraltar Arc (Betics, Rif and Alboran terranes) and the Atlas Mountains, resulting from post-Oligocene subduction roll-back and Eurasian-Nubian plate convergence. In this study we analyze data from recent broad-band array deployments and permanent stations on the Iberian Peninsula and in Morocco (Spanish IberArray and Siberia arrays, the US PICASSO array, the University of Munster array, and the Spanish, Portuguese, and Moroccan National Networks) to characterize its lithospheric structure. The combined array of 350 stations has an average interstation spacing of ~60 km, comparable to USArray. We have calculated Rayleigh-wave phase velocities from ambient noise for short periods (4 s to 40 s) and teleseismic events for longer periods (20 s to 167 s). We inverted the phase velocities to obtain a shear velocity model for the lithosphere to ~200 km depth. The model shows differences in the crust for the different areas, where the highest shear velocities are mapped in the Iberian Massif crust. The crustal thickness is highly variable, ranging from ~25 km beneath the eastern Betics to ~55 km beneath the Gibraltar Strait, Internal Betics and Internal Rif. Beneath this region a unique arc-shaped anomaly with high upper-mantle velocities (>4.6 km/s) is mapped at shallow depths, with low velocities beneath the volcanic fields in Iberia and Morocco, indicative of high temperatures at relatively shallow depths and suggesting that the lithosphere has been removed beneath these areas.

  19. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Science.gov (United States)

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  20. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    Science.gov (United States)

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS