Sample records for reliable consistent results

  1. Reliability and Consistency of Surface Contamination Measurements

    Rouppert, F.; Rivoallan, A.; Largeron, C.


    Surface contamination evaluation is a difficult problem, since it is hard to isolate the radiation emitted by the surface, especially in a highly irradiating environment. In that case the only possibility is to evaluate smearable (removable) contamination, since ex-situ counting is possible. Unfortunately, in our experience at CEA, these values are not consistent and thus not relevant. In this study we show, using in-situ Fourier Transform Infra-Red spectrometry on contaminated metal samples, that fixed contamination appears to be chemisorbed while removable contamination appears to be physisorbed. The distribution between fixed and removable contamination appears to be variable. Chemical equilibria and reversible ion-exchange mechanisms are involved, and they are closely linked to environmental conditions such as humidity and temperature. Measurements of smearable contamination therefore only indicate the state of these equilibria between fixed and removable contamination at the time, and under the environmental conditions, at which the measurements were made.

  2. Internal consistency reliability is a poor predictor of responsiveness

    Heels-Ansdell Diane


    Background: Whether responsiveness represents a measurement property of health-related quality of life (HRQL) instruments that is distinct from reliability and validity is a matter of debate. We addressed the claims of a recent study which suggested that investigators could rely on internal consistency to reflect instrument responsiveness. Methods: 516 patients with chronic obstructive pulmonary disease or knee injury participating in four longitudinal studies completed generic and disease-specific HRQL questionnaires before and after an intervention that impacted on HRQL. We used Pearson correlation coefficients and linear regression to assess the relationship between internal consistency reliability (expressed as Cronbach's alpha), instrument type (generic or disease-specific) and responsiveness (expressed as the standardised response mean, SRM). Results: Mean Cronbach's alpha was 0.83 (SD 0.08) and mean SRM was 0.59 (SD 0.33). The correlation between Cronbach's alpha and SRMs was 0.10 (95% CI -0.12 to 0.32) across all studies. Cronbach's alpha alone did not explain variability in SRMs (p = 0.59, r2 = 0.01), whereas the type of instrument was a strong predictor of the SRM (p = 0.012, r2 = 0.37). In multivariable models applied to individual studies, Cronbach's alpha consistently failed to predict SRMs (regression coefficients between -0.45 and 1.58, p-values between 0.15 and 0.98), whereas the type of instrument did predict SRMs (regression coefficients between -0.25 and -0.59, p-values between …). Conclusion: Investigators must look to data other than internal consistency reliability to select a responsive instrument for use as an outcome in clinical trials.
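    The two statistics compared in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code, and the toy inputs used with it are invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def standardised_response_mean(pre, post):
    """SRM: mean change score divided by the SD of the change scores."""
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return change.mean() / change.std(ddof=1)
```

    A responsive instrument yields a large SRM; the study's point is precisely that a high alpha does not imply a high SRM.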

  3. Improving risk assessment by defining consistent and reliable system scenarios

    B. Mazzorana


    During the entire procedure of risk assessment for hydrologic hazards, the selection of consistent and reliable scenarios, constructed in a strictly systematic way, is fundamental for the quality and reproducibility of the results. However, subjective assumptions on relevant impact variables, such as sediment transport intensity on the system loading side and weak-point response mechanisms, repeatedly cause biases in the results and consequently affect transparency and required quality standards. Furthermore, the system response of mitigation measures to extreme event loadings represents another key variable in hazard assessment, as does integral risk management including intervention planning. Formative Scenario Analysis, as a supplement to conventional risk assessment methods, is a technique for constructing well-defined sets of assumptions to gain insight into a specific case and the potential system behaviour. The applicability of the Formative Scenario Analysis technique is presented through two case studies, carried out (1) to analyse sediment transport dynamics in a torrent section equipped with control measures, and (2) to identify hazards induced by woody debris transport at hydraulic weak points. It is argued that during scenario planning in general, and with respect to integral risk management in particular, Formative Scenario Analysis allows for the development of reliable and reproducible scenarios in order to design more specifically an application framework for the sustainable assessment of natural hazards impact. The overall aim is to optimise the hazard mapping and zoning procedure by methodologically integrating quantitative and qualitative knowledge.

  4. UFMG Sydenham's chorea rating scale (USCRS): reliability and consistency.

    Teixeira, Antônio Lúcio; Maia, Débora P; Cardoso, Francisco


    Despite the renewed interest in Sydenham's chorea (SC) in recent years, there were no valid and reliable scales to rate the several signs and symptoms of patients with SC and related disorders. The Universidade Federal de Minas Gerais (UFMG) Sydenham's Chorea Rating Scale (USCRS) was designed to provide a detailed quantitative description of the performance of activities of daily living, behavioral abnormalities, and motor function of subjects with SC. The scale comprises 27 items and each one is scored from 0 (no symptom or sign) to 4 (severe disability or finding). Data from 84 subjects, aged 4.9 to 33.6 years, support the interrater reliability and internal consistency of the scale. The USCRS is a promising instrument for rating the clinical features of SC as well as their functional impact in children and adults.

  5. "A Comparison of Consensus, Consistency, and Measurement Approaches to Estimating Interrater Reliability"

    Steven E. Stemler


    This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, the different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: (1) consensus estimates, (2) consistency estimates, and (3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results.
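    The distinction between consensus and consistency estimates can be made concrete with a toy two-rater example (a sketch, not code from the article): two raters who differ by a constant offset show zero exact agreement yet perfect consistency.

```python
import numpy as np

def consensus_agreement(rater1, rater2):
    """Consensus estimate: proportion of items rated identically."""
    return float((np.asarray(rater1) == np.asarray(rater2)).mean())

def consistency_estimate(rater1, rater2):
    """Consistency estimate: Pearson correlation between the ratings."""
    return float(np.corrcoef(rater1, rater2)[0, 1])

# Rater 2 is systematically one point more lenient than rater 1:
# consensus is 0.0, yet consistency is a perfect 1.0.
r1 = [1, 2, 3, 4, 5]
r2 = [2, 3, 4, 5, 6]
```

    Which estimate is appropriate depends on whether ratings will be summarized by exact category or by relative standing, which is the article's central point.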


    Sun Youchao; Shi Jun


    The reliability assessment of unit-systems at two levels is the most important content in the reliability multi-level synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity together with the principle of equivalence of information quantity, an entropy method of data-information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived based on the principle of information-quantity equivalence. General models for entropy-method synthesis assessment of approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. The applications of the entropy method are discussed by way of practical examples. Compared with traditional methods, the entropy method is found to be valid and practicable, and the assessment results are satisfactory.

  7. A reliable power management scheme for consistent hashing based distributed key value storage systems

    Nan-nan ZHAO; Ji-guang WAN; Jun WANG; Chang-sheng XIE


    Distributed key value storage systems are among the most important types of distributed storage systems currently deployed in data centers. Nowadays, enterprise data centers are facing growing pressure to reduce their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent hashing based distributed key value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing replicas of each object on a number of nodes in the consistent hash ring, we arrange the replicas of objects on non-overlapping tiers of nodes in the ring. This allows the system to enter various power modes by powering down subsets of servers without violating data availability. The predictive PMS predicts workloads and adapts to load fluctuation. It cooperates with the multi-tier replication strategy to provide power proportionality for the system. To ensure that the reliability of the system is maintained when replicas are powered down, writes destined for standby replicas are redirected to active servers, which ensures the failure tolerance of the system. GreenCHT is implemented on top of Sheepdog, a distributed key value storage system that uses consistent hashing as its underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, the evaluation shows that GreenCHT provides significant power savings while maintaining the desired performance; we observe that GreenCHT can reduce power consumption by 35% to 61%.
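    The tiered replica placement described above can be sketched as a toy consistent-hash ring. This is an illustrative assumption-laden sketch (the class name, MD5 hashing, and tier layout are invented, not the GreenCHT implementation): replica i of an object lands on tier i, so powering down an entire tier still leaves the other tiers' replicas reachable.

```python
import hashlib
from bisect import bisect_right

class TieredRing:
    """Toy consistent-hash ring with non-overlapping node tiers."""

    def __init__(self, tiers):
        # tiers: list of node-name lists, e.g. [["a", "b"], ["c", "d"]];
        # each tier gets its own ring of (hash, node) pairs.
        self.rings = [sorted((self._h(n), n) for n in nodes)
                      for nodes in tiers]

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def replicas(self, key, active_tiers=None):
        """Replica i of `key` is the clockwise successor on tier i.
        Passing a subset of tiers models a low-power mode in which the
        remaining tiers have been powered down."""
        h = self._h(key)
        active = range(len(self.rings)) if active_tiers is None else active_tiers
        out = []
        for t in active:
            ring = self.rings[t]
            keys = [k for k, _ in ring]
            i = bisect_right(keys, h) % len(ring)
            out.append(ring[i][1])
        return out
```

    With two tiers, dropping to `active_tiers=[0]` halves the active servers while one replica of every object stays available, which is the availability argument the abstract makes.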

  8. Internal consistency reliability of the WISC-IV among primary school students.

    Ryan, Joseph J; Glass, Laura A; Bartels, Jared M


    Internal consistency reliabilities of the WISC-IV subtest and index scores were estimated for a sample of 76 primary school students from a small Midwestern community. Means for age and Full Scale IQ were 8.2 yr. (SD = 23) and 110.5 (SD = 11.7), respectively. Internal consistency reliabilities were compared with those for the WISC-IV standardization sample of 200. The range of reliabilities for the subtests was from .76 for Picture Concepts to .94 for Arithmetic and from .92 for Perceptual Reasoning Index to .96 for Verbal Comprehension Index and Full Scale IQ. The Full Scale IQ internal consistency reliability is comparable to that of the standardization sample. However, in all but one instance the reliabilities were greater than those of the normative sample.

  9. Reliability, Dimensionality, and Internal Consistency as Defined by Cronbach: Distinct Albeit Related Concepts

    Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U.


    This article uses definitions provided by Cronbach in his seminal paper on coefficient alpha to show that the concepts of reliability, dimensionality, and internal consistency are distinct but interrelated. The article begins with a critique of the definition of reliability and then explores mathematical properties of Cronbach's alpha. Internal consistency…

  10. Content validation: clarity/relevance, reliability and internal consistency of enunciative signs of language acquisition.

    Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de


    To analyze the results of the validation of enunciative signs of language acquisition built for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgments of clarity and relevance by a sample of six experts, doctors of linguistics with knowledge of psycholinguistics and the language clinic. In the validation of reliability, two judges/evaluators helped to apply the instruments to videos of 20% of the total sample of mother-infant dyads, using the inter-evaluator method. The internal consistency method was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3 to 6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then converted to computer applications for statistical analysis. The judgments of clarity/relevance indicated no modifications to be made in the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and for Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy as to the content validity of the instruments created for both age groups, demonstrating the relevance of the content of the enunciative signs of language acquisition.
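    The inter-evaluator agreement statistic reported here, Cohen's kappa, discounts the agreement two judges would reach by chance. A minimal sketch (not the authors' code; the ratings in the test are invented):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters scoring the same items."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_obs = (r1 == r2).mean()  # observed agreement
    # chance agreement: product of each rater's marginal rates per category
    p_chance = sum((r1 == c).mean() * (r2 == c).mean() for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)
```

    On the Landis-Koch scale used in the abstract, values of 0.6-0.79 are "substantial" and 0.8-1.0 "almost perfect" agreement.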

  11. Structural Consistency: Enabling XML Keyword Search to Eliminate Spurious Results Consistently

    Lee, Ki-Hoon; Han, Wook-Shin; Kim, Min-Soo


    XML keyword search is a user-friendly way to query XML data using only keywords. In XML keyword search, to achieve high precision without sacrificing recall, it is important to remove spurious results not intended by the user. Efforts to eliminate spurious results have enjoyed some success by using the concepts of LCA or its variants, SLCA and MLCA. However, existing methods still could find many spurious results. The fundamental cause for the occurrence of spurious results is that the existing methods try to eliminate spurious results locally without global examination of all the query results and, accordingly, some spurious results are not consistently eliminated. In this paper, we propose a novel keyword search method that removes spurious results consistently by exploiting the new concept of structural consistency.

  12. Preliminary reliability and internal consistency of the Wheelchair Components Questionnaire for Condition.

    Rispin, Karen; Dittmer, Melanie; McLean, Jessica; Wee, Joy


    Wheelchair durability and maintenance condition are key factors of wheelchair function. Durability studies done with double-drum and drop testers, although valuable, do not perfectly imitate conditions of use. Durability data may be harvested from clinical records; however, these may be inconsistent because protocols for recording information differ from place to place. Wheelchair professionals with several years of experience often develop a good eye for wheelchair maintenance condition. The Wheelchair Components Questionnaire for Condition (WCQc) was developed as a professional-report questionnaire to provide data specifically on the maintenance condition of a wheelchair. The goal of this study was to obtain preliminary test-retest reliability and internal consistency for the WCQc. Participants were a convenience sample of wheelchair professionals who self-reported more than two years of wheelchair experience and completed the WCQc on the same wheelchair twice. Results indicated preliminary reliability and internal consistency for domain-related questions and the entire questionnaire. Implications for rehabilitation: The WCQc, if administered routinely at regular intervals, can be used to monitor wheelchair condition and alert users and health professionals about the need for repair or replacement. The WCQc is not difficult to use, making early monitoring for wear or damage more feasible. The earlier a tool can detect need for maintenance, the higher the likelihood that appropriate measures may be employed in a timely fashion to maximize the overall durability of wheelchairs and minimize clinical complications. Keeping wheelchairs appropriately maintained allows users to minimize effort expended when using them, and maximize their function. It also lowers the risk of injury due to component failure. When assessing groups of similar wheelchairs, organizations involved in funding wheelchairs can use data from the WCQc to make purchase decisions based on durability, and …

  13. Assessment of disabilities in stroke patients with apraxia: internal consistency and inter-observer reliability.

    Heugten, C.M. van; Dekker, J.; Deelman, B.G.; Stehmann-Saris, J.C.; Kinebanian, A.


    In this paper the internal consistency and inter-observer reliability of the assessment of disabilities in stroke patients with apraxia is presented. Disabilities were assessed by means of observation of activities of daily living (ADL). The study was conducted at occupational therapy departments in

  14. Assessment of disabilities in stroke patients with apraxia: Internal consistency and inter-observer reliability

    van Heugten, CM; Dekker, J; Deelman, BG; Stehmann-Saris, JC; Kinebanian, A


    In this paper the internal consistency and inter-observer reliability of the assessment of disabilities in stroke patients with apraxia is presented. Disabilities were assessed by means of observation of activities of daily living (ADL). The study was conducted at occupational therapy departments in


  16. Towards consistent and reliable Dutch and international energy statistics for the chemical industry

    Neelis, M.L.; Pouwelse, J.W.


    Consistent and reliable energy statistics are of vital importance for proper monitoring of energy-efficiency policies. In recent studies, irregularities have been reported in the Dutch energy statistics for the chemical industry. We studied in depth the company data that form the basis of the energy

  17. Corrections for criterion reliability in validity generalization: The consistency of Hermes, the utility of Midas

    Jesús F. Salgado


    There is criticism in the literature about the use of interrater coefficients to correct for criterion reliability in validity generalization (VG) studies, disputing whether .52 is an accurate and non-dubious estimate of the interrater reliability of overall job performance (OJP) ratings. We present a second-order meta-analysis of three independent meta-analytic studies of the interrater reliability of job performance ratings and make a number of comments and reflections on LeBreton et al.'s paper. The results of our meta-analysis indicate that the interrater reliability for a single rater is .52 (k = 66, N = 18,582, SD = .105). Our main conclusions are: (a) the value of .52 is an accurate estimate of the interrater reliability of overall job performance for a single rater; (b) it is not reasonable to conclude that past VG studies that used .52 as the criterion reliability value have a less than secure statistical foundation; (c) based on interrater reliability, test-retest reliability, and coefficient alpha, supervisor ratings are a useful and appropriate measure of job performance and can be confidently used as a criterion; (d) validity correction for criterion unreliability has been unanimously recommended by "classical" psychometricians and I/O psychologists as the proper way to estimate predictor validity, and is still recommended at present; (e) the substantive contribution of VG procedures to inform HRM practices in organizations should not be lost in these technical points of debate.
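    The criterion-reliability correction at issue here is the classical correction for attenuation: observed validity divided by the square root of criterion reliability. A one-line sketch, using the abstract's .52 single-rater estimate with an invented observed validity of .25:

```python
def correct_for_criterion_unreliability(r_xy, r_yy):
    """Classical correction for attenuation: operational validity is the
    observed predictor-criterion correlation divided by the square root
    of the criterion reliability."""
    return r_xy / (r_yy ** 0.5)

# An observed validity of .25 against ratings with interrater
# reliability .52 corrects to roughly .35.
corrected = correct_for_criterion_unreliability(0.25, 0.52)
```

    The debate summarized in the abstract is about whether .52 is the right value of r_yy to plug in, not about the correction formula itself.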

  18. Assessing motivation for work environment improvements: internal consistency, reliability and factorial structure.

    Hedlund, Ann; Ateg, Mattias; Andersson, Ing-Marie; Rosén, Gunnar


    Workers' motivation to actively take part in improvements to the work environment is assumed to be important for the efficiency of investments for that purpose. That gives rise to the need for a tool to measure this motivation. A questionnaire to measure motivation for improvements to the work environment has been designed. Internal consistency and test-retest reliability of the domains of the questionnaire have been measured, and the factorial structure has been explored, from the answers of 113 employees. The internal consistency is high (0.94), as well as the correlation for the total score (0.84). Three factors are identified accounting for 61.6% of the total variance. The questionnaire can be a useful tool in improving intervention methods. The expectation is that the tool can be useful, particularly with the aim of improving efficiency of companies' investments for work environment improvements. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. The memory failures of everyday questionnaire (MFE): internal consistency and reliability.

    Montejo Carrasco, Pedro; Montenegro, Peña Mercedes; Sueiro, Manuel J


    The Memory Failures of Everyday Questionnaire (MFE) is one of the most widely used instruments to assess memory failures in daily life. The original scale has nine response options, making it difficult to apply; we created a three-point scale (0-1-2) with response choices that make it easier to administer. We examined the two versions' equivalence in a sample of 193 participants between 19 and 64 years of age. The test-retest reliability and internal consistency of the version we propose were also computed in a sample of 113 people. Several indicators attest to the two forms' equivalence, notably the correlation between the items' means (r = .94) with the MFE 1-9. The MFE 0-2 provides a brief, simple evaluation, so we recommend it for use in clinical practice as well as research.

  20. A reliable and consistent production technology for high volume compacted graphite iron castings

    Liu Jincheng


    The demands for improved engine performance, fuel economy, durability, and lower emissions provide a continual challenge for engine designers. The use of Compacted Graphite Iron (CGI) has been established for successful high volume series production in the passenger vehicle, commercial vehicle and industrial power sectors over the last decade. The increased demand for CGI engine components provides new opportunities for the cast iron foundry industry to establish efficient and robust CGI volume production processes, in China and globally. The production window for stable CGI is narrow and constantly moving; therefore, a single one-step addition of magnesium alloy and inoculant cannot ensure a reliable and consistent production process for complicated CGI engine castings. The present paper introduces the SinterCast thermal analysis process control system, which provides for the consistent production of CGI with low nodularity and reduced porosity, without risking the formation of flake graphite. The technology is currently being used in high volume Chinese foundry production. The Chinese foundry industry can develop complicated, high demand CGI engine castings with the proper process control technology.

  1. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena


    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
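    As a rough illustration of the kind of scheme discussed, the sketch below applies an explicit Lax-Friedrichs update to the convective (hindered settling) part of such a model only; the Vesilind velocity closure, all parameter values, and the closed-column boundaries are illustrative assumptions, not the authors' method:

```python
import numpy as np

def settle_batch(c0, v0=1e-3, rv=400.0, depth=1.0, t_end=100.0):
    """1-D batch hindered settling, convective flux only (compression
    and dispersion omitted). Depth increases with cell index; the
    column is closed at top and bottom (zero-flux boundaries).
    Assumed Vesilind closure: v(c) = v0 * exp(-rv * c)."""
    c = np.asarray(c0, dtype=float).copy()
    dx = depth / c.size
    f = lambda conc: conc * v0 * np.exp(-rv * conc)  # settling flux
    t = 0.0
    while t < t_end:
        dt = min(0.4 * dx / v0, t_end - t)  # CFL bound: |f'(c)| <= v0
        ce = np.concatenate(([c[0]], c, [c[-1]]))  # ghost cells
        fe = f(ce)
        # Lax-Friedrichs numerical flux at each cell interface
        flux = 0.5 * (fe[:-1] + fe[1:]) - 0.5 * (dx / dt) * np.diff(ce)
        flux[0] = 0.0   # closed top
        flux[-1] = 0.0  # closed bottom
        c -= (dt / dx) * np.diff(flux)
        t += dt
    return c
```

    Because the boundary fluxes are zero, total mass is conserved exactly, and starting from a uniform profile the concentration builds up at the bottom, which is the qualitative behaviour the model family is built to capture.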

  2. Stroke and aphasia quality of life scale in Kannada-evaluation of reliability, validity and internal consistency

    S Kiran


    Background: Quality of life (QoL) reflects a person's overall well-being. Recently, QoL measures have become critical and relevant in stroke survivors. Instruments measuring the QoL of individuals with aphasia are apparently rare in the Indian context. The present study aimed to develop a Kannada instrument to measure the QoL of people with aphasia. The study objectives were to validate the Stroke and Aphasia Quality of Life scale-39 (SAQOL-39) in Kannada and to measure its test-retest reliability and internal consistency. Materials and Methods: The original English instrument was modified considering socio-cultural differences between native English and Kannada speakers. Cross-linguistic adaptation of the SAQOL-39 into Kannada was carried out through a forward-backward translation scheme. The scale was administered to 32 people with aphasia from Karnataka (a state in India). For a direct understanding of the subject's QoL, scores were categorized into QoL severity levels. Item reliability of the Kannada version was examined by measuring Cronbach's alpha. Test-retest reliability was examined by calculating the intraclass correlation coefficient (ICC). Results: The Kannada SAQOL-39 showed good acceptability with minimal missing data and excellent test-retest reliability (ICC = 0.8). The value of Cronbach's alpha observed for the four items modified from the original version was 0.9 each, and the mean alpha of all Kannada items was 0.9, demonstrating high internal consistency. Conclusions: The present study offers a valid, reliable tool to measure QoL in Kannada-speaking individuals with aphasia. This tool is useful in cross-center, cross-national comparison of QoL data from people with aphasia. This instrument also permits direct translation into other Indian languages, as the items are culturally validated for the Indian population. This study promotes future research using the Kannada SAQOL-39.

  3. PM2.5 data reliability, consistency, and air quality assessment in five Chinese cities

    Liang, Xuan; Li, Shuo; Zhang, Shuyi; Huang, Hui; Chen, Song Xi


    We investigate particulate matter (PM2.5) data reliability in five major Chinese cities: Beijing, Shanghai, Guangzhou, Chengdu, and Shenyang, by cross-validating data from the U.S. diplomatic posts and the nearby Ministry of Environmental Protection sites based on 3 years' data from January 2013. The investigation focuses on the consistency in air quality assessment derived from the two data sources. It consists of studying (i) the occurrence length and percentage of different PM2.5 concentration ranges; (ii) the air quality assessment for each city; and (iii) the winter-heating effects in Beijing and Shenyang. Our analysis indicates that the two data sources produced highly consistent air quality assessments in the five cities. This is encouraging, as it injects much-needed confidence in the air pollution measurements from China. We also provide air quality assessments on the severity and trends of the fine particulate matter pollution in the five cities. The assessments are produced by statistically constructing the standard monthly meteorological conditions for each city, which are designed to minimize the effects of confounding factors due to yearly variations of some important meteorological variables. Our studies show that Beijing and Chengdu had the worst air quality, while Guangzhou and Shanghai fared the best among the five cities. Most of the five cities had their PM2.5 concentration decrease significantly in the last 2 years. By linking the air quality with the amount of energy consumed, our study suggests that geographical configuration is a significant factor in a city's air quality management and economic development.

  4. Internal consistency and test-retest reliability of an instrumented functional reaching task using wireless electromyographic sensors.

    Varghese, Rini; Hui-Chan, Christina W Y; Wang, Edward; Bhatt, Tanvi


    The purpose of this study was to establish the internal consistency and test-retest reliability of the electromyographic and accelerometric data sampled from the prime movers of the dominant arm during an antigravity, within-arm's-length stand-reaching task without trunk restraint. Ten healthy young adults participated in two experimental sessions, approximately 7-10 days apart. During each session, subjects performed 15 trials of both a flexion- and an abduction-reaching task. Surface EMG and acceleration were sampled from the anterior and middle deltoid using wireless sensors. Reliability was established using Cronbach's alpha, intraclass correlation coefficients (ICC(2,k)) and standard errors of measurement (SEM) for electromyographic reaction time, burst duration and normalized amplitude, along with peak acceleration. Results indicated high degrees of inter-trial and test-retest reliability for flexion (Cronbach's α range = 0.92-0.99; ICC range = 0.82-0.92) as well as abduction (Cronbach's α range = 0.94-0.99; ICC range = 0.81-0.94) reaching. The SEM associated with response variables for flexion and abduction ranged from 1.55-3.26% and 3.33-3.95% of means, respectively. Findings from this study revealed that electromyographic and accelerometric data collected from prime movers of the arm during the relatively functional stand-reaching task were highly reproducible. Given its high reliability and portability, the proposed test could have applications in clinical and laboratory settings to quantify upper limb function.
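    The reliability statistics reported here, ICC(2,k) and SEM, can be computed from a subjects-by-trials matrix. The sketch below follows the standard Shrout-Fleiss two-way random-effects formulation; the data used in the test are made up, not from the study:

```python
import numpy as np

def icc_2k(data):
    """ICC(2,k): two-way random-effects ICC for the average of k trials
    (Shrout & Fleiss), from an (n_subjects, k_trials) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_r = ss_rows / (n - 1)                               # subjects
    ms_c = ss_cols / (k - 1)                               # trials
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)

def sem(scores, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return np.std(scores, ddof=1) * np.sqrt(1 - reliability)
```

    A high ICC with a small SEM, as in this abstract, indicates that repeated measurements of the same subject cluster tightly around a stable value.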

  5. Planck 2013 results. XXXI. Consistency of the Planck data

    Ade, P. A. R.; Arnaud, M.; Ashdown, M.


    In this paper, we analyse the level of consistency achieved in the 2013 Planck data. We concentrate on comparisons between the 70, 100, and 143 GHz channel maps and power spectra, particularly over the angular scales of the first and second acoustic peaks, on maps masked for diffuse foreground emission. We find agreement (measured by deviation of the ratio from unity) between the 70 and 100 GHz power spectra averaged over 70 ≤ ℓ ≤ 390 at the 0.8% level, and agreement between the 143 and 100 GHz power spectra of 0.4% over the same ℓ range. These values are within and consistent with the overall uncertainties in calibration given in the Planck 2013 results. We show explicitly that correction of the missing near sidelobe power … the …/100 ratio. Correcting for this, the 70, 100, and 143 GHz power spectra agree to 0.4% over the first two acoustic peaks. The likelihood analysis that produced the 2013 cosmological parameters incorporated uncertainties larger than this.

  6. Public Perceptions of Reliability in Examination Results in England

    He, Qingping; Boyle, Andrew; Opposs, Dennis


    Building on findings from existing qualitative research into public perceptions of reliability in examination results in England, a questionnaire was developed and administered to samples of teachers, students and employers to study their awareness of and opinions about various aspects of reliability quantitatively. Main findings from the study…

  7. Planck 2013 results. XXXI. Consistency of the Planck data

    Ade, P A R; Ashdown, M; Aumont, J; Baccigalupi, C; Banday, A.J; Barreiro, R.B; Battaner, E; Benabed, K; Benoit-Levy, A; Bernard, J.P; Bersanelli, M; Bielewicz, P; Bond, J.R; Borrill, J; Bouchet, F.R; Burigana, C; Cardoso, J.F; Catalano, A; Challinor, A; Chamballu, A; Chiang, H.C; Christensen, P.R; Clements, D.L; Colombi, S; Colombo, L.P.L; Couchot, F; Coulais, A; Crill, B.P; Curto, A; Cuttaia, F; Danese, L; Davies, R.D; Davis, R.J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Desert, F.X; Dickinson, C; Diego, J.M; Dole, H; Donzelli, S; Dore, O; Douspis, M; Dupac, X; Ensslin, T.A; Eriksen, H.K; Finelli, F; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Gonzalez-Nuevo, J; Gorski, K.M.; Gratton, S.; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Hansen, F.K; Hanson, D; Harrison, D; Henrot-Versille, S; Herranz, D; Hildebrandt, S.R; Hivon, E; Hobson, M; Holmes, W.A.; Hornstrup, A; Hovest, W.; Huffenberger, K.M; Jaffe, T.R; Jaffe, A.H; Jones, W.C; Keihanen, E; Keskitalo, R; Knoche, J; Kunz, M; Kurki-Suonio, H; Lagache, G; Lahteenmaki, A; Lamarre, J.M; Lasenby, A; Lawrence, C.R; Leonardi, R; Leon-Tavares, J; Lesgourgues, J; Liguori, M; Lilje, P.B; Linden-Vornle, M; Lopez-Caniego, M; Lubin, P.M; Macias-Perez, J.F; Maino, D; Mandolesi, N; Maris, M; Martin, P.G; Martinez-Gonzalez, E; Masi, S; Matarrese, S; Mazzotta, P; Meinhold, P.R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschenes, M.A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Norgaard-Nielsen, H.U; Noviello, F; Novikov, D; Novikov, I; Oxborrow, C.A; Pagano, L; Pajot, F; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, D; Pearson, T.J; Perdereau, O; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Pratt, G.W; Prunet, S; Puget, J.L; Rachen, J.P; Reinecke, M; Remazeilles, M; 
Renault, C; Ricciardi, S.; Ristorcelli, I; Rocha, G.; Roudier, G; Rubino-Martin, J.A; Rusholme, B; Sandri, M; Scott, D; Stolyarov, V; Sudiwala, R; Sutton, D; Suur-Uski, A.S; Sygnet, J.F; Tauber, J.A; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Wade, L.A; Wandelt, B.D; Wehus, I K; White, S D M; Yvon, D; Zacchei, A; Zonca, A


    The Planck design and scanning strategy provide many levels of redundancy that can be exploited to provide tests of internal consistency. One of the most important is the comparison of the 70 GHz (amplifier) and 100 GHz (bolometer) channels. Based on different instrument technologies, with feeds located differently in the focal plane, analysed independently by different teams using different software, and near the minimum of diffuse foreground emission, these channels are in effect two different experiments. The 143 GHz channel has the lowest noise level on Planck, and is near the minimum of unresolved foreground emission. In this paper, we analyse the level of consistency achieved in the 2013 Planck data. We concentrate on comparisons between the 70, 100, and 143 GHz channel maps and power spectra, particularly over the angular scales of the first and second acoustic peaks, on maps masked for diffuse Galactic emission and for strong unresolved sources. Difference maps covering angular scales from 8°...

  8. Estimation of Internal Consistency Reliability When Test Parts Vary in Effective Length.

    Feldt, Leonard S.; Charter, Richard A.


    Evaluating a test's reliability often requires dividing it into 3 or more unequal parts, which causes violation of the tau equivalence assumption of Cronbach's alpha. This article presents a criterion for abandoning alpha and an approach for computing a more appropriate estimate of reliability, the Gilmer-Feldt coefficient. (Author)

  9. When does inconsistency hurt? On the relation between phonological-consistency effects and the reliability of sublexical units

    Martensen, H.E.; Maris, E.G.G.; Dijkstra, A.F.J.


    Phonological consistency describes to what extent a letter string in one word is pronounced equally in other words. Phonological reliability describes to what extent a sublexical unit is usually consistent throughout one language. The relationship between the two concepts was investigated by

  10. Results from the LHC Beam Dump Reliability Run

    Uythoven, J; Carlier, E; Castronuovo, F; Ducimetière, L; Gallet, E; Goddard, B; Magnin, N; Verhagen, H


    The LHC Beam Dumping System is one of the vital elements of the LHC Machine Protection System and has to operate reliably every time a beam dump request is made. Detailed dependability calculations have been made, resulting in expected rates for the different system failure modes. A 'reliability run' of the whole system, installed in its final configuration in the LHC, has been made to discover infant mortality problems and to compare the occurrence of the measured failure modes with their calculations.

  11. Gestalt assessment of online educational resources may not be sufficiently reliable and consistent.

    Krishnan, Keeth; Thoma, Brent; Trueger, N Seth; Lin, Michelle; Chan, Teresa M


    Online open educational resources are increasingly used in medical education, particularly blogs and podcasts. However, it is unclear whether these resources can be adequately appraised by end-users. Our goal was to determine whether gestalt-based recommendations are sufficient for emergency medicine trainees and attending physicians to reliably recommend online educational resources to others. Raters (33 trainees and 21 attendings in emergency medicine from North America) were asked to rate 40 blog posts according to whether, based on their gestalt, they would recommend the resource to (1) a trainee or (2) an attending physician. The ratings' reliability was assessed using intraclass correlation coefficients (ICC). Associations between groups' mean scores were assessed using Pearson's r. A repeated measures analysis of variance (RM-ANOVA) was completed to determine the effect of the level of training on the gestalt recommendation scale (i.e. trainee vs. attending). Trainees demonstrated poor reliability when recommending resources for other trainees (ICC = 0.21, 95% CI 0.13-0.39) and attendings (ICC = 0.16, 95% CI 0.09-0.30). Similarly, attendings had poor reliability when recommending resources for trainees (ICC = 0.27, 95% CI 0.18-0.41) and other attendings (ICC = 0.22, 95% CI 0.14-0.35). There were moderate correlations between the mean scores for each blog post when either trainees or attendings considered the same target audience. The RM-ANOVA also corroborated that there is a main effect of the proposed target audience on the ratings by both trainees and attendings. A gestalt-based rating system is not sufficiently reliable when recommending online educational resources to trainees and attendings. Trainees' gestalt ratings for recommending resources for both groups were especially unreliable. Our findings suggest the need for structured rating systems to rate online educational resources.
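
    Inter-rater ICCs of the kind reported here are commonly computed from a two-way random-effects ANOVA decomposition (the Shrout-Fleiss ICC(2,1) and ICC(2,k) forms). A minimal sketch, assuming a ratings matrix with rows = rated items (e.g. blog posts) and columns = raters; this is a generic textbook formulation, not the study's analysis code:

    ```python
    import numpy as np

    def icc2(ratings):
        """ICC(2,1) and ICC(2,k), two-way random effects; rows=targets, cols=raters."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)
        msb = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between targets
        msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
        resid = ratings - row_means[:, None] - col_means[None, :] + grand
        mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual mean square
        icc_single = (msb - mse) / (msb + (k - 1) * mse + k * (msc - mse) / n)
        icc_avg = (msb - mse) / (msb + (msc - mse) / n)
        return icc_single, icc_avg
    ```

    When raters agree perfectly both forms equal 1.0; a constant offset between raters lowers ICC(2,1) but is partly absorbed by the averaged form ICC(2,k).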

  12. Online cognition: factors facilitating reliable online neuropsychological test results.

    Feenstra, Heleen E M; Vermeulen, Ivar E; Murre, Jaap M J; Schagen, Sanne B


    Online neuropsychological test batteries could allow for large-scale cognitive data collection in clinical studies. However, the few online neuropsychological test batteries that are currently available often still require supervision or lack proper psychometric evaluation. In this paper, we have outlined prerequisites for proper development and use of online neuropsychological tests, with the focus on reliable measurement of cognitive function in an unmonitored setting. First, we identified several technical, contextual, and psychological factors that should be taken into account in order to facilitate reliable test results of online tests in the unmonitored setting. Second, we outlined a methodology of quality assurance needed in order to obtain reliable cognitive data in the long run. Based on factors that distinguish the online unmonitored test setting from the traditional face-to-face setting, we provide a set of basic requirements and suggestions for optimal development and use of unmonitored online neuropsychological tests, including suggestions on acquiring reliability, validity, and norm scores. When properly addressing factors that could hamper reliable test results during development and use, online neuropsychological tests could aid large-scale data collection for clinical studies in the future. Investment in both proper development of online neuropsychological test platforms and the performance of accompanying psychometric studies is currently required.

  13. Reliability and M. T. T. F. analysis of a power plant consisting of three generators by Boolean Function technique

    Gupta, P.P.; Sharma, R.K.


    The reliability behaviour of a non-repairable, parallel redundant complex system, namely a power plant, is investigated. The purpose of the system is to supply power generated by three generators from a power house to a highly critical consumer, connected by cables, switches, etc. The reliability of the power supply to the critical consumer has been obtained by using the Boolean Function technique. Moreover, an important reliability parameter, the M.T.T.F. (mean time to failure), has also been computed for exponential failure rates of the components. A numerical example with graphs is appended at the end to highlight the important results.
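
    The parallel-redundancy arithmetic underlying such an analysis can be sketched as below. This is the generic textbook model (independent units with exponential lifetimes), not the paper's Boolean Function derivation, and the failure rates are illustrative values of my own.

    ```python
    import numpy as np

    def parallel_reliability(t, lambdas):
        """System survives while at least one unit survives: R(t) = 1 - prod(1 - e^{-lam*t})."""
        t = np.asarray(t, dtype=float)
        unrel = np.ones_like(t)
        for lam in lambdas:
            unrel *= 1.0 - np.exp(-lam * t)  # probability that this unit has failed
        return 1.0 - unrel

    def mttf(lambdas, t_max=10_000.0, steps=200_001):
        """MTTF = integral of R(t) dt, evaluated by the trapezoidal rule."""
        t = np.linspace(0.0, t_max, steps)
        r = parallel_reliability(t, lambdas)
        dt = t[1] - t[0]
        return float((r[:-1] + r[1:]).sum() * dt / 2.0)
    ```

    For three identical exponential units the closed form is (1/λ)(1 + 1/2 + 1/3), so `mttf([0.01] * 3)` comes out close to 183.3, well above the single-unit mean life of 100.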

  14. A one-step immune-chromatographic Helicobacter pylori stool antigen test for children was quick, consistent, reliable and specific.

    Kalach, Nicolas; Gosset, Pierre; Dehecq, Eric; Decoster, Anne; Georgel, Anne-France; Spyckerelle, Claire; Papadopoulos, Stephanos; Dupont, Christophe; Raymond, Josette


    This French study assessed a quick, noninvasive, immuno-chromatographic Helicobacter pylori (H. pylori) stool antigen test for detecting infections in children. We enrolled 158 children, with a median age of 8.5 years (range eight months to 17 years), with digestive symptoms suggesting upper gastrointestinal tract disease. Upper digestive endoscopy was performed with gastric biopsy specimens for histology, a rapid urease test, a culture test and quantitative real-time polymerase chain reaction. The H. pylori stool antigen test was performed twice for each child and the results were compared to the reference method. The reference methods showed that 23 (14.6%) of the 158 children tested were H. pylori positive. The H. pylori stool antigen test showed 91.3% sensitivity, with a 95% confidence interval (95% CI) of 86.9-95.6, and 97% specificity (95% CI 94.3-99.6), with a positive likelihood ratio of 30.84 and a negative likelihood ratio of 0.09. The test accuracy was 96.2% (95% CI 93.2-99.1). The two blinded independent observers produced identical H. pylori stool antigen test results and the Kappa coefficient for the H. pylori stool antigen test was one. The H. pylori stool antigen test was found to be a consistent, reliable, quick and specific test for detecting H. pylori infection in children. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
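
    The accuracy figures above all follow from a standard 2×2 confusion table. The sketch below uses counts reconstructed to be consistent with the reported percentages (21/23 true positives, 131/135 true negatives); these counts are my assumption, not data taken from the paper.

    ```python
    def diagnostic_stats(tp, fp, fn, tn):
        """Sensitivity, specificity, likelihood ratios, and accuracy from a 2x2 table."""
        sens = tp / (tp + fn)        # true-positive rate
        spec = tn / (tn + fp)        # true-negative rate
        lr_pos = sens / (1 - spec)   # positive likelihood ratio
        lr_neg = (1 - sens) / spec   # negative likelihood ratio
        acc = (tp + tn) / (tp + fp + fn + tn)
        return sens, spec, lr_pos, lr_neg, acc
    ```

    `diagnostic_stats(tp=21, fp=4, fn=2, tn=131)` gives sensitivity ≈ 0.913, specificity ≈ 0.970, LR+ ≈ 30.8, LR− ≈ 0.09 and accuracy ≈ 0.962, in line with the figures quoted in the abstract.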

  15. Investigation for Ensuring the Reliability of the MELCOR Analysis Results

    Sung, Joonyoung; Maeng, Yunhwan; Lee, Jaeyoung [Handong Global Univ., Pohang (Korea, Republic of)]


    Flow rate could also be a main factor to examine, since it governs the thermal balance maintained through heat transfer inside the fuel assembly. Some concerns about the reliability of MELCOR results were raised in the 2nd technical report of the NSRC project. To confirm whether MELCOR results are dependable, experimental data from phase 1 of the Sandia Fuel Project were used as a reference for comparison. In Spent Fuel Pool (SFP) severe accidents, especially boil-off, partial loss-of-coolant, and complete loss-of-coolant accidents, heat source and flow rate are the main quantities to analyse in the MELCOR results. The heat source comprises decay heat and oxidation heat. Because the heat source can lead to a zirconium fire if heat accumulates in the spent fuel rods and the cladding temperature rises continuously until oxidation heat is generated, it is a main factor to be confirmed. This work investigated the reliability of MELCOR results in order to confirm the physical phenomena occurring in an SFP severe accident. Most results showed that MELCOR outputs differed significantly with minute changes of a main parameter under identical conditions. It is therefore necessary to choose oxidation coefficients that delineate the real phenomena as closely as possible.

  16. Some Results on the Overall Reliability of Undirected Graphs.


    Satyanarayana, A.; Chang, Mark K. (Operations Research Center, University of California). Report ORC-81-2; sponsored by the Office of Naval Research (NR 042 238). [Remainder of the scanned record is illegible.]

  17. Assessing the Reliability of Geoelectric Imaging Results for Permafrost Investigations

    Marescot, L.; Loke, M.; Abbet, D.; Delaloye, R.; Hauck, C.; Hilbich, C.; Lambiel, C.; Reynard, E.


    The effects of global climate change on mountain permafrost are of increasing concern; warming thaws permafrost, thereby increasing the risk of slope instabilities. Consequently, knowledge of the extent and location of permafrost are important for construction and other geotechnical and land-management activities in mountainous areas. Geoelectric imaging is a useful tool for mapping and characterizing permafrost occurrences. To overcome the generally poor electrical contacts in the active layer, geoelectric surveys usually involve coupling the electrodes to the ground via sponges soaked in salt water. The data are processed and inverted in terms of resistivity models of the subsurface. To monitor the evolution of mountain permafrost, time-lapse geoelectric imaging may be employed. A challenging aspect in geoelectric imaging of permafrost is the very large resistivity contrast between frozen and unfrozen material. Such a contrast makes inversion and interpretation difficult. To assess whether features at depth are required by the data or are artifacts of the inversion process, the reliability of models needs to be evaluated. We use two different approaches to assess the reliability of resistivity images in permafrost investigations: (i) depth of investigation (DOI) and (ii) resolution matrix maps. To compute the DOI, two inversions of the same data set using quite different reference resistivity models are carried out. At locations where the resistivity is well constrained by the data, the inversions yield the same results. At other locations, the inversions yield different values that are controlled by the reference models. The resolution matrix, which is based on the sensitivity matrix calculated during the inversion, quantifies the degree to which each resistivity cell in the model can be resolved by the data. Application of these two approaches to field data acquired in the Swiss Alps and Jura Mountains suggests that it is very difficult to obtain dependable
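
    The DOI comparison described above is typically computed cell-by-cell with the Oldenburg-Li index from the two inversions run against different reference models. A minimal sketch, assuming the models are expressed in log-resistivity and the reference models are uniform (both assumptions on my part):

    ```python
    import numpy as np

    def doi_index(m1, m2, ref1, ref2):
        """Oldenburg-Li depth-of-investigation index per model cell.

        m1, m2: inverted models (log-resistivity) obtained from two different
        uniform reference models with log-resistivity values ref1 and ref2.
        Values near 0: the data constrain the cell; near 1: the reference does.
        """
        m1 = np.asarray(m1, dtype=float)
        m2 = np.asarray(m2, dtype=float)
        return (m1 - m2) / (ref1 - ref2)
    ```

    Where the two inversions agree, the index is 0 (the data control the result); where each inversion simply returns its own reference value, the index is 1 (the reference controls it), mirroring the qualitative criterion in the abstract.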

  18. Test-Retest Reliability and Internal Consistency of the Activity Card Sort-Australia (18-64).

    Gustafsson, Louise; Hung, Inez Hui Min; Liddle, Jacki


    The Activity Card Sort (ACS) measures activity engagement levels. The Activity Card Sort-Australian version for adults aged 18 to 64 (ACS-Aus (18-64)) was recently developed, and its psychometric properties have not yet been determined. This study was established to determine the test-retest reliability and internal consistency of the ACS-Aus (18-64) and to describe activity engagement trends for healthy adults. Fifty-four adults aged 18 to 64 participated in this descriptive study. The ACS-Aus (18-64) demonstrated excellent test-retest reliability (r = .92, p ...) ... maintenance activities (t = -2.22, p = .03), and recreation and relaxation activities (t = -2.38, p = .02). The ACS-Aus (18-64) may be used to explore the activity engagement patterns of community-dwelling Australian adults aged 18 to 64. Further research will determine validity for clinical populations.

  19. Assessment of the reliability and consistency of the "malnutrition inflammation score" (MIS) in Mexican adults with chronic kidney disease for diagnosis of protein-energy wasting syndrome (PEW).

    González-Ortiz, Ailema Janeth; Arce-Santander, Celene Viridiana; Vega-Vega, Olynka; Correa-Rotter, Ricardo; Espinosa-Cuevas, María de Los Angeles


    The protein-energy wasting syndrome (PEW) is a condition of malnutrition, inflammation, anorexia and wasting of body reserves resulting from inflammatory and non-inflammatory conditions in patients with chronic kidney disease (CKD). One way of assessing PEW, extensively described in the literature, is using the Malnutrition Inflammation Score (MIS). To assess the reliability and consistency of MIS for diagnosis of PEW in Mexican adults with CKD on hemodialysis (HD). Study of diagnostic tests. A sample of 45 adults with CKD on HD was analyzed during the period June-July 2014. The instrument was applied on 2 occasions; the test-retest reliability was calculated using the Intraclass Correlation Coefficient (ICC); the internal consistency of the questionnaire was analyzed using Cronbach's α coefficient. A weighted Kappa test was used to estimate the validity of the instrument; the result was subsequently compared with the Bilbrey nutritional index (BNI). The reliability of the questionnaires, evaluated in the patient sample, was ICC = 0.829. The agreement between MIS observations was considered adequate, k = 0.585 (p ...). MIS has adequate reliability and validity for diagnosing PEW in the population with chronic kidney disease on HD. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  20. Telomere Q-PNA-FISH--reliable results from stochastic signals.

    Andrea Cukusic Kalajzic

    Structural and functional analysis of telomeres is very important for understanding basic biological functions such as genome stability, cell growth control, senescence and aging. Recently, serious concerns have been raised regarding the reliability of current telomere measurement methods such as Southern blot and quantitative polymerase chain reaction. Since telomere length is associated with age related pathologies, including cardiovascular disease and cancer, both at the individual and population level, accurate interpretation of measured results is a necessity. The telomere Q-PNA-FISH technique has been widely used in these studies as well as in commercial analysis for the general population. A hallmark of telomere Q-PNA-FISH is the wide variation among telomere signals which has a major impact on obtained results. In the present study we introduce a specific mathematical and statistical analysis of sister telomere signals during cell culture senescence which enabled us to identify high regularity in their variations. This phenomenon explains the reproducibility of results observed in numerous telomere studies when the Q-PNA-FISH technique is used. In addition, we discuss the molecular mechanisms which probably underlie the observed telomere behavior.

  1. Evaluating Test Reliability: From Coefficient Alpha to Internal Consistency Reliability

    温忠麟; 叶宝娟


    Following the classical definition of test reliability, we briefly introduce the relationship between reliability and coefficient α and the limitations of coefficient α. To recommend reliability estimation methods that can replace coefficient α, we discuss in depth homogeneity reliability and internal consistency reliability, which are closely related to coefficient α. Under very general conditions, we prove that both coefficient α and homogeneity reliability are no larger than internal consistency reliability, and that the latter is no larger than test reliability, showing that internal consistency reliability is relatively close to test reliability. We summarize a workflow for test reliability analysis, indicating under which circumstances coefficient α still has reference value, and under which circumstances it is no longer applicable and internal consistency reliability (often called composite reliability in the literature) should be used instead. Computer programs for calculating homogeneity reliability and internal consistency reliability are provided, which applied researchers can use directly. In the research of psychology and other social sciences, test reliability is often used to reflect measurement stability and consistency. Coefficient α is the most popular indicator of test reliability. In recent years, however, coefficient α has been challenged again and again. Is coefficient α still recommended for evaluating test reliability? If not, what should replace it? With the classical concept of reliability, which is defined as the ratio of true variance to observed variance on a test under consideration, we introduced the relationship between test reliability and coefficient α, and the limitations of coefficient α. The concepts closely related to coefficient α were considered. We clearly defined homogeneity reliability and internal consistency reliability. Homogeneity reflects the presence of a general factor, whereas internal consistency relates to the presence of common factors (including a general factor and local factors). For unidimensional tests, homogeneity and internal consistency are the same concept. Investigating the relationship between test reliability, coefficient α, homogeneity reliability, and internal consistency reliability, we showed that homogeneity reliability is not larger than internal consistency reliability, and that the latter is not larger than test...
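
    The internal consistency (composite) reliability recommended here is usually computed from factor loadings rather than item covariances. A minimal sketch for the unidimensional, standardized case, where ω = (Σλ)² / ((Σλ)² + Σθ) with θᵢ = 1 − λᵢ²; the loadings below are illustrative, not taken from the article:

    ```python
    def composite_reliability(loadings, error_vars=None):
        """Composite (congeneric) reliability from standardized factor loadings.

        If error variances are not supplied, assume a unidimensional model with
        standardized items, so theta_i = 1 - lambda_i**2.
        """
        if error_vars is None:
            error_vars = [1.0 - lam ** 2 for lam in loadings]
        s = sum(loadings)  # sum of loadings: the "true score" part
        return s * s / (s * s + sum(error_vars))
    ```

    Four items all loading 0.7 give ω = 7.84 / 9.88 ≈ 0.794. Unlike coefficient α, this estimate does not require tau-equivalent (equal) loadings, which is the article's reason for preferring it.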

  2. Credible Mechanism for More Reliable SearchEngine Results

    Mohammed Abdel Razek


    The number of websites on the Internet is growing rapidly, and a diversity of information is available on the Web; however, its content may be neither valuable nor trusted. This leads to the problem of the credibility of the information on these websites. This paper investigates aspects affecting website credibility and then uses them, along with the dominant meaning of the query, to improve information retrieval capabilities and to manage content effectively. It presents the design and development of a credible mechanism that queries a Web search engine and then ranks sites according to their reliability. Our experiments show that credibility terms on websites can affect the ranking of the Web search engine and greatly improve retrieval effectiveness.

  3. Reliability and the ACTFL Oral Proficiency Interview: Reporting Indices of Interrater Consistency and Agreement for 19 Languages

    Surface, Eric A.; Dierdorff, Erich C.


    The reliability of the ACTFL Oral Proficiency Interview (OPI) has not been reported since ACTFL revised its speaking proficiency guidelines in 1999. Reliability data for assessments should be reported periodically to provide users with enough information to evaluate the psychometric characteristics of the assessment. This study provided the most…

  4. Frontiers of reliability

    Basu, Asit P; Basu, Sujit K


    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, and mixture of Weibull...

  5. Validity and reliability of patient reported outcomes used in Psoriasis: results from two randomized clinical trials

    Koo John


    Abstract Background Two Phase III randomized controlled clinical trials were conducted to assess the efficacy, safety, and tolerability of weekly subcutaneous administration of efalizumab for the treatment of psoriasis. Patient reported measures of psoriasis-related functionality and health-related quality of life and of psoriasis-related symptom assessments were included as part of the trials. Objective To assess the reliability, validity, and responsiveness of the patient reported outcome measures that were used in the trials: the Dermatology Life Quality Index (DLQI), the Psoriasis Symptom Assessment (PSA) scale, and two itch measures, a Visual Analog Scale (VAS) and the National Psoriasis Foundation (NPF) itch measure. Methods Subjects aged 18 to 70 years with moderate to severe psoriasis for at least 6 months were recruited into the two clinical trials (n = 1095). Internal consistency reliability was evaluated for all patient reported outcomes at baseline and at 12 weeks. Construct validity was evaluated by relations among the different patient reported outcomes and between the patient reported outcomes and the clinical assessments (Psoriasis Area and Severity Index; Overall Lesion Severity Scale; Physician's Global Assessment of Change) assessed at baseline and at 12 weeks, as was the change over the course of the 12 week portion of the trial. Results Internal consistency reliability ranged from 0.86 to 0.95 for the patient reported outcome measures. The patient reported outcome measures were all shown to have significant construct validity with respect to each other and with respect to the clinical assessments. The four measures also demonstrated significant responsiveness to change in the underlying clinical status of the patients over the course of the trial, as measured by the independently assessed clinical outcomes. Conclusions The DLQI, the PSA, VAS, and the NPF are considered useful tools for the measurement of dermatology...

  6. A review of culturally adapted versions of the Oswestry Disability Index: the adaptation process, construct validity, test-retest reliability and internal consistency.

    Sheahan, Peter J; Nelson-Wong, Erika J; Fischer, Steven L


    The Oswestry Disability Index (ODI) is a self-report-based outcome measure used to quantify the extent of disability related to low back pain (LBP), a substantial contributor to workplace absenteeism. The ODI tool has been adapted for use by patients in several non-English speaking nations. It is unclear, however, if these adapted versions of the ODI are as credible as the original ODI developed for English-speaking nations. The objective of this study was to conduct a review of the literature to identify culturally adapted versions of the ODI and to report on the adaptation process, construct validity, test-retest reliability and internal consistency of these ODIs. Following a pragmatic review process, data were extracted from each study with regard to these four outcomes. While most studies applied adaptation processes in accordance with best-practice guidelines, there were some deviations. However, all studies reported high-quality psychometric properties: group mean construct validity was 0.734 ± 0.094 (indicated via a correlation coefficient), test-retest reliability was 0.937 ± 0.032 (indicated via an intraclass correlation coefficient) and internal consistency was 0.876 ± 0.047 (indicated via Cronbach's alpha). Researchers can be confident when using any of these culturally adapted ODIs, or when comparing and contrasting results between cultures where these versions were employed. Implications for Rehabilitation Low back pain is the second leading cause of disability in the world, behind only cancer. The Oswestry Disability Index (ODI) has been developed as a self-report outcome measure of low back pain for administration to patients. An understanding of the various cross-cultural adaptations of the ODI is important for more concerted multi-national research efforts. This review examines 16 cross-cultural adaptations of the ODI and should inform the work of health care and rehabilitation professionals.

  7. Enzymatic digestion of articular cartilage results in viscoelasticity changes that are consistent with polymer dynamics mechanisms

    June Ronald K


    Abstract Background Cartilage degeneration via osteoarthritis affects millions of elderly people worldwide, yet the specific contributions of matrix biopolymers toward cartilage viscoelastic properties remain unknown despite 30 years of research. Polymer dynamics theory may enable such an understanding, and predicts that cartilage stress-relaxation will proceed faster when the average polymer length is shortened. Methods This study tested whether the predictions of polymer dynamics were consistent with changes in cartilage mechanics caused by enzymatic digestion of specific cartilage extracellular matrix molecules. Bovine calf cartilage explants were cultured overnight before being immersed in type IV collagenase, bacterial hyaluronidase, or control solutions. Stress-relaxation and cyclical loading tests were performed after 0, 1, and 2 days of incubation. Results Stress-relaxation proceeded faster following enzymatic digestion by collagenase and bacterial hyaluronidase after 1 day of incubation (both p ≤ 0.01). The storage and loss moduli at frequencies of 1 Hz and above were smaller after 1 day of digestion by collagenase and bacterial hyaluronidase (all p ≤ 0.02). Conclusion These results demonstrate that enzymatic digestion alters cartilage viscoelastic properties in a manner consistent with polymer dynamics mechanisms. Future studies may expand the use of polymer dynamics as a microstructural model for understanding the contributions of specific matrix molecules toward tissue-level viscoelastic properties.

  8. Internal consistency, test-retest reliability, and predictive validity for a Likert-based version of the Sources of occupational stress-14 (SOOS-14) scale.

    Kimbrel, Nathan A; Flynn, Elisa J; Carpenter, Grace Stephanie J; Cammarata, Claire M; Leto, Frank; Ostiguy, William J; Kamholz, Barbara W; Zimering, Rose T; Gulliver, Suzy B


    This study examined the psychometric properties of a Likert-based version of the Sources of Occupational Stress-14 (SOOS-14) scale. Internal consistency for the SOOS-14 ranged from 0.78-0.84, whereas three-month test-retest reliability was 0.51. In addition, SOOS-14 scores were prospectively associated with symptoms of PTSD and depression at a three-month follow-up assessment. Published by Elsevier Ireland Ltd.
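
    The three-month test-retest figure (0.51) is a correlation between scores from the two administrations. A minimal sketch of that computation, with variable names of my own; the abstract does not state which correlation coefficient was used, so a Pearson correlation is an assumption:

    ```python
    import numpy as np

    def test_retest_r(time1, time2):
        """Pearson correlation between scores from two administrations."""
        x = np.asarray(time1, dtype=float)
        y = np.asarray(time2, dtype=float)
        xc, yc = x - x.mean(), y - y.mean()  # center both score vectors
        return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
    ```

    A uniform linear shift between administrations still yields r = 1.0; it is rank-order instability across the retest interval, not mean change, that pulls the coefficient down toward values like 0.51.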

  9. Loss of fibrinogen in zebrafish results in symptoms consistent with human hypofibrinogenemia.

    Andy H Vo

    Cessation of bleeding after trauma is a necessary evolutionary vertebrate adaptation for survival. One of the major pathways regulating response to hemorrhage is the coagulation cascade, which ends with the cleavage of fibrinogen to form a stable clot. Patients with low or absent fibrinogen are at risk for bleeding. While much detailed information is known about fibrinogen regulation and function through studies of humans and mammalian models, bleeding risk in patients cannot always be accurately predicted purely based on fibrinogen levels, suggesting an influence of modifying factors and a need for additional genetic models. The zebrafish has orthologs to the three components of fibrinogen (fga, fgb, and fgg), but it has not yet been shown that zebrafish fibrinogen functions to prevent bleeding in vivo. Here we show that zebrafish fibrinogen is incorporated into an induced thrombus, and deficiency results in hemorrhage. An Fgb-eGFP fusion protein is incorporated into a developing thrombus induced by laser injury, but causes bleeding in adult transgenic fish. Antisense morpholino knockdown results in intracranial and intramuscular hemorrhage at 3 days post fertilization. The observed phenotypes are consistent with symptoms exhibited by patients with hypo- and afibrinogenemia. These data demonstrate that zebrafish possess highly conserved orthologs of the fibrinogen chains, which function similarly to mammals through the formation of a fibrin clot.

  10. Consistency of non-flat $\\Lambda$CDM model with the new result from BOSS

    Kumar, Suresh


    Using 137,562 quasars in the redshift range $2.1\\leq z\\leq3.5$ from Data Release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS)-III, the BOSS-SDSS collaboration estimated the expansion rate $H(z=2.34)=222\\pm7$ km/s/Mpc of the Universe, and reported that this value is in tension with the predictions of the flat $\\Lambda$CDM model at around the 2.5$\\sigma$ level. In this letter, we briefly describe some attempts made in the literature to relieve the tension, and show that the tension can naturally be alleviated in a non-flat $\\Lambda$CDM model with positive curvature. However, this idea conflicts with the inflation paradigm, which predicts an almost spatially flat Universe. Nevertheless, the theoretical consistency of the non-flat $\\Lambda$CDM model with the new result from BOSS deserves the attention of the community.
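For reference, the expansion rate being tested here follows from the standard Friedmann equation for a non-flat $\Lambda$CDM model (textbook form, not specific to this letter):

```latex
% Hubble rate in non-flat \Lambda CDM. Positive spatial curvature means
% \Omega_k < 0, which lowers H(z) at high redshift and can ease the
% tension with the measured H(z=2.34).
H(z) = H_0 \sqrt{\Omega_m (1+z)^3 + \Omega_k (1+z)^2 + \Omega_\Lambda},
\qquad \Omega_k \equiv 1 - \Omega_m - \Omega_\Lambda .
```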

  11. Iterative reconstruction for quantitative computed tomography analysis of emphysema: consistent results using different tube currents

    Yamashiro T


    Tsuneo Yamashiro,1 Tetsuhiro Miyara,1 Osamu Honda,2 Noriyuki Tomiyama,2 Yoshiharu Ohno,3 Satoshi Noma,4 Sadayuki Murayama1 On behalf of the ACTIve Study Group 1Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara, Okinawa, Japan; 2Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan; 3Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan; 4Department of Radiology, Tenri Hospital, Tenri, Nara, Japan Purpose: To assess the advantages of iterative reconstruction for quantitative computed tomography (CT) analysis of pulmonary emphysema. Materials and methods: Twenty-two patients with pulmonary emphysema underwent chest CT imaging using identical scanners with three different tube currents: 240, 120, and 60 mA. Scan data were converted to CT images using Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) and a conventional filtered back-projection mode. Thus, six scans with and without AIDR3D were generated per patient. All other scanning and reconstruction settings were fixed. The percent low attenuation area (LAA%; < -950 Hounsfield units) and the lung density 15th percentile were automatically measured using a commercial workstation. Comparisons of LAA% and 15th percentile results between scans with and without AIDR3D were made by Wilcoxon signed-rank tests. Associations between body weight and measurement errors among these scans were evaluated by Spearman rank correlation analysis. Results: Overall, scan series without AIDR3D had higher LAA% and lower 15th percentile values than those with AIDR3D at each tube current (P<0.0001). For scan series without AIDR3D, lower tube currents resulted in higher LAA% values and lower 15th percentiles. The extent of emphysema was significantly different between each pair among scans when not using AIDR3D (LAA%, P<0.0001; 15th percentile, P<0.01), but was not

  12. Consistent and inconsistent truncations. General results and the issue of the correct uplifting of solutions

    Pons, J M; Pons, Josep M.; Talavera, Pere


    We clarify the existence of two different types of truncations of the field content in a theory, the consistency of each type being achieved by different means. A proof is given of the conditions to have a consistent truncation in the case of dimensional reductions induced by independent Killing vectors. We explain in what sense the tracelessness condition found by Scherk and Schwarz is not only a necessary condition but also a sufficient one for a consistent truncation. The reduction of the gauge group is fully performed, showing the existence of a sector of rigid symmetries. We show that truncations originated by the introduction of constraints will in general be inconsistent, but this fact does not prevent the possibility of correct upliftings of solutions in some cases. The presence of constraints has dynamical consequences that turn out to play a fundamental role in the correctness of the uplifting procedure.

  13. Gas cooling in semi-analytic models and smoothed particle hydrodynamics simulations: are results consistent?

    Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.


    We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be compared with observational data, but is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and which present complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted on the cluster halo; (iii) SPH satellites can lose up to 90 per cent of their stellar mass at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs on the most massive satellite galaxies and this lasts for up to 1 Gyr after accretion. This physical process is

  14. High-Dose-Rate Prostate Brachytherapy Consistently Results in High Quality Dosimetry

    White, Evan C.; Kamrava, Mitchell R.; Demarco, John; Park, Sang-June; Wang, Pin-Chieh; Kayode, Oluwatosin; Steinberg, Michael L. [California Endocurietherapy at UCLA, Department of Radiation Oncology, David Geffen School of Medicine of University of California at Los Angeles, Los Angeles, California (United States); Demanes, D. Jeffrey, E-mail: [California Endocurietherapy at UCLA, Department of Radiation Oncology, David Geffen School of Medicine of University of California at Los Angeles, Los Angeles, California (United States)


    Purpose: We performed a dosimetry analysis to determine how well the goals for clinical target volume coverage, dose homogeneity, and normal tissue dose constraints were achieved with high-dose-rate (HDR) prostate brachytherapy. Methods and Materials: Cumulative dose-volume histograms for 208 consecutively treated HDR prostate brachytherapy implants were analyzed. Planning was based on ultrasound-guided catheter insertion and postoperative CT imaging; the contoured clinical target volume (CTV) was the prostate, a small margin, and the proximal seminal vesicles. Dosimetric parameters analyzed for the CTV were D90, V90, V100, V150, and V200. Doses to the urethra, bladder, bladder balloon, and rectum were evaluated by the dose to 0.1 cm³, 1 cm³, and 2 cm³ of each organ, expressed as a percentage of the prescribed dose. Analysis was stratified according to prostate size. Results: The mean prostate ultrasound volume was 38.7 ± 13.4 cm³ (range: 11.7-108.6 cm³). The mean CTV was 75.1 ± 20.6 cm³ (range: 33.4-156.5 cm³). The mean D90 was 109.2% ± 2.6% (range: 102.3%-118.4%). Ninety-three percent of observed D90 values were between 105% and 115%. The mean V90, V100, V150, and V200 were 99.9% ± 0.05%, 99.5% ± 0.8%, 25.4% ± 4.2%, and 7.8% ± 1.4%. The mean doses to 0.1 cm³, 1 cm³, and 2 cm³ for organs at risk were: urethra: 107.3% ± 3.0%, 101.1% ± 14.6%, and 47.9% ± 34.8%; bladder wall: 79.5% ± 5.1%, 69.8% ± 4.9%, and 64.3% ± 5.0%; bladder balloon: 70.3% ± 6.8%, 59.1% ± 6.6%, and 52.3% ± 6.2%; rectum: 76.3% ± 2.5%, 70.2% ± 3.3%, and 66.3% ± 3.8%. There was no significant difference in D90 or V100 when stratified by prostate size. Conclusions: HDR brachytherapy allows the physician to consistently achieve complete prostate target coverage and maintain normal tissue dose constraints for organs at risk over a wide range of target volumes.
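The D90 and V100 figures reported above are read off the cumulative dose-volume histogram. A minimal sketch under the usual definitions (the voxel doses and prescription below are illustrative, not the study's data):

```python
def v_x(doses, x_percent, prescribed):
    """V_x: percent of target volume receiving at least x% of the prescribed dose."""
    threshold = prescribed * x_percent / 100
    return 100 * sum(d >= threshold for d in doses) / len(doses)

def d_x(doses, x_percent, prescribed):
    """D_x: minimum dose (as % of prescription) received by the hottest x% of the volume."""
    ranked = sorted(doses, reverse=True)
    idx = int(len(ranked) * x_percent / 100) - 1  # last voxel inside the hottest x%
    return 100 * ranked[idx] / prescribed

# Illustrative voxel doses in Gy for a 10-voxel target, prescription 9.5 Gy
doses = [10.2, 10.0, 9.9, 9.8, 9.7, 9.6, 9.6, 9.5, 9.4, 9.0]
print(v_x(doses, 100, 9.5))  # 80.0 -> 80% of the volume receives the prescription
print(d_x(doses, 90, 9.5))
```

Equal-volume voxels are assumed; a real DVH analysis weights voxels by volume, but the two metrics reduce to exactly these counting and ranking operations.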

  15. Impact of Alzheimer's Disease on Caregiver Questionnaire: internal consistency, convergent validity, and test-retest reliability of a new measure for assessing caregiver burden.

    Cole, Jason C; Ito, Diane; Chen, Yaozhu J; Cheng, Rebecca; Bolognese, Jennifer; Li-McLeod, Josephine


    There is a lack of validated instruments to measure the level of burden of Alzheimer's disease (AD) on caregivers. The Impact of Alzheimer's Disease on Caregiver Questionnaire (IADCQ) is a 12-item instrument with a seven-day recall period that measures AD caregivers' burden across emotional, physical, social, financial, sleep, and time aspects. The primary objectives of this study were to evaluate the psychometric properties of the IADCQ administered on the Web and to determine the most appropriate scoring algorithm. A national sample of 200 unpaid AD caregivers participated in this study by completing the Web-based version of the IADCQ and the Short Form-12 Health Survey Version 2 (SF-12v2™). The SF-12v2 was used to measure convergent validity of IADCQ scores and to provide an understanding of the overall health-related quality of life of the sampled AD caregivers. The IADCQ survey was also completed four weeks later by a randomly selected subgroup of 50 participants to assess test-retest reliability. Confirmatory factor analysis (CFA) was implemented to test the dimensionality of the IADCQ items. Classical item-level and scale-level psychometric analyses were conducted to estimate psychometric characteristics of the instrument. Test-retest reliability analysis was performed to evaluate the instrument's stability and consistency over time. Virtually none (2%) of the respondents had either floor or ceiling effects, indicating the IADCQ covers an ideal range of burden. A single-factor model obtained appropriate goodness of fit and provided evidence that a simple sum score of the 12 IADCQ items can be used to measure AD caregivers' burden. Scale-level reliability was supported with a coefficient alpha of 0.93 and an intra-class correlation coefficient (for test-retest reliability) of 0.68 (95% CI: 0.50-0.80). Low-to-moderate negative correlations were observed between the IADCQ and scales of the SF-12v2. The study findings suggest the IADCQ has appropriate psychometric characteristics as a

  16. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin


    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  17. [Study of the reliability, validity and internal consistency of the LSP scale (Life Skills Profile). Profile of activities of daily living].

    Fernández de Larrinoa Palacios, P; Bulbena Vilarrasa, A; Domínguez Panchón, A I


    The LSP scale (Life Skills Profile) has recently been translated and adapted into Spanish. It has 39 items and attempts to measure chronic mental patients' functioning in situations and tasks of everyday life. It is brief, jargon-free, and capable of completion by family members and community housing managers as well as professional staff. The reliability, concurrent validity and internal consistency of this Spanish version are reported. The good performance of the LSP on all these measurements supports its use in research and clinical settings.

  18. Internal consistency reliability and construct validity of an Arabic translation of the shortened form of the Fennema-Sherman Mathematics Attitudes Scales.

    Alkhateeb, Haitham M


    A sample of 480 (246 boys and 234 girls) students in Grade 11 in the United Arab Emirates completed an Arabic version of the shortened form of the Fennema-Sherman Mathematics Attitudes Scales. A factor analysis of the intercorrelations of responses to 51 items indicated the same general factors as in the original study. Internal consistency estimates of the reliability of scores on the total scale and on each scale for the short form were acceptable, with coefficients alpha ranging from .72 to .89.

  19. Gas cooling in semi-analytic models and SPH simulations: are results consistent?

    Saro, A; Borgani, S; Dolag, K


    We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical SPH simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: a) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; b) while all stars associated with the BCG were formed in its progenitors i...

  20. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Sharmila Vaz

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports, not just student reports), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
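Measurement error of the kind reported here is commonly summarised as the standard error of measurement, SEM = SD × sqrt(1 − r), where r is the test-retest reliability coefficient. A small sketch with illustrative numbers (not the SSRS figures):

```python
from math import sqrt

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): the typical error attached to one observed score."""
    return sd * sqrt(1 - reliability)

# Illustrative values: scale SD of 12 points, test-retest r = 0.75
sem = standard_error_of_measurement(12.0, 0.75)
print(round(sem, 1))  # 6.0
```

The formula makes the trade-off explicit: lower retest reliability inflates the error band around any single administration of the scale, which is why the authors caution against relying on the student report alone.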

  1. Cross-cultural adaptation, reliability, internal consistency and validation of the Hand Function Sort (HFS©) for French speaking patients with upper limb complaints.

    Konzelmann, M; Burrus, C; Hilfiker, R; Rivier, G; Deriaz, O; Luthi, F


    Functional evaluation of the upper limb is not only based on clinical findings but requires self-administered questionnaires to address the patients' perspective. The Hand Function Sort (HFS©) had only been validated in English. The aim of this study was the French cross-cultural adaptation and validation of the HFS© (HFS-F). 150 patients with various upper limb impairments were recruited in a rehabilitation center. Translation and cross-cultural adaptation were made according to international guidelines. Construct validity was estimated through correlations with the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire, the SF-36 mental component summary (MCS), the SF-36 physical component summary (PCS) and pain intensity. Internal consistency was assessed by Cronbach's α and test-retest reliability by intraclass correlation. Cronbach's α was 0.98; test-retest reliability was excellent at 0.921 (95% CI 0.871-0.971), the same as the original HFS©. Correlations with the DASH were -0.779 (95% CI -0.847 to -0.685); with the SF-36 PCS, 0.452 (95% CI 0.276-0.599); with pain, -0.247 (95% CI -0.429 to -0.041); with the SF-36 MCS, 0.242 (95% CI 0.042-0.422). There were no floor or ceiling effects. The HFS-F has the same good psychometric properties as the original HFS© (internal consistency, test-retest reliability, convergent validity with the DASH, divergent validity with the SF-36 MCS, and no floor or ceiling effects). The convergent validity with the SF-36 PCS was poor; we found no correlation with pain. The HFS-F can be used with confidence in a population of working patients. Other studies are necessary to study its psychometric properties in other populations.

  2. Test-Retest Reliability and Validity Results of the Youth Physical Activity Supports Questionnaire

    Sandy Slater


    As youth obesity rates remain at unacceptably high levels, particularly across underserved populations, the promotion of physical activity has become a focus of youth obesity prevention across the United States. Thus, the purpose of this study was to develop and test the reliability and validity of a self-reported questionnaire on home, school, and neighborhood physical activity environments for youth located in low-income urban minority neighborhoods and rural areas. Third-, fourth-, and fifth-grade students and their parents were recruited from six purposively selected elementary schools (three urban and three rural). A total of 205 parent/child dyads completed two waves of a 160-item take-home survey. Test-retest reliability was calculated for the student survey, and validity was determined through a combination of parental and school administrator responses and environmental audits. The majority of measures had good reliability (90%) and validity (74%; defined as ≥70% agreement). These measures collected information on the presence of electronic and play equipment in youth participants' bedrooms and homes, and outdoor play equipment at schools, as well as who youth are active with and what people close to them think about being active. Measures that consistently had poor reliability and validity (≤70% agreement) were the weekly activities youth participated in and household rules. Principal components analysis was also used to identify 11 sub-scales. This survey can be used to help identify opportunities and develop strategies to encourage underserved youth to be more physically active.

  3. Reconstruction of scalar field theories realizing inflation consistent with the Planck and BICEP2 results

    Bamba, Kazuharu [Leading Graduate School Promotion Center, Ochanomizu University, 2-1-1 Ohtsuka, Bunkyo-ku, Tokyo 112-8610 (Japan); Department of Physics, Graduate School of Humanities and Sciences, Ochanomizu University, Tokyo 112-8610 (Japan); Nojiri, Shin' ichi [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya 464-8602 (Japan); Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Odintsov, Sergei D. [Consejo Superior de Investigaciones Científicas, ICE/CSIC-IEEC, Campus UAB, Facultat de Ciències, Torre C5-Parell-2a pl, E-08193 Bellaterra (Barcelona) (Spain); Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona (Spain); Tomsk State Pedagogical University, 634061 Tomsk (Russian Federation); National Research Tomsk State University, 634050 Tomsk (Russian Federation); King Abdulaziz University, Jeddah (Saudi Arabia)


    We reconstruct scalar field theories to realize inflation compatible with the BICEP2 result as well as the Planck results. In particular, we examine the chaotic inflation model, the natural (or axion) inflation model, and an inflationary model with a hyperbolic inflaton potential. We present an explicit approach to finding a scalar field model of inflation in which any observations can be explained in principle.


    Giovanni Francesco Spatola


    The use of image analysis methods has allowed us to obtain more reliable and reproducible immunohistochemistry (IHC) results. Wider use of such approaches and the simplification of software allowing colorimetric study have meant that these methods are available to everyone, and have made it possible to standardize the technique with a reliable scoring system. Moreover, the recent introduction of multispectral image acquisition systems has further refined these techniques, minimizing artefacts and easing the evaluation of the data by the observer.

  5. An assessment of consistence of exhaust gas emission test results obtained under controlled NEDC conditions

    Balawender, K.; Jaworski, A.; Kuszewski, H.; Lejda, K.; Ustrzycki, A.


    Measurement of the pollutants contained in automobile combustion engine exhaust gases is of primary importance in view of their harmful impact on the natural environment. This paper presents results of tests aimed at determining exhaust gas pollutant emissions from a passenger car engine obtained under repeatable conditions on a chassis dynamometer. The test set-up was installed in a controlled climate chamber in which the temperature could be maintained within the range from -20°C to +30°C. The analysis covered emissions of such components as CO, CO2, NOx, CH4, THC, and NMHC. The purpose of the study was to assess the repeatability of results obtained in a number of tests performed as per the NEDC test plan. The study is an introductory stage of a wider research project concerning the effect of climate conditions and fuel type on the emission of pollutants contained in exhaust gases generated by automotive vehicles.

  6. Reconstruction of scalar field theories realizing inflation consistent with the Planck and BICEP2 results

    Kazuharu Bamba


    We reconstruct scalar field theories to realize inflation compatible with the BICEP2 result as well as the Planck results. In particular, we examine the chaotic inflation model, the natural (or axion) inflation model, and an inflationary model with a hyperbolic inflaton potential. We present an explicit approach to finding a scalar field model of inflation in which any observations can be explained in principle.

  7. Three clinical experiences with SNP array results consistent with parental incest: a narrative with lessons learned.

    Helm, Benjamin M; Langley, Katherine; Spangler, Brooke; Vergano, Samantha


    Single nucleotide polymorphism microarrays have the ability to reveal parental consanguinity which may or may not be known to healthcare providers. Consanguinity can have significant implications for the health of patients and for individual and family psychosocial well-being. These results often present ethical and legal dilemmas that can have important ramifications. Unexpected consanguinity can be confounding to healthcare professionals who may be unprepared to handle these results or to communicate them to families or other appropriate representatives. There are few published accounts of experiences with consanguinity and SNP arrays. In this paper we discuss three cases where molecular evidence of parental incest was identified by SNP microarray. We hope to further highlight consanguinity as a potential incidental finding, how the cases were handled by the clinical team, and what resources were found to be most helpful. This paper aims to contribute further to professional discourse on incidental findings with genomic technology and how they were addressed clinically. These experiences may provide some guidance on how others can prepare for these findings and help improve practice. As genetic and genomic testing is utilized more by non-genetics providers, we also hope to inform about the importance of engaging with geneticists and genetic counselors when addressing these findings.

  8. Ceramic material life prediction: A program to translate ANSYS results to CARES/LIFE reliability analysis

    Vonhermann, Pieter; Pintz, Adam


    This manual describes the use of the ANSCARES program to prepare a neutral file of FEM stress results taken from ANSYS Release 5.0, in the format needed by CARES/LIFE ceramics reliability program. It is intended for use by experienced users of ANSYS and CARES. Knowledge of compiling and linking FORTRAN programs is also required. Maximum use is made of existing routines (from other CARES interface programs and ANSYS routines) to extract the finite element results and prepare the neutral file for input to the reliability analysis. FORTRAN and machine language routines as described are used to read the ANSYS results file. Sub-element stresses are computed and written to a neutral file using FORTRAN subroutines which are nearly identical to those used in the NASCARES (MSC/NASTRAN to CARES) interface.

  9. Gearbox Reliability Collaborative Phase 1 and 2: Testing and Modeling Results; Preprint

    Keller, J.; Guo, Y.; LaCava, W.; Link, H.; McNiff, B.


    The Gearbox Reliability Collaborative (GRC) investigates root causes of premature wind turbine gearbox failures and validates design assumptions that affect gearbox reliability using a combined testing and modeling approach. Knowledge gained from the testing and modeling of the GRC gearboxes builds an understanding of how the selected loads and events translate into internal responses of three-point mounted gearboxes. This paper presents some testing and modeling results of the GRC research during Phases 1 and 2. Non-torque loads from the rotor, including shaft bending and thrust, traditionally assumed to be uncoupled from the gearbox, affect gear and bearing loads and the resulting gearbox responses. Bearing clearance increases bearing loads and causes cyclic loading, which could contribute to reduced bearing life. Including the flexibilities of key drivetrain subcomponents is important in order to reproduce the measured gearbox response during the tests using modeling approaches.

  10. Inter-Rater Reliability of Preprocessing EEG Data: Impact of Subjective Artifact Removal on Associative Memory Task ERP Results

    Steven D. Shirk


    The processing of EEG data routinely involves subjective removal of artifacts during a preprocessing stage. Preprocessing inter-rater reliability (IRR), and how differences in preprocessing may affect outcomes of primary event-related potential (ERP) analyses, has not been previously assessed. Three raters independently preprocessed EEG data of 16 cognitively healthy adult participants (ages 18–39 years) who performed a memory task. Using intraclass correlations (ICCs), IRR was assessed for Early-frontal, Late-frontal, and Parietal Old/new memory effects contrasts across eight regions of interest (ROIs). IRR was good to excellent for all ROIs; 22 of 26 ICCs were above 0.80. Raters were highly consistent in preprocessing across ROIs, although the frontal pole ROI (ICC range 0.60–0.90) showed less consistency. Old/new parietal effects had the highest ICCs with the lowest variability. Rater preprocessing differences did not alter primary ERP results. IRR for EEG preprocessing was good to excellent, and subjective rater-removal of EEG artifacts did not alter primary memory-task ERP results. Findings provide preliminary support for the robustness of cognitive/memory task-related ERP results against significant inter-rater preprocessing variability and suggest the reliability of EEG to assess cognitive-neurophysiological processes when multiple preprocessors are involved.

  11. Ex vivo normothermic machine perfusion is safe, simple, and reliable: results from a large animal model.

    Nassar, Ahmed; Liu, Qiang; Farias, Kevin; D'Amico, Giuseppe; Tom, Cynthia; Grady, Patrick; Bennett, Ana; Diago Uso, Teresa; Eghtesad, Bijan; Kelly, Dympna; Fung, John; Abu-Elmagd, Kareem; Miller, Charles; Quintini, Cristiano


    Normothermic machine perfusion (NMP) is an emerging preservation modality that holds the potential to prevent the injury associated with low temperature and to promote the organ repair that follows ischemic cell damage. While several animal studies have shown its superiority over cold storage (CS), few studies in the literature have focused on the safety, feasibility, and reliability of this technology, which represent key factors in its implementation into clinical practice. The aim of the present study is to report safety and performance data on NMP of DCD porcine livers. After 60 minutes of warm ischemia time, 20 pig livers were preserved using either NMP (n = 15; physiologic perfusion temperature) or CS (n = 5) for a preservation time of 10 hours. Livers were then tested on a transplant simulation model for 24 hours. Machine safety was assessed by measuring system failure events, the ability to monitor perfusion parameters, sterility, and vessel integrity. The ability of the machine to preserve injured organs was assessed by liver function tests, hemodynamic parameters, and histology. No system failures were recorded. Target hemodynamic parameters were easily achieved and vascular complications were not encountered. Liver function parameters as well as histology showed significant differences between the 2 groups, with NMP livers showing preserved liver function and histological architecture, while CS livers presented postreperfusion parameters consistent with unrecoverable cell injury. Our study shows that NMP is safe and reliable, and provides superior graft preservation compared to CS in our DCD porcine model. © The Author(s) 2014.

  12. Modified Core Wash Cytology: A reliable same day biopsy result for breast clinics.

    Bulte, J P; Wauters, C A P; Duijm, L E M; de Wilt, J H W; Strobbe, L J A


    Fine Needle Aspiration Biopsy (FNAB), Core Needle Biopsy (CNB), and hybrid techniques including Core Wash Cytology (CWC) are available for same-day diagnosis of breast lesions. In CWC, a washing of the biopsy core is processed for a provisional cytological diagnosis, after which the core is processed like a regular CNB. This study focuses on the reliability of CWC in daily practice. All consecutive CWC procedures performed in a referral breast centre between May 2009 and May 2012 were reviewed, correlating CWC results with the CNB result, the definitive diagnosis after surgical resection, and/or follow-up. Symptomatic as well as screen-detected lesions undergoing CNB were included. 1253 CWC procedures were performed. Definitive histology showed 849 (68%) malignant and 404 (32%) benign lesions. 80% of CWC procedures yielded a conclusive diagnosis; this percentage was higher for malignant lesions and lower for benign lesions: 89% and 62%, respectively. Sensitivity and specificity of a conclusive CWC result were 98.3% and 90.4%, respectively. The eventual incidence of malignancy in the cytological 'atypical' group (5%) was similar to that in the cytological 'benign' group (6%). CWC can be used to make a reliable provisional diagnosis of breast lesions within the hour. The high probability of conclusive results in malignant lesions makes CWC well suited for high-risk populations. Copyright © 2016 Elsevier Ltd, BASO, the Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
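
The sensitivity and specificity figures quoted above follow directly from a 2x2 cross-tabulation of conclusive CWC calls against definitive histology. A minimal sketch of that computation; the counts below are illustrative placeholders, not the study's actual table, which the abstract does not reproduce:

```python
# Sensitivity and specificity of a conclusive test result, computed from a
# 2x2 confusion matrix. The counts passed in below are hypothetical examples.
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions."""
    sensitivity = tp / (tp + fn)   # malignant lesions correctly called malignant
    specificity = tn / (tn + fp)   # benign lesions correctly called benign
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=590, fn=10, tn=235, fp=25)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
```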

  13. Milk urea analytical result reliability and its methodical possibilities in the Czech Republic

    Jan Říha


    Control of milk urea concentration (MUC) can be used in diagnosing the energy–nitrogen metabolism of cows. Several analytical methods exist for MUC estimation, and the reliability of their results is debated. The aim of this work was to obtain information for improving MUC result reliability. MUC and MUN (milk urea nitrogen) were investigated in 5 milk sample sets and in 7 calibration/comparison experiments, in which the positions of the reference and indirect methods were varied. The following analytical methods for MUC or MUN (in mg.100 ml−1) were used: a photometric method (PH; reference), based on the para-dimethylaminobenzaldehyde reaction; the Ureakvant method (UK; reference), based on difference measurement of the electrical conductivity change during ureolysis; the Chemspec method (CH; reference), based on photometric measurement of ammonia concentration after ureolysis; and spectroscopy in the mid-infrared range (FT–MIR), the indirect routine method. In all methodical combinations the correlation coefficients (r) varied from 0.8803 to 0.9943. The reference methods showed comparable values of repeatability (from 0.65 to 1.83 mg.100 ml−1) as compared to the FT–MIR MUC or MUN methods (from 1.39 to 5.6 and from 0.76 to 1.92 mg.100 ml−1) in the performed experiments.

  14. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata


    Gaining an understanding of degradation mechanisms and their characterization is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, a Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data becomes available. While the demonstration of the method in this work is for thin-film flexible PV modules, the framework and methodology can be adapted to other PV products.
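
The abstract does not disclose the authors' acceleration-factor model forms or parameters, but Peck's humidity model (for damp heat) and the Coffin-Manson relation (for thermal cycling) are the forms commonly fitted to these two tests. The sketch below uses those standard forms with illustrative constants; the exponents and activation energy are assumptions, not SMART's fitted values:

```python
import math

# Standard acceleration-factor forms for the two tests named in the abstract.
# The constants n, ea and m below are illustrative, not the authors' values.
BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def af_damp_heat(rh_test, t_test_c, rh_field, t_field_c, n=2.7, ea=0.8):
    """Peck's model: AF = (RH_t/RH_f)^n * exp(Ea/k * (1/T_f - 1/T_t))."""
    t_test, t_field = t_test_c + 273.15, t_field_c + 273.15
    return (rh_test / rh_field) ** n * math.exp(
        ea / BOLTZMANN_EV * (1 / t_field - 1 / t_test))

def af_thermal_cycle(dt_test, dt_field, m=2.0):
    """Coffin-Manson: AF = (dT_test / dT_field)^m."""
    return (dt_test / dt_field) ** m

# Rough field-life estimate: hours of 85C/85%RH testing times its AF,
# against an assumed 35C/50%RH effective field condition.
af = af_damp_heat(rh_test=85, t_test_c=85, rh_field=50, t_field_c=35)
print(f"damp-heat AF ~ {af:.0f}; 1000 h of testing ~ {1000 * af / 8760:.0f} years")
```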

  15. Test results of reliable and very high capillary multi-evaporators / condenser loop

    Van Oost, S.; Dubois, M.; Bekaert, G. [Societe Anonyme Belge de Construction Aeronautique - SABCA (Belgium)


    The paper presents the results of various SABCA activities in the field of two-phase heat transport systems. These results are based on a critical review and analysis of existing two-phase loops and of future loop needs in space applications. The research and development of a high-capillary wick (capillary pressure up to 38,000 Pa) are described. These activities have led to the development of a reliable high-performance capillary loop concept (HPCPL), which is discussed in detail. Several loop configurations (mono- and multi-evaporator) have been ground tested, and the presented results of the various tests clearly show the viability of this concept for future applications. Proposed flight demonstrations as well as potential applications conclude this paper.
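
The quoted 38,000 Pa capillary pressure implies an effective wick pore radius via the Young-Laplace relation, dP = 2*sigma/r for a fully wetting fluid. The working fluid is not stated in the abstract; the surface tension below (~0.021 N/m, roughly that of ammonia near room temperature) is an assumed illustrative value:

```python
# Young-Laplace estimate of the effective pore radius implied by a given
# capillary pressure: dP = 2 * sigma / r (perfect wetting assumed).
def pore_radius(delta_p, sigma):
    """Effective pore radius in metres for capillary pressure delta_p (Pa)
    and surface tension sigma (N/m)."""
    return 2.0 * sigma / delta_p

r = pore_radius(delta_p=38_000, sigma=0.021)  # sigma is an assumed value
print(f"effective pore radius ~ {r * 1e6:.1f} um")
```

A sub-micron to micron-scale pore radius is consistent with "high capillary" wicks in the loop heat pipe literature.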

  16. Experimental results of fingerprint comparison validity and reliability: A review and critical analysis.

    Haber, Ralph Norman; Haber, Lyn


    Our purpose in this article is to determine whether the results of the published experiments on the accuracy and reliability of fingerprint comparison can be generalized to fingerprint laboratory casework, and/or to document the error rate of the Analysis-Comparison-Evaluation (ACE) method. We review the existing 13 published experiments on fingerprint comparison accuracy and reliability. These studies comprise the entire corpus of experimental research published on the accuracy of fingerprint comparisons since criminal courts first admitted forensic fingerprint evidence about 120 years ago. We start with the two studies by Ulery, Hicklin, Buscaglia and Roberts (2011, 2012), because they are recent, large, designed specifically to provide estimates of the accuracy and reliability of fingerprint comparisons, and intended to respond to the criticisms cited in the National Academy of Sciences Report (2009). Following the two Ulery et al. studies, we review and evaluate the other eleven experiments, considering problems that are unique to each. We then evaluate the 13 experiments for the problems common to all or most of them, especially with respect to the generalizability of their results to laboratory casework. Overall, we conclude that the experimental designs employed deviated from casework procedures in critical ways that preclude generalization of the results to casework. The experiments asked examiner-subjects to carry out their comparisons using different responses from those employed in casework; the experiments presented the comparisons in formats that differed from casework; the experiments enlisted highly trained examiners as experimental subjects rather than subjects drawn randomly from among all fingerprint examiners; the experiments did not use fingerprint test items known to be comparable in type and especially in difficulty to those encountered in casework; and the experiments did not require examiners to use the ACE method, nor was that method defined.

  17. Chronic obstructive pulmonary disease as a cardiovascular risk factor: results of a case-control study (CONSISTE study)


    Michael Falola, Department of Epidemiology, University of Alabama at Birmingham, Birmingham, AL, USA. I read with interest the article "Chronic obstructive pulmonary disease as a cardiovascular risk factor. Results of a case-control study (CONSISTE study)" by de Lucas-Ramos et al.1 In my opinion, the study did not use a case-control design, despite its title. View original paper by de Lucas-Ramos and colleagues.

  18. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming


    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has previously been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on a negative binomial ICC estimate which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICCs closely approximated the true ICC.
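
The negative binomial ICC in this study is GLMM-based, but it generalizes the classical one-way random-effects ICC that the transform-then-LMM approach estimates. As background, a minimal sketch of that classical estimator from ANOVA mean squares; the duplicate-sample data below are hypothetical, not the study's counts:

```python
from statistics import mean

# Classical one-way random-effects ICC(1) via ANOVA mean squares:
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW) for k measurements per group.
def icc1(groups):
    k = len(groups[0])                       # measurements per group (balanced)
    n = len(groups)                          # number of groups (e.g., pens)
    grand = mean(x for g in groups for x in g)
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical duplicate environmental samples per pen, already
# log-transformed as in the traditional approach the paper improves on:
pens = [(2.1, 2.3), (4.0, 3.8), (1.0, 1.2), (5.5, 5.1)]
print(f"ICC = {icc1(pens):.2f}")
```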

  19. Health search engine with e-document analysis for reliable search results.

    Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine


    After a review of the existing practical solutions available to citizens to retrieve eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions, and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and reformulation of the query, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine), and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as an increase of database coverage, explanation of the original functionalities, and adaptability to its audience. Thanks to the evaluation outcomes, WRAPIN is now in operation on the HON web site, free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user looks for trustworthy health and medical information or wants to check automatically the doubtful content of a Web page.

  20. Largely Reduced Grid Densities in a Vibrational Self-Consistent Field Treatment Do Not Significantly Impact the Resulting Wavenumbers

    Oliver M. D. Lutz


    Especially for larger molecules relevant to the life sciences, vibrational self-consistent field (VSCF) calculations can become unmanageably demanding even when only first- and second-order potential coupling terms are considered. This paper investigates to what extent the grid density of the VSCF's underlying potential energy surface can be reduced without sacrificing accuracy of the resulting wavenumbers. Including single-mode and pair contributions, a reduction to eight points per mode did not introduce a significant deviation but improved the computational efficiency by a factor of four. A mean unsigned deviation of 1.3% from experiment could be maintained for the fifteen molecules under investigation, and the approach was found to be applicable to rigid, semi-rigid, and soft vibrational problems alike. Deprotonated phosphoserine, stabilized by two intramolecular hydrogen bonds, was investigated as an exemplary application.

  1. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)


    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on average.
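
A bias check of this kind can be framed as an exact binomial test: given 45 observed failures in 4071 tasks, how improbable would that count be if the true failure probability equalled a mean predicted HEP twice the observed rate? The factor-of-two predicted HEP below is an assumed value chosen to mirror the bias the report describes, not the report's actual ASEP estimate:

```python
from math import comb

# Exact binomial lower-tail probability, P(X <= k) for X ~ Binomial(n, p),
# computed directly with the stdlib comb().
def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n_tasks, n_failed = 4071, 45
observed_rate = n_failed / n_tasks          # ~0.011
predicted_hep = 2 * observed_rate           # hypothetical factor-of-two bias
p_value = binom_cdf(n_failed, n_tasks, predicted_hep)
print(f"observed rate {observed_rate:.4f}; "
      f"P(<=45 failures | HEP x2) = {p_value:.2e}")
```

A vanishingly small tail probability would indicate that the predicted HEPs systematically overestimate the observed failure rate, i.e. that the procedure is conservative.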

  2. [Influences of hostage posting on estimation of trustworthiness: the effects of voluntary posting and reliable results].

    Nakayachi, Kazuya; Watabe, Motoki


    This research examined the effects of providing a monitoring and self-sanctioning system, called "hostage posting" in economics, on the improvement of trustworthiness. We conducted two questionnaire-type experiments to compare the trust-improving effects among three conditions: (a) a voluntary provision of a monitoring and self-sanction system by the manager, (b) an imposed provision, and (c) an achievement of satisfactory management without any type of provision. A total of 561 undergraduate students participated in the experiments. Results revealed that perceived integrity and competence were improved to almost the same level in both conditions (a) and (c), whereas they were not improved in condition (b). Consistent with our previous research, these results showed that voluntary hostage posting improved trustworthiness as much as good performance did. The estimated necessity of the system, however, did not differ across these conditions. The implications for management practice and directions for future research are discussed.

  3. The European COPHES/DEMOCOPHES project: towards transnational comparability and reliability of human biomonitoring results.

    Schindler, Birgit Karin; Esteban, Marta; Koch, Holger Martin; Castano, Argelia; Koslitz, Stephan; Cañas, Ana; Casteleyn, Ludwine; Kolossa-Gehring, Marike; Schwedler, Gerda; Schoeters, Greet; Hond, Elly Den; Sepai, Ovnair; Exley, Karen; Bloemen, Louis; Horvat, Milena; Knudsen, Lisbeth E; Joas, Anke; Joas, Reinhard; Biot, Pierre; Aerts, Dominique; Lopez, Ana; Huetos, Olga; Katsonouri, Andromachi; Maurer-Chronakis, Katja; Kasparova, Lucie; Vrbík, Karel; Rudnai, Peter; Naray, Miklos; Guignard, Cedric; Fischer, Marc E; Ligocka, Danuta; Janasik, Beata; Reis, M Fátima; Namorado, Sónia; Pop, Cristian; Dumitrascu, Irina; Halzlova, Katarina; Fabianova, Eleonora; Mazej, Darja; Tratnik, Janja Snoj; Berglund, Marika; Jönsson, Bo; Lehmann, Andrea; Crettaz, Pierre; Frederiksen, Hanne; Nielsen, Flemming; McGrath, Helena; Nesbitt, Ian; De Cremer, Koen; Vanermen, Guido; Koppen, Gudrun; Wilhelm, Michael; Becker, Kerstin; Angerer, Jürgen


    between 18.9 and 45.3% for the phthalate metabolites. Plausibility control of the HBM results of all participating countries disclosed analytical shortcomings in the determination of Cd when using certain ICP/MS methods; these results were corrected by reanalysis. The COPHES/DEMOCOPHES project for the first time succeeded in performing a harmonized pan-European HBM project. All data collected can be regarded as highly reliable by the current international state of the art, since highly renowned laboratories functioned as reference laboratories. The procedure described here, which has proven successful, can be used as a blueprint for future transnational, multicentre HBM projects.

  4. The Reliability of Results from National Tests, Public Examinations, and Vocational Qualifications in England

    He, Qingping; Opposs, Dennis


    National tests, public examinations, and vocational qualifications in England are used for a variety of purposes, including the certification of individual learners in different subject areas and the accountability of individual professionals and institutions. However, there has been ongoing debate about the reliability and validity of their…

  5. Automated Energy Distribution and Reliability System: Validation Integration - Results of Future Architecture Implementation

    Buche, D. L.


    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.

  6. Electrogastrographic norms in children: toward the development of standard methods, reproducible results, and reliable normative data.

    Levy, J; Harris, J; Chen, J; Sapoznikov, D; Riley, B; De La Nuez, W; Khaskelberg, A


    Surface electrogastrography (EGG) is a noninvasive technique that detects gastric myoelectrical activity, principally the underlying pacemaker activity generated by the specialized interstitial cells of Cajal. Interest in the use of this methodology has grown because of its potential applications in describing functional gastrointestinal disorders, particularly as a tool in the evaluation of nausea, anorexia, and other dyspeptic symptoms. Fifty-five healthy volunteers (27 female), ranging in age from 6 to 18 years (mean, 11.7 years), were studied for a 1-hour baseline preprandial period and a 1-hour postprandial period after consumption of a standard 448-kcal meal. Recordings were obtained with an EGG Digitrapper or modified Polygraph (Medtronic-Synectics, Shoreview, MN). Spectral analysis by an autoregressive moving average method was used to extract numerical data on the power and frequency of gastric electrical activity from the EGG signal. The authors present normative data for healthy children and adolescents studied under a standardized protocol. Mean dominant frequency was found to be 2.9 +/- 0.40 cycles per minute preprandially and 3.1 +/- 0.35 postprandially, with 80% +/- 13% of test time spent in the normogastric range (2-4 cycles per minute) before and 85% +/- 11% after the test meal. The response of several key parameters to meal consumption was considered, and the effects of age, gender, and body mass index (BMI) on the EGG were sought. There is a postprandial increase in the rhythmicity and amplitude of gastric slow waves, as other investigators have shown in adults. Key normative values are not dependent on age, gender, or BMI. The authors discuss limitations in the data set and its interpretability. The authors establish a normative data set after developing a standardized recording protocol and test meal and show that EGG recordings can be obtained reliably in the pediatric population. Development of similar norms by investigators using
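
The study extracted the dominant gastric frequency with an autoregressive moving-average spectral method; as a simpler stand-in, the sketch below scans a discrete-Fourier periodogram of a synthetic EGG trace for its spectral peak. The 3 cycles-per-minute test signal and 1 Hz sampling rate are illustrative values, not the recording parameters of the study:

```python
import cmath
import math

# Scan a periodogram over the gastric frequency band (in cycles per minute)
# and return the frequency with the largest spectral power.
def dominant_cpm(signal, fs, lo=1.0, hi=9.0, step=0.1):
    n = len(signal)
    best_cpm, best_power = lo, -1.0
    cpm = lo
    while cpm <= hi:
        f_hz = cpm / 60.0
        # DFT coefficient at f_hz, evaluated directly
        coeff = sum(x * cmath.exp(-2j * math.pi * f_hz * k / fs)
                    for k, x in enumerate(signal))
        if abs(coeff) ** 2 > best_power:
            best_cpm, best_power = cpm, abs(coeff) ** 2
        cpm = round(cpm + step, 10)
    return best_cpm

fs = 1.0                                  # 1 sample per second (assumed)
egg = [math.sin(2 * math.pi * (3.0 / 60.0) * k / fs)
       for k in range(600)]               # 10-minute synthetic 3 cpm slow wave
print(f"dominant frequency = {dominant_cpm(egg, fs)} cpm")
```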

  7. Thermodynamic prediction of glycine polymerization as a function of temperature and pH consistent with experimentally obtained results.

    Kitadai, Norio


    Prediction of the thermodynamic behaviors of biomolecules at high temperature and pressure is fundamental to understanding the role of hydrothermal systems in the origin and evolution of life on the primitive Earth. However, the available thermodynamic datasets for amino acids, essential components of life, cannot accurately represent the experimentally observed polymerization behaviors of amino acids under hydrothermal conditions. This report presents the thermodynamic data and the revised HKF parameters for the simplest amino acid, Gly, and its polymers (GlyGly, GlyGlyGly, and DKP) based on experimental thermodynamic data from the literature. Values for the ionization states of Gly (Gly(+) and Gly(-)) and Gly peptides (GlyGly(+), GlyGly(-), GlyGlyGly(+), and GlyGlyGly(-)) were also retrieved from reported experimental data by combining group additivity algorithms. The obtained dataset enables prediction of the polymerization behavior of Gly as a function of temperature and pH, consistent with experimentally obtained results in the literature. The revised thermodynamic data for zwitterionic Gly, GlyGly, and DKP were also used to estimate the energetics of amino acid polymerization into proteins. Results show that the Gibbs energy necessary to synthesize a mole of peptide bonds is more than 10 kJ mol(-1) less than previously estimated over widely varying temperatures (e.g., 28.3 kJ mol(-1) → 17.1 kJ mol(-1) at 25 °C and 1 bar). Protein synthesis under abiotic conditions might therefore be more feasible than earlier studies have suggested.
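
The practical weight of that ~11 kJ/mol revision can be seen by converting each Gibbs energy to an equilibrium constant with K = exp(-ΔG/RT). This sketch simply evaluates that textbook relation for the two values quoted above at 25 °C:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Equilibrium constant implied by a Gibbs energy of reaction at temperature T.
def equilibrium_constant(dg_kj_per_mol, temp_k=298.15):
    return math.exp(-dg_kj_per_mol * 1000.0 / (R * temp_k))

k_old = equilibrium_constant(28.3)   # previous estimate for peptide-bond synthesis
k_new = equilibrium_constant(17.1)   # revised estimate
print(f"K(28.3 kJ/mol) = {k_old:.1e}; K(17.1 kJ/mol) = {k_new:.1e}; "
      f"ratio = {k_new / k_old:.0f}")
```

The revised value shifts the peptide-bond equilibrium roughly two orders of magnitude toward polymerization, which is the sense in which abiotic protein synthesis becomes "more feasible".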

  8. [A systematic social observation tool: methods and results of inter-rater reliability].

    Freitas, Eulilian Dias de; Camargos, Vitor Passos; Xavier, César Coelho; Caiaffa, Waleska Teixeira; Proietti, Fernando Augusto


    Systematic social observation has been used as a health research methodology for collecting information on the neighborhood physical and social environment. The objectives of this article were to describe the operationalization of direct observation of the physical and social environment in urban areas and to evaluate the instrument's reliability. The systematic social observation instrument was designed to collect information in several domains. A total of 1,306 street segments belonging to 149 different neighborhoods in Belo Horizonte, Minas Gerais, Brazil, were observed. For the reliability study, 149 segments (1 per neighborhood) were re-audited, and Fleiss' kappa was used to assess inter-rater agreement. Mean agreement was 0.57 (SD = 0.24); 53% had substantial or almost perfect agreement, and 20.4% moderate agreement. The instrument appears to be appropriate for observing neighborhood characteristics that are not time-dependent, especially urban services, property characterization, pedestrian environment, and security.
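
Fleiss' kappa, the agreement statistic used for the re-audited segments, can be computed from a table whose rows are rated items and whose cells count how many raters chose each category. A minimal sketch; the toy matrix below is illustrative, not the Belo Horizonte data:

```python
# Fleiss' kappa for m raters per item: chance-corrected mean pairwise
# agreement, kappa = (P_bar - P_e) / (1 - P_e).
def fleiss_kappa(counts):
    n_raters = sum(counts[0])                  # raters per item (constant)
    n_items = len(counts)
    total = n_items * n_raters
    # Mean per-item observed agreement.
    p_bar = sum((sum(c * c for c in row) - n_raters)
                / (n_raters * (n_raters - 1)) for row in counts) / n_items
    # Chance agreement from the marginal category proportions.
    p_e = sum((sum(row[j] for row in counts) / total) ** 2
              for j in range(len(counts[0])))
    return (p_bar - p_e) / (1 - p_e)

segments = [  # 3 auditors x 4 segments x 2 categories ("present"/"absent")
    [3, 0], [2, 1], [0, 3], [1, 2],
]
print(f"Fleiss kappa = {fleiss_kappa(segments):.2f}")
```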

  9. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.


    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  10. Validity and reliability of patient reported outcomes used in Psoriasis: results from two randomized clinical trials

    Koo John; Thompson Christine; Stone Stephen P; Bresnahan Brian W; Shikiar Richard; Revicki Dennis A


    Abstract Background Two Phase III randomized controlled clinical trials were conducted to assess the efficacy, safety, and tolerability of weekly subcutaneous administration of efalizumab for the treatment of psoriasis. Patient reported measures of psoriasis-related functionality and health-related quality of life and of psoriasis-related symptom assessments were included as part of the trials. Objective To assess the reliability, validity, and responsiveness of the patient reported outcome m...

  11. Reliability and Concurrent Validation of the IPIP Big-Five Factor Markers in China: Consistencies in Factor Structure between Internet-Obtained Heterosexual and Homosexual Samples

    Zheng, Lijun; Goldberg, Lewis R.; Zheng, Yong; Zhao, Yufang; Tang, Yonglong; Liu, Li


    Previous studies have suggested the cross-cultural generalizability of a 5-factor structure for personality traits. In this article, we analyzed the utility of 2 versions (100-item and 50-item) of the IPIP Big-Five factor markers in both heterosexual (N = 633) and homosexual (N = 437) samples in China. Factor analysis within versions showed that both versions of these IPIP measures showed clear 5-factor orthogonal structures that were nearly identical to the American structure in both subject samples. The reliabilities of the five factors were quite high except for the 50-item measure of Agreeableness. The part-whole correlations between the 100-item and 50-item factors were high, as were the factor congruence coefficients between the heterosexual and the homosexual samples. Both versions of the IPIP Big-Five factor markers were strongly correlated with the scales from the Big Five Inventory (BFI: John, Donahue & Kentle, 1991), thus providing some concurrent validation in a Chinese context. PMID:20383283

  12. Chronic obstructive pulmonary disease as a cardiovascular risk factor. Results of a case–control study (CONSISTE study)

    de Lucas-Ramos P


    Pilar de Lucas-Ramos,1,* Jose Luis Izquierdo-Alonso,2,* Jose Miguel Rodriguez-Gonzalez Moro,1 Jesus Fernandez Frances,2 Paz Vaquero Lozano,1 Jose M Bellón-Cano,1,3 on behalf of the CONSISTE study group. 1Servicio de Neumologia, Hospital General Universitario Gregorio Maranon, Madrid; 2Servicio de Neumologia, Hospital Universitario de Guadalajara, Guadalajara; 3Unidad de Investigacion, Hospital General Universitario Gregorio Maranon, Madrid, Spain. *These authors contributed equally to this work. Introduction: Chronic obstructive pulmonary disease (COPD) patients present a high prevalence of cardiovascular disease. This excess of comorbidity could be related to a common pathogenic mechanism, but it could also be explained by the existence of common risk factors. The objective of this study was to determine whether COPD patients present greater cardiovascular comorbidity than control subjects and whether COPD can be considered a risk factor per se. Methods: 1200 COPD patients and 300 control subjects were recruited for this multicenter, cross-sectional, case–control study. Results: Compared with the control group, the COPD group showed a significantly higher prevalence of ischemic heart disease (12.5% versus 4.7%; P < 0.0001), cerebrovascular disease (10% versus 2%; P < 0.0001), and peripheral vascular disease (16.4% versus 4.1%; P < 0.001). In the univariate risk analysis, COPD, hypertension, diabetes, obesity, and dyslipidemia were risk factors for ischemic heart disease. In the multivariate analysis adjusted for the remaining factors, COPD was still an independent risk factor (odds ratio: 2.23; 95% confidence interval: 1.18–4.24; P = 0.014). Conclusion: COPD patients show a high prevalence of cardiovascular disease, higher than expected given their age and the coexistence of classic cardiovascular risk factors. Keywords: COPD, cardiovascular risk, ischemic heart disease
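
The crude (unadjusted) odds ratio for ischemic heart disease can be reconstructed from the prevalences reported above (12.5% of 1200 COPD patients versus 4.7% of 300 controls). This sketch computes it with a Wald confidence interval; note it reproduces only the univariate comparison, since the 2.23 quoted in the abstract is the multivariate-adjusted estimate:

```python
import math

# Crude odds ratio and Wald 95% CI from group prevalences and sizes.
def odds_ratio_ci(p1, n1, p2, n2, z=1.96):
    a, b = p1 * n1, (1 - p1) * n1        # exposed group: with/without disease
    c, d = p2 * n2, (1 - p2) * n2        # control group: with/without disease
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

or_, lo, hi = odds_ratio_ci(0.125, 1200, 0.047, 300)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude OR (~2.9) being larger than the adjusted 2.23 is consistent with part of the excess risk being carried by the shared risk factors the multivariate model controls for.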

  13. Demands placed on waste package performance testing and modeling by some general results on reliability analysis

    Chesnut, D.A.


    Waste packages for a US nuclear waste repository are required to provide reasonable assurance of maintaining substantially complete containment of radionuclides for 300 to 1000 years after closure. The waiting time to failure for complex failure processes affecting engineered or manufactured systems is often found to be an exponentially distributed random variable. Assuming that this simple distribution can be used to describe the behavior of a hypothetical single-barrier waste package, calculations presented in this paper show that the mean time to failure (the only parameter needed to completely specify an exponential distribution) would have to be more than 10⁷ years in order to provide reasonable assurance of meeting this requirement. With two independent barriers, each would need to have a mean time to failure of only 10⁵ years to provide the same reliability. Other examples illustrate how multiple barriers can provide a strategy for not only achieving but also demonstrating regulatory compliance.
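
The single-barrier versus two-barrier comparison follows directly from the exponential model, where P(failure by t) = 1 - exp(-t/MTTF) and independent barriers fail jointly with the product of their individual probabilities. This sketch checks that a single barrier with a 10^7-year MTTF and two independent barriers with 10^5-year MTTFs give a comparably small failure probability over the 1000-year containment period:

```python
import math

# Exponential failure model: probability that a barrier with the given mean
# time to failure (MTTF) fails within t years.
def p_fail(t_years, mttf_years):
    return 1.0 - math.exp(-t_years / mttf_years)

t = 1000.0                          # upper end of the containment requirement
single = p_fail(t, 1e7)             # one barrier, MTTF 10^7 years
double = p_fail(t, 1e5) ** 2        # both of two independent 10^5-year barriers
print(f"single barrier: {single:.2e}; double barrier: {double:.2e}")
```

Both come out near 1e-4, illustrating the abstract's point: two modest barriers buy the same reliability as one barrier a hundred times better.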

  14. Investigating performance, reliability and safety parameters of photovoltaic module inverter: Test results and compliances with the standards

    Islam, Saiful; Belmans, Ronnie [Department of Electrical Engineering, Katholieke Universiteit Leuven, ESAT/ELECTA, Kasteelpark Arenberg 10, B-3001 Leuven (Belgium); Woyte, Achim [Verenigingsstraat 39, B-1000 Brussel (Belgium); Heskes, P.J.M.; Rooij, P.M. [Energy Research Centre of the Netherlands ECN, P.O. Box 1, 1755 ZG Petten (Netherlands)


    Reliability, safety, and quality requirements for a new type of photovoltaic module inverter have been identified, and its performance has been evaluated on prototypes. The laboratory tests have to show whether the so-called second-generation photovoltaic module inverter can comply with the expectations and where improvements are still necessary. Afterwards, the test results have been compared with the international standards.

  15. Event-Based Operational Semantics and a Consistency Result for Real-Time Concurrent Processes with Action Refinement

    Xiu-Li Sun; Wen-Yin Zhang; Jin-Zhao Wu


    In this paper an event-based operational interleaving semantics is proposed for real-time processes, for which action refinement and a denotational true concurrency semantics are developed and defined in terms of timed event structures. The authors characterize the timed event traces that are generated by the operational semantics in a denotational way, and show that this operational semantics is consistent with the denotational semantics in the sense that they generate the same set of timed event traces, thereby eliminating the gap between the true concurrency and interleaving semantics.

  16. No evidence for consistent long-term growth stimulation of 13 tropical tree species: results from tree-ring analysis.

    Groenendijk, Peter; van der Sleen, Peter; Vlam, Mart; Bunyavejchewin, Sarayudh; Bongers, Frans; Zuidema, Pieter A


    The important role of tropical forests in the global carbon cycle makes it imperative to assess changes in their carbon dynamics for accurate projections of future climate-vegetation feedbacks. Forest monitoring studies conducted over the past decades have found evidence for both increasing and decreasing growth rates of tropical forest trees. The limited duration of these studies restrained analyses to decadal scales, and it is still unclear whether growth changes occurred over longer time scales, as would be expected if CO2 fertilization stimulated tree growth. Furthermore, studies have so far dealt with changes in biomass gain at forest-stand level, but insights into species-specific growth changes - that ultimately determine community-level responses - are lacking. Here, we analyse species-specific growth changes on a centennial scale, using growth data from tree-ring analysis for 13 tree species (~1300 trees), from three sites distributed across the tropics. We used an established (regional curve standardization) and a new (size-class isolation) growth-trend detection method and explicitly assessed the influence of biases on the trend detection. In addition, we assessed whether aggregated trends were present within and across study sites. We found evidence for decreasing growth rates over time for 8-10 species, whereas increases were noted for two species and one showed no trend. Additionally, we found evidence for weak aggregated growth decreases at the site in Thailand and when analysing all sites simultaneously. The observed growth reductions suggest deteriorating growth conditions, perhaps due to warming. However, other causes cannot be excluded, such as recovery from large-scale disturbances or changing forest dynamics. Our findings contrast with the growth patterns that would be expected if elevated CO2 stimulated tree growth. These results suggest that commonly assumed growth increases of tropical forests may not occur, which could lead to erroneous…

  17. Korean round-robin result for new international program to assess the reliability of emerging nondestructive techniques

    Kim, Kyung Cho; Kim, Jin Gyum; Kang, Sung Sik; Jhung, Myung Jo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)


    The Korea Institute of Nuclear Safety, as a representative organization of Korea, in February 2012 participated in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques to find flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Also, lessons learned from the Korean round-robin test were summarized and discussed.

  18. Korean Round-Robin Tests Result for New International Program to Assess the Reliability of Emerging Nondestructive Techniques

    Kyung Cho Kim


    The Korea Institute of Nuclear Safety, as a representative organization of Korea, in February 2012 participated in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques to find flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Also, lessons learned from the Korean round-robin test were summarized and discussed.

  19. Resazurin Live Cell Assay: Setup and Fine-Tuning for Reliable Cytotoxicity Results.

    Rodríguez-Corrales, José Á; Josan, Jatinder S


    In vitro cytotoxicity tests allow for fast and inexpensive screening of drug efficacy prior to in vivo studies. The resazurin assay (commercialized as Alamar Blue(®)) has been extensively utilized for this purpose in 2D and 3D cell cultures, and high-throughput screening. However, improper or lack of assay validation can generate unreliable results and limit reproducibility. Herein, we report a detailed protocol for the optimization of the resazurin assay to determine relevant analytical (limits of detection, quantification, and linear range) and biological (growth kinetics) parameters, and, thus, provide accurate cytotoxicity results. Fine-tuning of the resazurin assay will allow accurate and fast quantification of cytotoxicity for drug discovery. Unlike more complicated methods (e.g., mass spectrometry), this assay utilizes fluorescence spectroscopy and, thus, provides a less costly alternative to observe changes in the reductase proteome of the cells.
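The analytical parameters named here (limits of detection and quantification, linear range) are conventionally estimated from replicate blank readings. A minimal sketch of the common 3-sigma/10-sigma convention, which is an assumption of this illustration rather than the protocol's exact procedure; the blank readings are hypothetical:

```python
import statistics

def detection_limits(blank_readings):
    """Common 3-sigma / 10-sigma convention for fluorescence assays:
    LOD = mean(blank) + 3*SD, LOQ = mean(blank) + 10*SD."""
    mu = statistics.mean(blank_readings)
    sd = statistics.stdev(blank_readings)
    return mu + 3 * sd, mu + 10 * sd

# Hypothetical blank-well fluorescence readings (arbitrary units).
blanks = [102.0, 98.0, 101.0, 99.0, 100.0]
lod, loq = detection_limits(blanks)
```

Signals between the LOQ and the upper end of the linear range (checked by a calibration fit) are the ones that can be quantified reliably.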

  20. Interobserver reliability in musculoskeletal ultrasonography: results from a "Teach the Teachers" rheumatologist course

    Naredo, E.; Møller, I.; Moragues, C.


    The shoulder, wrist/hand, ankle/foot, or knee of 24 patients with rheumatic diseases were evaluated by 23 musculoskeletal ultrasound experts from different European countries randomly assigned to six groups. The participants did not reach consensus on scanning method or diagnostic criteria before ..., tendon lesions, bursitis, and power Doppler signal. Afterwards they compared the ultrasound findings and re-examined the patients together while discussing their results. RESULTS: Overall agreements were 91% for joint effusion/synovitis and tendon lesions, 87% for cortical abnormalities, 84% for tenosynovitis, 83.5% for bursitis, and 83% for power Doppler signal; kappa values were good for the wrist/hand and knee (0.61 and 0.60) and fair for the shoulder and ankle/foot (0.50 and 0.54). The principal differences in scanning method and diagnostic criteria between experts were related to dynamic...

  1. Predictive validity and reliability of the Turkish version of the risk assessment pressure sore scale in intensive care patients: results of a prospective study.

    Günes, Ülkü Yapucu; Efteli, Elçin


    Multiple pressure ulcer (PU) risk assessment instruments have been developed and tested, but there is no general consensus on which instrument to use for specific patient populations and care settings. The purpose of this study was to determine the reliability and predictive validity of the Turkish version of the Risk Assessment Pressure Sore (RAPS) instrument, which includes 12 variables - 5 from the modified Norton Scale, 3 from the Braden Scale, and 3 from other research results - for use in intensive care unit (ICU) patients. The English version of the RAPS instrument was translated into Turkish and tested for internal consistency and predictive validity (sensitivity, specificity, positive predictive value, and negative predictive value) using a convenience sample of 122 patients consecutively admitted to an ICU in Turkey. The patients were assessed within 24 hours of admission, and after that, once a week until the development of a PU or discharge from the unit. The incidence of PUs in this population was 23%. The majority of ulcers that developed were Stage I. Internal consistency of the RAPS tool was adequate (Cronbach's α = 0.81). The best balance between sensitivity and specificity for ICU patients was reached at a cut-off point of ≤ 27 (i.e., sensitivity = 74.2%, specificity = 31.8%, positive predictive value = 38.7%, and negative predictive value = 91.3%). This is lower than the cut-off point reported in other studies of the RAPS scale. In this population of ICU patients, the RAPS scale was found to have acceptable reliability and poor validity. Additional studies to evaluate the predictive validity and reliability of the RAPS scale in other patient populations and care settings are needed.
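Sensitivity, specificity, and the predictive values reported above all derive from a 2x2 confusion table. A minimal sketch; the counts below are hypothetical illustrations, not the study's data:

```python
def predictive_values(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # true positives / all who developed a PU
    specificity = tn / (tn + fp)   # true negatives / all who did not
    ppv = tp / (tp + fp)           # P(PU | flagged at-risk)
    npv = tn / (tn + fn)           # P(no PU | flagged not-at-risk)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for a cohort screened at some cut-off.
sens, spec, ppv, npv = predictive_values(tp=23, fn=8, tn=29, fp=62)
```

Note how a permissive cut-off trades specificity (many false positives) for sensitivity, which is the pattern the study reports.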

  2. The extent of food waste generation across EU-27: different calculation methods and the reliability of their results.

    Bräutigam, Klaus-Rainer; Jörissen, Juliane; Priefer, Carmen


    The reduction of food waste is seen as an important societal issue with considerable ethical, ecological and economic implications. The European Commission aims at cutting down food waste to one-half by 2020. However, implementing effective prevention measures requires knowledge of the reasons and the scale of food waste generation along the food supply chain. The available data basis for Europe is very heterogeneous and doubts about its reliability are legitimate. This mini-review gives an overview of available data on food waste generation in EU-27 and discusses their reliability against the results of own model calculations. These calculations are based on a methodology developed on behalf of the Food and Agriculture Organization of the United Nations and provide data on food waste generation for each of the EU-27 member states, broken down to the individual stages of the food chain and differentiated by product groups. The analysis shows that the results differ significantly, depending on the data sources chosen and the assumptions made. Further research is much needed in order to improve the data stock, which builds the basis for the monitoring and management of food waste.

  3. Versão abreviada da Escala Triangular do Amor: evidências de validade fatorial e consistência interna Brief version of the Triangular Love Scale: evidences of factor validity and reliability

    Valdiney Veloso Gouveia


    This study examined the psychometric parameters of a shortened version of the Triangular Love Scale, gathering evidence of its factor validity and internal consistency in the Paraíba (Brazil) context. Participants were 307 undergraduate students from João Pessoa (PB) in stable heterosexual relationships, with a mean age of 23.4 years (SD = 6.22; range 17 to 56); most were female (69.4%) and single (73%). They answered the Triangular Love Scale and demographic questions. Results supported the psychometric adequacy of the measure, which showed three components (varimax rotation) jointly accounting for 67.7% of the total variance, with satisfactory Cronbach's alphas: commitment (α = 0.88), passion (α = 0.87), and intimacy (α = 0.86). Women scored higher than men on commitment. These findings were discussed in light of the literature, confirming the adequacy of the measure, and future research was suggested.
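Cronbach's alpha, the internal-consistency statistic reported for each component above, has a simple closed form: k/(k-1) times one minus the ratio of summed item variances to the variance of total scores. A minimal sketch with hypothetical responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of equal-length score lists,
    one list per scale item, one entry per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical Likert responses: 3 items, 4 respondents.
scores = [[4, 5, 3, 5], [4, 4, 3, 5], [5, 5, 2, 4]]
alpha = cronbach_alpha(scores)
```

When the items covary strongly, total-score variance dominates the summed item variances and alpha approaches 1.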

  4. U—Series Dating of Fossil Bones:Results from Chinese Sites and Discussions on Its Reliability



    Calculations according to some open-system models point out that while a statistically significant discrepancy between the results of two U-series methods, 230Th/234U and 227Th/230Th (or 231Pa/235U), attests to a relatively recent and important uranium migration, concordant dates cannot guarantee closed-system behavior of a sample. The results for 20 fossil bones from 10 Chinese sites, 19 of which were determined by the two U-series methods, are given. Judging from independent age controls, 8 out of the 11 concordant age sets are unacceptable. The results in this paper suggest that uranium may cycle into or out of fossil bones; such geochemical events may take place at any time, and no known preserving condition may securely protect bones from being affected. So, for the sites we have studied, the U-series dating of fossil bones is of limited reliability.

  5. Interface Consistency

    Staunstrup, Jørgen


    This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications are possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.

  6. Adverse drug events in older hospitalized patients: results and reliability of a comprehensive and structured identification strategy.

    Joanna E Klopotowska

    BACKGROUND: Older patients are at high risk for experiencing Adverse Drug Events (ADEs) during hospitalization. To be able to reduce ADEs in these vulnerable patients, hospitals first need to measure the occurrence of ADEs, especially those that are preventable. However, data on preventable ADEs (pADEs) occurring during hospitalization in older patients are scarce, and no 'gold standard' for the identification of ADEs exists. METHODOLOGY: The study was conducted in three hospitals in the Netherlands in 2007. ADEs were retrospectively identified by a team of experts using a comprehensive and structured patient chart review (PCR) combined with a trigger-tool as an aid. This ADE identification strategy was applied to a cohort of 250 older hospitalized patients. To estimate the intra- and inter-rater reliabilities, Cohen's kappa values were calculated. PRINCIPAL FINDINGS: In total, 118 ADEs were detected which occurred in 62 patients. This ADE yield was 1.1 to 2.7 times higher in comparison to other ADE studies in older hospitalized patients. Of the 118 ADEs, 83 (70.3%) were pADEs; 51 pADEs (43.2% of all ADEs identified) caused serious patient harm. Patient harm caused by ADEs resulted in various events. The overall intra-rater agreement of the developed strategy was substantial (κ = 0.74); the overall inter-rater agreement was only fair (κ = 0.24). CONCLUSIONS/SIGNIFICANCE: The ADE identification strategy provided a detailed insight into the scope of ADEs occurring in older hospitalized patients, and showed that the majority of (serious) ADEs can be prevented. Several strategy-related aspects, as well as setting/study-specific aspects, may have contributed to the results gained. These aspects should be considered whenever ADE measurements need to be conducted. The results regarding pADEs can be used to design tailored interventions to effectively reduce harm caused by medication errors. Improvement of the inter-rater reliability of a PCR remains…
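Cohen's kappa, used above to quantify intra- and inter-rater agreement, corrects raw percent agreement for the agreement expected by chance. A minimal sketch with hypothetical reviewer judgements (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' nominal labels over the same cases."""
    assert len(a) == len(b)
    n = len(a)
    categories = set(a) | set(b)
    # Observed agreement: fraction of cases where the raters match.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical ADE / no-ADE judgements by two independent reviewers.
reviewer1 = ["ADE", "ADE", "no", "no", "no", "ADE", "no", "no"]
reviewer2 = ["ADE", "no", "no", "no", "ADE", "ADE", "no", "no"]
kappa = cohens_kappa(reviewer1, reviewer2)
```

Here raw agreement is 75%, yet kappa lands near 0.47 ("moderate"), showing why a chance-corrected statistic can look much less flattering than percent agreement.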

  7. Solid consistency

    Bordin, Lorenzo; Creminelli, Paolo; Mirbabayi, Mehrdad; Noreña, Jorge


    We argue that isotropic scalar fluctuations in solid inflation are adiabatic in the super-horizon limit. During the solid phase this adiabatic mode has peculiar features: constant energy-density slices and comoving slices do not coincide, and their curvatures, parameterized respectively by ζ and ℛ, both evolve in time. The existence of this adiabatic mode implies that Maldacena's squeezed limit consistency relation holds after angular average over the long mode. The correlation functions of a long-wavelength spherical scalar mode with several short scalar or tensor modes is fixed by the scaling behavior of the correlators of short modes, independently of the solid inflation action or dynamics of reheating.

  8. Model specification and the reliability of fMRI results: implications for longitudinal neuroimaging studies in psychiatry.

    Jay C Fournier

    Functional Magnetic Resonance Imaging (fMRI) is an important assessment tool in longitudinal studies of mental illness and its treatment. Understanding the psychometric properties of fMRI-based metrics, and the factors that influence them, will be critical for properly interpreting the results of these efforts. The current study examined whether the choice among alternative model specifications affects estimates of test-retest reliability in key emotion processing regions across a 6-month interval. Subjects (N = 46) performed an emotional-faces paradigm during fMRI in which neutral faces dynamically morphed into one of four emotional faces. Median voxelwise intraclass correlation coefficients (mvICCs) were calculated to examine stability over time in regions showing task-related activity as well as in bilateral amygdala. Four modeling choices were evaluated: a default model that used the canonical hemodynamic response function (HRF), a flexible HRF model that included additional basis functions, a modified CompCor (mCompCor) model that added corrections for physiological noise in the global signal, and a final model that combined the flexible HRF and mCompCor models. Model residuals were examined to determine the degree to which each pipeline met modeling assumptions. Results indicated that the choice of modeling approach impacts both the degree to which model assumptions are met and estimates of test-retest reliability. ICC estimates in the visual cortex increased from poor (mvICC = 0.31) in the default pipeline to fair (mvICC = 0.45) in the full alternative pipeline - an increase of 45%. In nearly all tests, the models with the fewest assumption violations generated the highest ICC estimates. Implications for longitudinal treatment studies that utilize fMRI are discussed.
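The mvICC metric in this record is computed voxelwise inside an fMRI pipeline, but the underlying statistic is the intraclass correlation. As a simplified illustration (a one-way random-effects ICC(1,1) rather than the study's voxelwise procedure, with hypothetical numbers):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1). `ratings` is a list of
    [session1, session2, ...] measurements, one inner list per subject."""
    n = len(ratings)        # subjects
    k = len(ratings[0])     # repeated measurements per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    # Between-subject and within-subject mean squares from one-way ANOVA.
    ms_between = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    ms_within = sum((x - sum(r) / k) ** 2 for r in ratings for x in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical test-retest values for four subjects, two sessions each.
sessions = [[10, 12], [20, 19], [30, 31], [40, 38]]
icc = icc_oneway(sessions)
```

Because between-subject spread here dwarfs the session-to-session noise, the ICC comes out close to 1; noisier retests shrink it toward 0, which is exactly what poorly specified models did in this study.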

  9. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan


    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
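The record does not spell out its computational procedure, but a common form of the reliable change index (Jacobson-Truax) divides the observed change by the standard error of the difference derived from the test's stability coefficient. A hedged sketch with hypothetical numbers:

```python
import math

def reliable_change_index(score1, score2, sd_baseline, r_xx):
    """Jacobson-Truax RCI: change divided by the standard error of the
    difference, S_diff = SD * sqrt(1 - r) * sqrt(2)."""
    sem = sd_baseline * math.sqrt(1 - r_xx)   # standard error of measurement
    s_diff = sem * math.sqrt(2)               # SE of a difference of two scores
    return (score2 - score1) / s_diff

# Hypothetical IQ retest: SD = 15, stability coefficient r = 0.90.
rci = reliable_change_index(100, 112, 15, 0.90)
# |RCI| > 1.96 would count as reliable change at the 5% level;
# here the 12-point gain falls just short of that threshold.
```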

  10. Establishing reliable cognitive change in children with epilepsy: The procedures and results for a sample with epilepsy

    van Iterson, L.; Augustijn, P.B.; de Jong, P.F.; van der Leij, A.


    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample.

  11. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan


    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…

  12. Chemical composition analysis and product consistency tests to support enhanced Hanford waste glass models: Results for the January, March, and April 2015 LAW glasses

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Riley, W. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)


    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the January, March, and April 2015 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.

  13. Chemical composition analysis and product consistency tests to support Enhanced Hanford Waste Glass Models. Results for the August and October 2014 LAW Glasses

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)


    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the August and October 2014 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.

  14. What is the optimal duration of middle-cerebral artery occlusion consistently resulting in isolated cortical selective neuronal loss in the spontaneously hypertensive rat?

    Sohail eEjaz


    Introduction and Objectives: Selective neuronal loss (SNL) in the reperfused penumbra may impact clinical recovery and is thus important to investigate. Brief proximal middle cerebral artery occlusion (MCAo) results in predominantly striatal SNL, yet cortical damage is more relevant given its behavioral implications and that thrombolytic therapy mainly rescues the cortex. Distal temporary MCAo (tMCAo) does target the cortex, but the optimal occlusion duration that results in isolated SNL has not been determined. In the present study we assessed different distal tMCAo durations looking for consistently pure SNL. Methods: Microclip distal tMCAo (md-tMCAo) was performed in ~6-month-old male spontaneously hypertensive rats (SHRs). We previously reported that 45 min md-tMCAo in SHRs results in pan-necrosis in the majority of subjects. Accordingly, three shorter MCAo durations were investigated here in decremental succession, namely 30, 22 and 15 min (n = 3, 3 and 7 subjects, respectively). Recanalization was confirmed by MR angiography just prior to brain collection at 28 days, and T2-weighted MRI was obtained for characterization of ischemic lesions. NeuN, OX42 and GFAP immunohistochemistry appraised changes in neurons, microglia and astrocytes, respectively. Ischemic lesions were categorized into three main types: (1) pan-necrosis; (2) partial infarction; and (3) SNL. Results: Pan-necrosis or partial infarction was present in all 30 min and 22 min subjects, but not in the 15 min group (p < 0.001), in which isolated cortical SNL was consistently present. MRI revealed characteristic hyperintense abnormalities in all rats with pan-necrosis or partial infarction, but no change in any 15 min subject. Conclusions: We found that 15 min distal MCAo consistently resulted in pure cortical SNL, whereas durations equal to or longer than 22 min consistently resulted in infarcts. This model may be of use to study the pathophysiology of cortical SNL and its prevention by appropriate…

  15. Adverse Drug Events in Older Hospitalized Patients: Results and Reliability of a Comprehensive and Structured Identification Strategy

    Klopotowska, Joanna E.; Wierenga, Peter C.; Stuijt, Clementine C. M.; Arisz, Lambertus; Dijkgraaf, Marcel G. W.; Kuks, Paul F. M.; Asscheman, Henk; de Rooij, Sophia E.; Lie-A-Huen, Loraine; Smorenburg, Susanne M.


    Background Older patients are at high risk for experiencing Adverse Drug Events (ADEs) during hospitalization. To be able to reduce ADEs in these vulnerable patients, hospitals first need to measure the occurrence of ADEs, especially those that are preventable. However, data on preventable ADEs (pADEs) occurring during hospitalization in older patients are scarce, and no ‘gold standard’ for the identification of ADEs exists. Methodology The study was conducted in three hospitals in the Netherlands in 2007. ADEs were retrospectively identified by a team of experts using a comprehensive and structured patient chart review (PCR) combined with a trigger-tool as an aid. This ADE identification strategy was applied to a cohort of 250 older hospitalized patients. To estimate the intra- and inter-rater reliabilities, Cohen’s kappa values were calculated. Principal Findings In total, 118 ADEs were detected which occurred in 62 patients. This ADE yield was 1.1 to 2.7 times higher in comparison to other ADE studies in older hospitalized patients. Of the 118 ADEs, 83 (70.3%) were pADEs; 51 pADEs (43.2% of all ADEs identified) caused serious patient harm. Patient harm caused by ADEs resulted in various events. The overall intra-rater agreement of the developed strategy was substantial (κ = 0.74); the overall inter-rater agreement was only fair (κ = 0.24). Conclusions/Significance The ADE identification strategy provided a detailed insight into the scope of ADEs occurring in older hospitalized patients, and showed that the majority of (serious) ADEs can be prevented. Several strategy related aspects, as well as setting/study specific aspects, may have contributed to the results gained. These aspects should be considered whenever ADE measurements need to be conducted. The results regarding pADEs can be used to design tailored interventions to effectively reduce harm caused by medication errors. Improvement of the inter-rater reliability of a PCR remains

  16. Chip Multithreaded Consistency Model

    Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang


    Multithreading is the development trend of high-performance processors, and the memory consistency model is essential to the correctness, performance and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions imposed on memory event ordering by chip multithreaded consistency are presented and formalized. Using the notion of the critical cycle introduced by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correctness criterion of the sequential consistency model. The chip multithreaded consistency model offers higher performance than sequential consistency while ensuring software compatibility: the execution result on a multithreaded processor is the same as on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed; Godson-2 supports the model through an exception scheme based on the sequential memory access queue of each thread.

  17. Escala de bem-estar afetivo no trabalho (Jaws: evidências de validade fatorial e consistência interna Job-related affective well-being scale (Jaws: evidences of factor validity and reliability

    Valdiney Veloso Gouveia


    This study adapted a measure of job-related affective well-being for the Brazilian milieu, gathering evidence of the factor validity and internal consistency of the Job-Related Affective Well-Being Scale (JAWS) and assessing whether its scores are influenced by participants' gender and age. Participants were 298 individuals employed in small and mid-sized shopping centers in the city of João Pessoa (PB); most were female (76.8%), with a mean age of 26 years (SD = 6.87). A principal component analysis (promax rotation) revealed two components that jointly accounted for 48.1% of the total variance: positive affect (α = 0.94; 14 items) and negative affect (α = 0.87; 13 items); a general factor of job-related affective well-being was also computed (α = 0.95; 27 items). Participants' scores on these factors were not influenced by gender or age. These results are discussed in light of what has been written about the parameters of this scale and the relation of affect to these demographic variables.

  18. Evaluating Proposed Investments in Power System Reliability and Resilience: Preliminary Results from Interviews with Public Utility Commission Staff

    LaCommare, Kristina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Larsen, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Eto, Joseph [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)


    Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).

  19. Reliability of the Functional Reach Test and the influence of anthropometric characteristics on test results in subjects with hemiparesis.

    Martins, Emerson Fachin; de Menezes, Lidiane Teles; de Sousa, Pedro Henrique Côrtes; de Araujo Barbosa, Paulo Henrique Ferreira; Costa, Abraão Souza


    First designed as an alternative method of assessing balance and susceptibility to falls among the elderly, the Functional Reach Test (FR) has also been used among patients with hemiparesis. This study therefore aimed to describe the intra- and inter-rater and test-retest reliability of the FR measure in subjects with and without hemiparesis, while verifying anthropometric influences on the measurements. The FR was administered to a sample of subjects with hemiparesis and to a control group that was matched by gender and age. A two-way analysis of variance was used to verify the intra-rater reliability, which was calculated from the differences between the averages of the measures obtained during single, double or triple trials. The intra-class correlation coefficient (ICC) was used and the data were plotted using the Bland-Altman method. Associations were analyzed using Pearson's correlation coefficient. In general, the intra-rater analysis did not show significant differences between the measures for the single, double or triple trials. Excellent ICC values were observed, and there were no significant associations with anthropometric parameters for the hemiparesis or control subjects. The FR showed good reliability for patients with and without hemiparesis, and the test measurements were not significantly associated with the anthropometric characteristics of the subjects.
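The Bland-Altman agreement analysis mentioned above reduces to the mean of the paired differences (the bias) and 95% limits of agreement at ±1.96 standard deviations around it. A minimal sketch, not taken from the paper; the function name and sample data are illustrative only:

```python
import numpy as np

def bland_altman_limits(rater1, rater2):
    """Bias (mean paired difference) and 95% limits of agreement
    between two sets of paired measurements."""
    diffs = np.asarray(rater1, float) - np.asarray(rater2, float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired reach measurements (cm) from two raters:
bias, (lower, upper) = bland_altman_limits([25.0, 30.0, 35.0, 40.0],
                                           [25.5, 30.5, 35.5, 40.5])
```

In a Bland-Altman plot, the differences are then plotted against the pairwise means, with horizontal lines at `bias`, `lower`, and `upper`.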

  20. Chemical composition analysis and product consistency tests to support enhanced Hanford waste glass models. Results for the third set of high alumina outer layer matrix glasses

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States)


    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for 14 simulated high level waste glasses fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions. The measured chemical composition data are reported and compared with the targeted values for each component for each glass. All of the measured sums of oxides for the study glasses fell within the interval of 96.9 to 100.8 wt %, indicating recovery of all components. Comparisons of the targeted and measured chemical compositions showed that the measured values for the glasses met the targeted concentrations within 10% for those components present at more than 5 wt %. The PCT results were normalized to both the targeted and measured compositions of the study glasses. Several of the glasses exhibited increases in normalized concentrations (NCi) after the canister centerline cooled (CCC) heat treatment. Five of the glasses, after the CCC heat treatment, had NCB values that exceeded that of the Environmental Assessment (EA) benchmark glass. These results can be combined with additional characterization, including X-ray diffraction, to determine the cause of the higher release rates.

  1. Self-consistent kinetic simulations of lower hybrid drift instability resulting in electron current driven by fusion products in tokamak plasmas

    Cook, J W S; Dendy, R O


    We present particle-in-cell (PIC) simulations of minority energetic protons in deuterium plasmas, which demonstrate a collective instability responsible for emission near the lower hybrid frequency and its harmonics. The simulations capture the lower hybrid drift instability in a regime relevant to tokamak fusion plasmas, and show further that the excited electromagnetic fields collectively and collisionlessly couple free energy from the protons to directed electron motion. This results in an asymmetric tail antiparallel to the magnetic field. We focus on obliquely propagating modes under conditions approximating the outer mid-plane edge in a large tokamak, through which there pass confined centrally born fusion products on banana orbits that have large radial excursions. A fully self-consistent electromagnetic relativistic PIC code representing all vector field quantities and particle velocities in three dimensions as functions of a single spatial dimension is used to model this situation, by evolving the in...

  2. Quantitative CT assessment in chronic obstructive pulmonary disease patients: Comparison of the patients with and without consistent clinical symptoms and pulmonary function results

    Nam, Boda; Hwang, Jung Hwa [Dept. of Radiology, Soonchunhyang University Hospital, Seoul (Korea, Republic of); Lee, Young Mok [Bangbae GF Allergy Clinic, Seoul (Korea, Republic of); Park, Jai Soung [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of); Jou, Sung Shick [Dept. of Radiology, Soonchunhyang University Cheonan Hospital, Cheonan (Korea, Republic of); Kim, Young Bae [Dept. of Preventive Medicine, Soonchunhyang University College of Medicine, Cheonan (Korea, Republic of)


    We compared the clinical and quantitative CT measurement parameters between chronic obstructive pulmonary disease (COPD) patients with and without consistent clinical symptoms and pulmonary function results. This study included 60 patients having a clinical diagnosis of COPD, who underwent chest CT scan and pulmonary function tests. These 60 patients were classified into typical and atypical groups, which were further sub-classified into 4 groups, based on their dyspnea score and the result of pulmonary function tests [typical 1: mild dyspnea and pulmonary function impairment (PFI); typical 2: severe dyspnea and PFI; atypical 1: mild dyspnea and severe PFI; atypical 2: severe dyspnea and mild PFI]. Quantitative measurements of the CT data for emphysema, bronchial wall thickness and air-trapping were performed using software analysis. Comparative statistical analysis was performed between the groups. The CT emphysema index correlated well with the results of the pulmonary function test (typical 1 vs. atypical 1, p = 0.032), and the bronchial wall area ratio correlated with the dyspnea score (typical 1 vs. atypical 2, p = 0.033). CT air-trapping index also correlated with the results of the pulmonary function test (typical 1 vs. atypical 1, p = 0.012) and dyspnea score (typical 1 vs. atypical 2, p < 0.001), and was found to be the most significant parameter between the typical and atypical groups. Quantitative CT measurements for emphysema and airways correlated well with the dyspnea score and pulmonary function results in patients with COPD. Air-trapping was the most significant parameter between the typical vs. atypical group of COPD patients.

  3. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    Fitzgibbons, Megan; Meert, Deborah


    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  5. Development of quartz c-axis crossed/single girdles under simple-pure shear deformation: Results of visco-plastic self-consistent modeling

    Nie, Guanjun; Shan, Yehua


    Quartz c-axis fabrics are widely used to determine the shear plane in ductile shear zones, based upon an assumption that the shear plane is perpendicular to both the central segment of quartz c-axis crossed girdle and single girdle. In this paper the development of quartz c-axis fabric under simple-pure shear deformation is simulated using the visco-plastic self-consistent (VPSC) model so as to re-examine this assumption. In the case of no or weak dynamic recrystallization, the simulated crossed girdles have a central segment perpendicular or nearly perpendicular to the maximum principal finite strain direction (X) and the XY finite strain plane, and at a variable angle relative to the imposed kinematic framework that is dependent on the modeled flow vorticity and finite strain. These crossed girdles have a symmetrical skeleton with respect to the finite strain axes, regardless of the bulk strain and the kinematic vorticity, and rotate in a way similar to the shear sense with increasing bulk strain ratio. The larger the vorticity number the more asymmetrical their legs tend to be. In the case of strong dynamic recrystallization and large bulk strain, under simple shear the crossed girdle switches into single girdles, sub-perpendicular to the shear plane, by losing the weak legs. The numerical results in our models do not confirm the above-mentioned assumption.

  6. Evaluation of a laboratory system intended for use in physicians' offices. I. Reliability of results produced by trained laboratory technologists.

    Belsey, R; Goitein, R K; Baer, D M


    The accuracy and precision of the Kodak DT-60 tabletop chemistry analyzer were evaluated in the clinical laboratory at the Portland (Ore) Veterans Administration Medical Center, and its operational throughput and cost were estimated. All DT-60 tests that were studied exhibited clinically acceptable precision and, except for the glucose method, accuracy. The accuracy of the glucose method was indeterminate with the available data. Throughput under field conditions was found to be less than half of the manufacturer's claim. The estimated supply cost could vary from $1.20 to $5.49 per test, depending on the test type and the number of assays expected to be performed daily. The instrument seems to be accurate, precise, and generally reliable when operated by professional medical technologists.

  7. Reliability and confirmatory factorial analysis of the Multifactor Coping for Adolescents Inventory (IMCA-43) with Brazilian students

    Marcos Alencar Abaide Balbinotti


    Coping is a multidimensional construct concerning how people face and deal with stressful situations. Research has shown the importance of these coping responses. This study aimed to measure the internal consistency of the Multifactorial Coping Inventory for Adolescents (IMCA-43) and to evaluate model fit through confirmatory factor analysis. The sample consisted of 285 elementary and high school students of both sexes, aged 13 to 18 years; data were collected in group sessions in their classrooms. Cronbach's alpha coefficients (0.71 to 0.89) were satisfactory. Fit was poor for the three-dimensional (χ²/df = 2.85; GFI = 0.757; AGFI = 0.724; RMSEA = 0.081), four-dimensional (χ²/df = 2.44; GFI = 0.724; AGFI = 0.695; RMSEA = 0.071) and five-dimensional (χ²/df = 2.32; GFI = 0.750; AGFI = 0.723; RMSEA = 0.068) models. The results indicate that further research is needed to improve certain metric qualities of this instrument.
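Cronbach's alpha, the internal-consistency index reported in several of the records above, reduces to a one-line formula over an item-score matrix: α = k/(k−1) · (1 − Σ item variances / variance of total score). A minimal sketch, not from any of the studies; the function name and data are illustrative:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Toy data: two perfectly consistent items give alpha = 1.0
scores = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
alpha = cronbach_alpha(scores)
```

Values near 1 indicate that the items covary strongly; as the abstract in the head section notes, a high alpha says nothing about responsiveness.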

  8. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund


    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute SD scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training were assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a master's or doctoral degree.

  9. Evaluation of a laboratory system intended for use in physicians' offices. II. Reliability of results produced by health care workers without formal or professional laboratory training.

    Belsey, R; Vandenbark, M; Goitein, R K; Baer, D M


    The Kodak DT-60 tabletop chemistry analyzer was evaluated with standardized protocols to determine the system's precision and accuracy when operated by four volunteers (a secretary, a licensed practical nurse, and two family medicine residents) in a simulated office laboratory. The variability of the results was found to be significantly greater than the variability of results produced by medical technologists who analyzed the same samples during the same study period with another DT-60 placed in the hospital laboratory. The source(s) of increased variance needs to be identified so the system can be modified or new control procedures can be developed to ensure the reliability of results used in patient care. Prospective purchasers, manufacturers, and patients need this kind of objective information about the reliability of results produced by systems intended for use in physicians' office laboratories.

  10. The gamma-ray and neutrino sky: a consistent picture of Fermi-LAT, H.E.S.S., Milagro, and IceCube results

    Gaggero, Daniele; Marinelli, Antonio; Urbano, Alfredo; Valli, Mauro


    In this Letter we propose a novel interpretation of the anomalous TeV gamma-ray diffuse emission observed by Milagro in the inner Galactic plane consistent with the signal reported by H.E.S.S. in the Galactic ridge; remarkably, our picture also accounts for a relevant portion of the neutrino flux measured by IceCube. Our scenario is based on a recently proposed phenomenological model characterized by radially-dependent cosmic-ray (CR) transport properties. Designed to reproduce both Fermi-LAT gamma-ray data and local CR observables, this model offers for the first time a self-consistent picture of both the GeV and the TeV sky.

  11. Software reliability models for critical applications

    Pham, H.; Pham, M.


    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying existing software reliability models and proposes a state-of-the-art software reliability model relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  13. Examining the quality of the 'Healthy Eating and Physical Activity in Schools' (HEPS) quality checklist: German results on usability and reliability.

    Dadaczynski, Kevin; Boye, Jutta


    The aim of the present study was to examine the usability and reliability of the HEPS quality checklist (Healthy Eating and Physical Activity in Schools), an instrument developed to assess the quality of school-based programmes on healthy eating and physical activity. With regard to usability, health promotion experts (n = 15) were asked to apply the HEPS quality checklist and to fill out a questionnaire about its comprehensibility and usability. To examine inter-rater reliability (IRR) a criteria-based selection of German school programmes on healthy eating and physical activity (n = 14) was randomly allocated to two programme pools and assessed independently by the authors. Results of the pilot testing revealed a high overall satisfaction with the HEPS quality checklist and a high willingness to use it or to recommend it to others. Furthermore, the checklist was perceived to be comprehensive and clearly structured. The assessment results of programme pool 1 revealed unsatisfactory Cohen's Kappa coefficients (IRR) and moderate intra-class correlations (ICC). After the HEPS manual guide had been amended with regard to its anchoring, the results of programme pool 2 showed substantial improvements with regard to IRR and ICC. In summary, the adapted HEPS quality checklist is a usable and reliable instrument for the quality assessment of school-based programmes on healthy eating and physical activity. The findings suggest that the HEPS checklist should be applied by two sufficiently trained raters.
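Cohen's kappa, used in the HEPS study above to quantify inter-rater reliability, corrects the observed agreement rate for the agreement expected by chance from each rater's marginal category frequencies. A minimal unweighted-kappa sketch (function name and ratings are illustrative, not taken from the study):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical scores."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    po = (a == b).mean()  # observed agreement
    # Chance agreement: product of each rater's marginal frequencies
    pe = sum((a == c).mean() * (b == c).mean() for c in categories)
    return (po - pe) / (1.0 - pe)

# Toy ratings of four programmes by two raters (categories 0/1):
kappa = cohens_kappa([0, 1, 0, 1], [0, 1, 1, 1])  # → 0.5
```

Kappa of 1 means perfect agreement; values near 0 mean agreement no better than chance, which is why raw percent agreement alone can be misleading.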

  14. Job-Related Affective Well-Being Scale (JAWS): evidence of factor validity and reliability

    Valdiney Veloso Gouveia


    This study aimed at adapting a measure of job-related affective well-being for the Brazilian milieu. Specifically, it sought evidence of the factor validity and reliability of the Job-Related Affective Well-Being Scale (JAWS), assessing whether its scores are influenced by participants' gender and age. The participants were 298 individuals employed in small or medium-sized shopping malls in the city of João Pessoa, PB; most of them were female (76.8%), with a mean age of 26 years (SD = 6.87). A principal component analysis (with promax rotation) was performed, revealing two components that jointly accounted for 48.1% of the total variance. They were named positive affect (α = .94; 14 items) and negative affect (α = .87; 13 items). A general factor of affective well-being was also identified (α = .95; 27 items). Participants' scores on these factors were not influenced by their gender or age. These findings are discussed based on literature that describes the psychometric parameters of the JAWS as well as the correlation of affects with demographic variables.

  15. Comet dust as a mixture of aggregates and solid particles: model consistent with ground-based and space-mission results

    Kolokolova, L


    The most successful model of comet dust presents comet particles as aggregates of submicron grains. It qualitatively explains the spectral and angular change in comet brightness and polarization and is consistent with the thermal infrared data and the composition of comet dust obtained in situ for comet 1P/Halley. However, it experiences some difficulties in providing a quantitative fit to the observational data. Here we present a model that considers comet dust as a mixture of aggregates and compact particles. The model is based on the Giotto and Stardust mission findings that both aggregates (made mainly of organics, silicates, and carbon) and solid silicate particles are present in comet dust. We simulate aggregates as Ballistic Cluster-Cluster Aggregates (BCCA) and compact particles as polydisperse spheroids with some distribution of the aspect ratio. The particles follow a power-law size distribution with power -3, close to the one obtained for comet dust in situ, at ...

  16. The Validity and Reliability of the Mobbing Scale (MS)

    Yaman, Erkan


    The aim of this research is to develop the Mobbing Scale and examine its validity and reliability. The sample of the study consisted of 515 persons from Sakarya and Bursa. In this study, construct validity, internal consistency, test-retest reliability, and item analysis of the scale were examined. As a result of factor analysis for construct…

  17. Reliability considerations of NDT by probability of detection (POD). Determination using ultrasound phased array. Results from a project in frame of the German nuclear safety research program

    Kurz, Jochen H. [Fraunhofer-Institut fuer Zerstoerungsfreie Pruefverfahren (IZEP), Saarbruecken (Germany); Dugan, Sandra; Juengert, Anne [Stuttgart Univ. (Germany). Materialpruefungsanstalt (MPA)


    Reliable assessment procedures are an important aspect of maintenance concepts. Non-destructive testing (NDT) methods are an essential part of a variety of maintenance plans. Fracture mechanical assessments require knowledge of flaw dimensions, loads and material parameters. NDT methods are able to acquire information on all of these areas. However, it has to be considered that the level of detail of the information depends on the case investigated and therefore on the applicable methods. Reliability aspects of NDT methods are of importance if quantitative information is required. Different design concepts, e.g. the damage tolerance approach in aerospace, already include reliability criteria for the NDT methods applied in maintenance plans. NDT is also an essential part of the construction and maintenance of nuclear power plants. In Germany, the type and extent of inspection are specified in the Safety Standards of the Nuclear Safety Standards Commission (KTA). Only certified inspections are allowed in the nuclear industry. The qualification of NDT is carried out in the form of performance demonstrations of the inspection teams and the equipment, witnessed by an authorized inspector. The results of these tests are mainly statements regarding the detection capabilities of certain artificial flaws. In other countries, e.g. the U.S., additional blind tests on test blocks with hidden and unknown flaws may be required, in which a certain percentage of these flaws has to be detected. Knowledge of the probability of detection (POD) curves of specific flaws under specific testing conditions is often not available. This paper shows the results of a research project designed for POD determination of ultrasound phased array inspections of real and artificial cracks. A further objective of this project was to generate quantitative POD results. The distribution of the crack sizes of the specimens and the inspection planning is discussed, and results of the ultrasound inspections are presented.
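A basic hit/miss POD analysis, of the kind discussed in the record above, estimates the fraction of flaws detected as a function of flaw size. A simplified binned sketch; the binning approach, function name, and data are illustrative only (projects like this typically also fit parametric POD curves, e.g. log-logistic models):

```python
import numpy as np

def empirical_pod(flaw_sizes, detected, bin_edges):
    """Empirical probability of detection: the fraction of flaws
    detected within each flaw-size bin (hit/miss analysis)."""
    sizes = np.asarray(flaw_sizes, float)
    hits = np.asarray(detected, float)          # 1 = detected, 0 = missed
    idx = np.digitize(sizes, bin_edges) - 1     # bin index per flaw
    pod = np.full(len(bin_edges) - 1, np.nan)   # NaN for empty bins
    for b in range(len(bin_edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            pod[b] = hits[in_bin].mean()
    return pod

# Hypothetical crack sizes (mm) and detection outcomes:
pod = empirical_pod([0.5, 0.5, 1.5, 1.5], [0, 1, 1, 1], bin_edges=[0, 1, 2])
```

The resulting curve typically rises with flaw size; the size at which POD reaches 90% (often with 95% confidence, "a90/95") is a common acceptance criterion.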

  18. Rapid, reliable geodetic data analysis for hazard response: Results from the Advanced Rapid Imaging and Analysis (ARIA) project

    Owen, S. E.; Simons, M.; Hua, H.; Yun, S.; Cruz, J.; Webb, F.; Rosen, P. A.; Fielding, E. J.; Moore, A. W.; Polet, J.; Liu, Z.; Agram, P. S.; Lundgren, P.


    ARIA is a joint JPL/Caltech coordinated project to automate InSAR and GPS imaging capabilities for scientific understanding, hazard response, and societal benefit. Geodetic imaging's unique ability to capture surface deformation in high spatial and temporal resolution allows us to resolve the fault geometry and distribution of slip associated with earthquakes in high spatial and temporal detail. In certain cases, it can be complementary to seismic data, providing constraints on location, geometry, or magnitude that are difficult to determine with seismic data alone. In addition, remote sensing with SAR provides change detection and damage assessment capabilities for earthquakes, floods and other disasters, and can image even at night or through clouds. We have built an end-to-end prototype geodetic imaging data system that forms the foundation for a hazard response and science analysis capability that integrates InSAR, high-rate GPS, seismology, and modeling to deliver monitoring, science, and situational awareness products. This prototype incorporates state-of-the-art InSAR and GPS analysis algorithms from technologists and scientists. The products have been designed, and a feasibility study conducted, in collaboration with USGS scientists in the earthquake and volcano science programs. We will present results that show the capabilities of this data system in terms of latency, data processing capacity, quality of automated products, and feasibility of use for analysis of large SAR and GPS data sets and for earthquake response activities.

  19. Self-Consistent-Field Method and τ-Functional Method on Group Manifold in Soliton Theory: a Review and New Results

    Seiya Nishiyama


    The maximally-decoupled method has been considered as a theory that applies the basic idea of an integrability condition to certain multiply parametrized symmetries. The method is regarded as a mathematical tool to describe a symmetry of a collective submanifold in which a canonicity condition makes the collective variables an orthogonal coordinate system. For this aim we adopt a concept of curvature unfamiliar in the conventional time-dependent (TD) self-consistent field (SCF) theory. Our basic idea lies in the introduction of a Lagrangian manner, familiar from fluid dynamics, to describe a collective coordinate system. This enables us to take a one-form which is linearly composed of a TD SCF Hamiltonian and infinitesimal generators induced by collective-variable differentials of a canonical transformation on a group. The integrability condition of the system reads curvature C = 0. Our method is constructed so as to manifest the structure of the group under consideration. To go beyond the maximally-decoupled method, we have aimed to construct an SCF theory, i.e., a υ (external parameter)-dependent Hartree-Fock (HF) theory. Toward this goal, the υ-HF theory has been reconstructed on an affine Kac-Moody algebra along the lines of soliton theory, using infinite-dimensional fermions. An infinite-dimensional fermion operator is introduced through a Laurent expansion of finite-dimensional fermion operators with respect to degrees of freedom of the fermions related to a υ-dependent potential with Υ-periodicity. A bilinear equation for the υ-HF theory has been transcribed onto the corresponding τ-function using the regular representation of the group and the Schur polynomials. The υ-HF SCF theory on an infinite-dimensional Fock space F∞ leads to a dynamics on an infinite-dimensional Grassmannian Gr∞ and may describe more precisely such a dynamics on the group manifold. A finite-dimensional Grassmannian is identified with a Gr

  20. Results of 45 arthroscopic Bankart procedures: Does the ISIS remain a reliable prognostic assessment after 5 years?

    Boughebri, Omar; Maqdes, Ali; Moraiti, Constantina; Dib, Choukry; Leclère, Franck Marie; Valenti, Philippe


    The Instability Severity Index Score (ISIS) includes preoperative clinical and radiological risk factors to select patients who can benefit from an arthroscopic Bankart procedure with a low rate of recurrence. Patients who underwent an arthroscopic Bankart for anterior shoulder instability with an ISIS lower than or equal to four were assessed after a minimum of 5-year follow-up. Forty-five shoulders were assessed at a mean of 79 months (range 60-118 months). Average age was 29.4 years (range 17-58 years) at the time of surgery. Postoperative functions were assessed by the Walch and Duplay and the Rowe scores for 26 patients; an adapted telephonic interview was performed for the 19 remaining patients who could not be reassessed clinically. A failure was defined by the recurrence of an anterior dislocation or subluxation. Patients were asked whether they were finally very satisfied, satisfied or unhappy. The mean Walch and Duplay score at last follow-up was 84.3 (range 35-100). The final result for these patients was excellent in 14 patients (53.8 %), good in seven cases (26.9 %), poor in three patients (11.5 %) and bad in two patients (7.7 %). The mean Rowe score was 82.6 (range 35-100). Thirty-nine patients (86.7 %) were subjectively very satisfied or satisfied, and six (13.3 %) were unhappy. Four patients (8.9 %) had a recurrence of frank dislocation with a mean delay of 34 months (range 12-72 months). Three of them had a Hill-Sachs lesion preoperatively. Two patients had a preoperative ISIS at 4 points and two patients at 3 points. The selection based on the ISIS allows a low rate of failure after an average term of 5 years. Lowering the limit for indication to 3 points allows to avoid the association between two major risk factors for recurrence, which are valued at 2 points. The existence of a Hill-Sachs lesion is a stronger indicator for the outcome of instability repair. Level IV, Retrospective Case Series, Treatment Study.

  1. MEMS reliability

    Hartzell, Allyson L; Shea, Herbert R


    This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.

  2. Reliability of Periapical Radiographs and Orthopantomograms in Detection of Tooth Root Protrusion in the Maxillary Sinus: Correlation Results with Cone Beam Computed Tomography

    Bassam A. Hassan


    Objectives: The purpose of the present study was to investigate the reliability of both periapical radiographs and orthopantomograms for exact detection of tooth root protrusion in the maxillary sinus by correlating the results with cone beam computed tomography. Material and methods: A database of 1400 patients scanned with cone beam computed tomography (CBCT) was searched for matching periapical (PA) radiographs and orthopantomogram (OPG) images of maxillary premolars and molars. Matching OPG image datasets of 101 patients with 628 teeth and PA radiograph datasets of 93 patients with 359 teeth were identified. Four observers assessed the relationship between the apex of the tooth root and the maxillary sinus per tooth on PA radiographs, OPG and CBCT images using the following classification: root tip is in the sinus (class 1), root tip is against the sinus wall (class 2) and root tip is not in the sinus (class 3). Results: Overall correlation between OPG and CBCT image scores was 50%, 26% and 56.1% for class 1, class 2 and class 3, respectively (Cohen’s kappa [weighted] = 0.1). Overall correlation between PA radiographs and CBCT images was 75.8%, 15.8% and 56.9% for class 1, class 2 and class 3, respectively (Cohen’s kappa [weighted] = 0.24). In both the OPG image and the PA radiograph datasets, class 1 correlation was most frequently observed with the first and second molars. Conclusions: The results demonstrated that neither periapical radiographs nor orthopantomograms are reliable for determining the exact relationship between the apex of the tooth root and the maxillary sinus floor. Periapical radiography is slightly more reliable than orthopantomography in determining this relationship.

  3. Analytic method for result reliability of radar quality assessment

    王涛; 欧阳林涛; 毕增军; 侯晓东


    To obtain reliable results for radar equipment quality assessment, and starting from the shortcomings of traditional quality assessment and the practical realities of assessing radar equipment systems, this paper proposes a method for measuring the reliability of assessment results based on combining weighted DS evidence theory with expert evaluation, and tests it on several examples. Test results show that the proposed method is feasible and effective for assessing the quality of radar equipment, thus providing powerful technical support for the robust assessment of radar equipment systems.
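
    The core of such a weighted DS combination is Dempster's rule. A minimal sketch in plain Python follows; the frame of discernment, the quality grades, and both mass assignments are invented for illustration, and the expert-weighting step is omitted:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts mapping frozenset -> mass)
        with Dempster's rule of combination."""
        combined = {}
        conflict = 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: sources cannot be combined")
        k = 1.0 - conflict
        return {s: w / k for s, w in combined.items()}

    # Two hypothetical expert assessments over quality grades {good, fair}
    good, fair = frozenset(["good"]), frozenset(["fair"])
    m1 = {good: 0.6, fair: 0.1, frozenset(["good", "fair"]): 0.3}
    m2 = {good: 0.5, fair: 0.2, frozenset(["good", "fair"]): 0.3}
    fused = dempster_combine(m1, m2)
    ```

    Mass on the composite set {good, fair} expresses an expert's ignorance between the grades; the rule renormalises away the conflicting mass (0.17 in this example) before reporting the fused beliefs.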

  4. Reliable knowledge discovery

    Dai, Honghua; Smirnov, Evgueni


    Reliable Knowledge Discovery focuses on theory, methods, and techniques for RKDD, a new sub-field of KDD. It studies the theory and methods to assure the reliability and trustworthiness of discovered knowledge and to maintain the stability and consistency of knowledge discovery processes. RKDD has a broad spectrum of applications, especially in critical domains such as medicine, finance, and the military. Reliable Knowledge Discovery also presents methods and techniques for designing robust knowledge-discovery processes. Approaches to assessing the reliability of the discovered knowledge are introduced.

  5. Software reliability

    Bendell, A


    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.

  6. Reliability of periapical radiographs and orthopantomograms in detection of tooth root protrusion in the maxillary sinus: correlation results with cone beam computed tomography.

    Hassan, Bassam A


    The purpose of the present study was to investigate the reliability of both periapical radiographs and orthopantomograms for exact detection of tooth root protrusion in the maxillary sinus by correlating the results with cone beam computed tomography. A database of 1400 patients scanned with cone beam computed tomography (CBCT) was searched for matching periapical (PA) radiographs and orthopantomogram (OPG) images of maxillary premolars and molars. Matching OPG image datasets of 101 patients with 628 teeth and PA radiograph datasets of 93 patients with 359 teeth were identified. Four observers assessed the relationship between the apex of the tooth root and the maxillary sinus per tooth on PA radiographs, OPG and CBCT images using the following classification: root tip is in the sinus (class 1), root tip is against the sinus wall (class 2) and root tip is not in the sinus (class 3). Overall correlation between OPG and CBCT image scores was 50%, 26% and 56.1% for class 1, class 2 and class 3, respectively (Cohen's kappa [weighted] = 0.1). Overall correlation between PA radiographs and CBCT images was 75.8%, 15.8% and 56.9% for class 1, class 2 and class 3, respectively (Cohen's kappa [weighted] = 0.24). In both the OPG image and the PA radiograph datasets, class 1 correlation was most frequently observed with the first and second molars. The results demonstrated that neither periapical radiographs nor orthopantomograms are reliable for determining the exact relationship between the apex of the tooth root and the maxillary sinus floor. Periapical radiography is slightly more reliable than orthopantomography in determining this relationship.
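
    The weighted Cohen's kappa reported in studies like this one can be reproduced in a few lines. The sketch below uses linear weights and fabricated per-tooth class labels (0 = in sinus, 1 = against wall, 2 = not in sinus), not the study's data:

    ```python
    def linear_weighted_kappa(r1, r2, n_classes):
        """Cohen's kappa with linear disagreement weights.
        r1, r2: parallel lists of integer class labels in 0..n_classes-1."""
        n = len(r1)
        # observed confusion matrix as probabilities
        obs = [[0.0] * n_classes for _ in range(n_classes)]
        for a, b in zip(r1, r2):
            obs[a][b] += 1.0 / n
        p1 = [sum(obs[i]) for i in range(n_classes)]                      # row marginals
        p2 = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
        num = den = 0.0
        for i in range(n_classes):
            for j in range(n_classes):
                w = abs(i - j) / (n_classes - 1)                          # linear weight
                num += w * obs[i][j]                                      # observed disagreement
                den += w * p1[i] * p2[j]                                  # chance disagreement
        return 1.0 - num / den

    # Hypothetical per-tooth classifications by two modalities
    pa   = [0, 0, 1, 2, 2, 0, 1, 2, 1, 0]
    cbct = [0, 1, 1, 2, 0, 0, 2, 2, 1, 0]
    kappa = linear_weighted_kappa(pa, cbct, 3)
    ```

    Swapping the weight for `((i - j) / (n_classes - 1)) ** 2` gives the quadratic variant; scikit-learn's `cohen_kappa_score(..., weights='linear')` should agree with this implementation on the same labels.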

  7. A Reliability Evaluation System of Association Rules

    Chen, Jiangping; Feng, Wanshu; Luo, Minghai


    In mining association rules, evaluation of the rules is highly important because it directly affects the usability and applicability of the mining output. In this paper, the concept of reliability was imported into association rule evaluation. The reliability of association rules was defined as the degree to which the rules accord with the mined data set, measured at three levels: accuracy, completeness, and consistency of the rules. To show its effectiveness, the "accuracy-completeness-consistency" reliability evaluation system was applied to two extremely different data sets, namely, a basket simulation data set and a multi-source lightning data fusion. Results show that the reliability evaluation system works well on both the simulation data set and the actual problem. The three-dimensional reliability evaluation can effectively detect useless rules to be screened out and add missing rules, thereby improving the reliability of mining results. Furthermore, the proposed reliability evaluation system is applicable to many research fields; using it in analysis facilitates obtaining more accurate, complete, and consistent association rules.
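
    The accuracy dimension of such an evaluation is conventionally grounded in the support and confidence of a rule. A toy sketch, with invented baskets and an invented rule:

    ```python
    def support(transactions, itemset):
        """Fraction of transactions containing every item in itemset."""
        itemset = set(itemset)
        hits = sum(1 for t in transactions if itemset <= set(t))
        return hits / len(transactions)

    def confidence(transactions, antecedent, consequent):
        """Support of the whole rule divided by support of its antecedent."""
        return (support(transactions, set(antecedent) | set(consequent))
                / support(transactions, antecedent))

    baskets = [
        {"bread", "milk"},
        {"bread", "butter"},
        {"bread", "milk", "butter"},
        {"milk"},
    ]
    s = support(baskets, {"bread", "milk"})       # 2 of 4 baskets
    c = confidence(baskets, {"bread"}, {"milk"})  # 2 of 3 bread baskets
    ```

    A rule such as bread → milk is then judged against thresholds on these two quantities before any higher-level completeness or consistency check is applied.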

  8. Power electronics reliability analysis.

    Smith, Mark A.; Atcitty, Stanley


    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
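
    Deriving system reliability from component reliabilities through a fault tree reduces, for independent components, to AND/OR gate algebra over failure probabilities. The gate layout and the numbers below are hypothetical, not taken from the report:

    ```python
    def p_and(*probs):
        """AND gate: the output fails only if all inputs fail (independent)."""
        out = 1.0
        for p in probs:
            out *= p
        return out

    def p_or(*probs):
        """OR gate: the output fails if any input fails (independent)."""
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out

    # Hypothetical converter: redundant gate drivers (AND) in parallel,
    # combined with a DC-link capacitor and a controller at the top OR gate.
    p_driver, p_cap, p_ctrl = 0.02, 0.01, 0.005
    p_system_fail = p_or(p_and(p_driver, p_driver), p_cap, p_ctrl)
    reliability = 1.0 - p_system_fail
    ```

    Field maintenance data, when available, replaces the assumed component probabilities with measured failure rates per cause.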

  9. Network Consistent Data Association.

    Chakraborty, Anirban; Das, Abir; Roy-Chowdhury, Amit K


    Existing data association techniques mostly focus on matching pairs of data-point sets and then repeating this process along space-time to achieve long-term correspondences. However, in many problems such as person re-identification, a set of data-points may be observed at multiple spatio-temporal locations and/or by multiple agents in a network, and simply combining the local pairwise association results between sets of data-points often leads to inconsistencies over the global space-time horizon. In this paper, we propose a novel Network Consistent Data Association (NCDA) framework formulated as an optimization problem that not only maintains consistency in association results across the network, but also improves the pairwise data association accuracies. The proposed NCDA can be solved as a binary integer program leading to a globally optimal solution and is capable of handling the challenging data-association scenario where the number of data-points varies across different sets of instances in the network. We also present an online implementation of the NCDA method that can dynamically associate new observations to already observed data-points in an iterative fashion, while maintaining network consistency. We have tested both the batch and the online NCDA in two application areas, person re-identification and spatio-temporal cell tracking, and observed consistent and highly accurate data association results in all cases.

  10. A Revisit to Probability - Possibility Consistency Principles

    Mamoni Dhar


    In this article, our main intention is to highlight the fact that the links between probability and possibility established by different authors at different points in time, on the basis of some well-known consistency principles, cannot provide the desired result. The paper therefore discusses some prominent works on transformations between probability and possibility and suggests a new principle, because none of the existing principles yields a unique transformation. The suggested consistency principle would in turn replace those existing in the literature by providing a reliable estimate of the consistency between the two. Furthermore, some properties of the entropy of fuzzy numbers are also presented in this article.

  11. Experimental Investigation Related To Some Predicted Results Of Reliable High Frequency Radio Communication Links Between Benghazi-Libya And Cairo-Egypt.

    Mohamed Yousef Ahmed Abou-Hussein


    In this study, the central radio propagation laboratory (CRPL) method of ionospheric prediction of the National Bureau of Standards (NBS) in the USA was used in practical calculations of the optimal working frequencies for reliable high frequency (HF) radio communication links between Benghazi, Libya and Cairo, Egypt. The results were plotted as curves by computer. The computer was also used to measure the received signal level variation of the frequencies 11.980 MHz and 11.785 MHz, which were transmitted with powers of 250 kW and 100 kW respectively from the Arab Republic of Egypt Broadcasting station in Cairo, directed to the North Africa and South Europe regions. The measurements were taken during daytime for the winter (December, January and February) and summer (June, July and August) seasons.

  12. Reporting consistently on CSR

    Thomsen, Christa; Nielsen, Anne Ellerup


    This chapter first outlines theory and literature on CSR and stakeholder relations, focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, the chapter presents the results of a case study showing that companies use different and not necessarily consistent strategies for reporting on CSR. Finally, the implications for managerial practice are discussed. The chapter concludes by highlighting the value and awareness of the discourse and the discourse types adopted in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders on the other.

  13. Exon Array Analysis using re-defined probe sets results in reliable identification of alternatively spliced genes in non-small cell lung cancer

    Gröne Jörn


    Background: Treatment of non-small cell lung cancer with novel targeted therapies is a major unmet clinical need. Alternative splicing is a mechanism which generates diverse protein products and is of functional relevance in cancer. Results: In this study, a genome-wide analysis of the alteration of splicing patterns between lung cancer and normal lung tissue was performed. We generated an exon array data set derived from matched pairs of lung cancer and normal lung tissue including both the adenocarcinoma and the squamous cell carcinoma subtypes. An enhanced workflow was developed to reliably detect differential splicing in an exon array data set. In total, 330 genes were found to be differentially spliced in non-small cell lung cancer compared to normal lung tissue. Microarray findings were validated with independent laboratory methods for CLSTN1, FN1, KIAA1217, MYO18A, NCOR2, NUMB, SLK, SYNE2, TPM1 (in total, 10 events) and ADD3, which was analysed in depth. We achieved a high validation rate of 69%. Evidence was found that the activity of FOX2, the splicing factor shown to cause cancer-specific splicing patterns in breast and ovarian cancer, is not altered at the transcript level in several cancer types including lung cancer. Conclusions: This study demonstrates how alternatively spliced genes can reliably be identified in a cancer data set. Our findings underline that key processes of cancer progression in NSCLC are affected by alternative splicing, which can be exploited in the search for novel targeted therapies.

  14. Brief Report: Interrater Reliability of Clinical Diagnosis and DSM-IV Criteria for Autistic Disorder: Results of the DSM-IV Autism Field Trial.

    Klin, Ami; Lang, Jason; Cicchetti, Domenic V.; Volkmar, Fred R.


    This study examined the inter-rater reliability of clinician-assigned diagnosis of autism using or not using the criteria specified in the Diagnostic and Statistical Manual IV (DSM-IV). For experienced raters there was little difference in reliability in the two conditions. However, a clinically significant improvement in diagnostic reliability…

  15. Reliability Engineering

    Lazzaroni, Massimo


    This book gives a practical guide for designers and users in the Information and Communication Technology (ICT) context. In the first section, definitions of the fundamental terms according to the international standards are given. Some theoretical concepts and reliability models are then presented in Chapters 2 and 3; the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  16. How stable are quantitative sensory testing measurements over time? Report on 10-week reliability and agreement of results in healthy volunteers

    Nothnagel H


    Helen Nothnagel,1,2,* Christian Puta,1,3,* Thomas Lehmann,4 Philipp Baumbach,5 Martha B Menard,6,7 Brunhild Gabriel,1 Holger H W Gabriel,1 Thomas Weiss,8 Frauke Musial2 1Department of Sports Medicine and Health Promotion, Friedrich Schiller University, Jena, Germany; 2Department of Community Medicine, National Research Center in Complementary and Alternative Medicine, UiT, The Arctic University of Norway, Tromsø, Norway; 3Center for Interdisciplinary Prevention of Diseases Related to Professional Activities, 4Department of Medical Statistics, Computer Sciences and Documentation, Friedrich Schiller University, 5Department of Anesthesiology and Intensive Care Medicine, University Hospital Jena, Germany; 6Crocker Institute, Kiawah Island, SC, 7School of Integrative Medicine and Health Sciences, Saybrook University, Oakland, CA, USA; 8Department of Biological and Clinical Psychology, Friedrich Schiller University, Jena, Germany *These authors contributed equally to this work Background: Quantitative sensory testing (QST) is a diagnostic tool for the assessment of the somatosensory system. To establish QST as an outcome measure for clinical trials, the question of how similar the measurements are over time is crucial. Therefore, long-term reliability and limits of agreement of the standardized QST protocol of the German Research Network on Neuropathic Pain were tested. Methods: QST on the lower back and hand dorsum (dominant hand) was assessed twice in 22 healthy volunteers (10 males and 12 females; mean age: 46.6±13.0 years), with sessions separated by 10.0±2.9 weeks. All measurements were performed by one investigator. To investigate long-term reliability and agreement of QST, differences between the two measurements, correlation coefficients, intraclass correlation coefficients (ICCs), Bland–Altman plots (limits of agreement), and standard error of measurement were used. Results: Most parameters of the QST were reliable over 10 weeks in
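
    The Bland–Altman limits of agreement used in such reliability studies are simple to compute. A sketch with fabricated paired session measurements, not the study's QST data:

    ```python
    def bland_altman_limits(x1, x2):
        """Mean difference (bias) and 95% limits of agreement for paired data."""
        diffs = [a - b for a, b in zip(x1, x2)]
        n = len(diffs)
        bias = sum(diffs) / n
        # sample standard deviation of the differences
        sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Fabricated pressure-pain thresholds (kPa) from two sessions
    session1 = [310, 405, 290, 520, 365, 440]
    session2 = [300, 420, 300, 500, 370, 450]
    bias, lo, hi = bland_altman_limits(session1, session2)
    ```

    A bias near zero with narrow limits indicates that the two sessions agree; systematic drift shows up as a bias well away from zero.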

  17. Load Control System Reliability

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)


    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies, and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  18. The Rucio Consistency Service

    Serfon, Cedric; The ATLAS collaboration


    One of the biggest challenges with large-scale data management systems is to ensure consistency between the global file catalog and what is physically on all storage elements. To tackle this issue, the Rucio software used by the ATLAS Distributed Data Management system has been extended to automatically handle lost or unregistered files (aka dark data). This system automatically detects these inconsistencies and takes actions such as recovery or deletion of unneeded files in a central manner. In this talk, we present this system, explain its internals and give some results.

  19. Reliability of plantar pressure platforms.

    Hafer, Jocelyn F; Lenhoff, Mark W; Song, Jinsup; Jordan, Joanne M; Hannan, Marian T; Hillstrom, Howard J


    Plantar pressure measurement is common practice in many research and clinical protocols. While the accuracy of some plantar pressure measuring devices and methods for ensuring consistency in data collection on plantar pressure measuring devices have been reported, the reliability of different devices when testing the same individuals is not known. This study calculated intra-mat, intra-manufacturer, and inter-manufacturer reliability of plantar pressure parameters as well as the number of plantar pressure trials needed to reach a stable estimate of the mean for an individual. Twenty-two healthy adults completed ten walking trials across each of two Novel emed-x(®) and two Tekscan MatScan(®) plantar pressure measuring devices in a single visit. Intraclass correlation (ICC) was used to describe the agreement between values measured by different devices. All intra-platform reliability correlations were greater than 0.70. All inter-emed-x(®) reliability correlations were greater than 0.70. Inter-MatScan(®) reliability correlations were greater than 0.70 in 31 and 52 of 56 parameters when looking at a 10-trial average and a 5-trial average, respectively. Inter-manufacturer reliability including all four devices was greater than 0.70 for 52 and 56 of 56 parameters when looking at a 10-trial average and a 5-trial average, respectively. All parameters reached a value within 90% of an unbiased estimate of the mean within five trials. Overall, reliability results are encouraging for investigators and clinicians who may have plantar pressure data sets that include data collected on different devices.

  20. Reliability based design optimization: Formulations and methodologies

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed.
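
    The reliability constraints inside such an optimization are failure-probability estimates for a limit state g(X) >= 0. A crude Monte Carlo sketch follows; the limit-state function and distributions are invented for illustration, and practical RBDO would use far more efficient estimators (e.g. FORM) inside the optimization loop:

    ```python
    import random

    def failure_probability(limit_state, sample, n=100_000, seed=1):
        """Estimate P(g(X) < 0) by Monte Carlo over random realisations."""
        rng = random.Random(seed)
        failures = sum(1 for _ in range(n) if limit_state(*sample(rng)) < 0.0)
        return failures / n

    # Hypothetical limit state: capacity R minus load S, both normal.
    g = lambda r, s: r - s
    draw = lambda rng: (rng.gauss(10.0, 1.0), rng.gauss(6.0, 1.5))
    pf = failure_probability(g, draw, n=200_000)
    ```

    An optimizer then adjusts design variables (here, the mean capacity) until the estimated failure probability meets the target while cost is minimized.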

  1. Coronary bypass grafting using crossclamp fibrillation does not result in reliable reperfusion of the myocardium when the crossclamp is intermittently released: a prospective cohort study

    Wallis John


    Background: Cross-clamp fibrillation is a well-established method of performing coronary grafting, but its clinical effect on the myocardium is unknown. We sought to measure these effects clinically using the Khuri intramyocardial pH monitor. Methods: 50 episodes of cross-clamping were recorded in 16 patients who underwent CABG with cross-clamp fibrillation. An intramyocardial pH probe measured the level of acidosis in the anterior and posterior myocardium in real time. The pH at the start and end of each period of cross-clamping was recorded. Results: It became very apparent that the pH of some patients recovered quickly while others entirely failed to recover. The patients were therefore split into two groups according to whether the pH recovered to above 6.8 after the first cross-clamp release (N = 8 in each group). Initial pH was 7.133 (range 6.974–7.239). After the first period of cross-clamping the pH dropped to 6.381 (range 6.034–6.684). The pH in recoverers prior to the second cross-clamp application was 6.990 (range 6.808–7.222), compared to only 6.455 (range 6.200–6.737) in patients whose myocardium did not recover. Conclusion: Cross-clamp fibrillation does not result in reliable reperfusion of the myocardium between periods of cross-clamping.

  2. The OMERACT Psoriatic Arthritis Magnetic Resonance Imaging Score (PsAMRIS) is reliable and sensitive to change: results from an OMERACT workshop

    Bøyesen, Pernille; McQueen, Fiona M; Gandjbakhch, Frédérique;


    The aim of this multireader exercise was to assess the reliability and sensitivity to change of the psoriatic arthritis magnetic resonance imaging score (PsAMRIS) in PsA patients followed for 1 year.

  3. Retrocausation, Consistency, and the Bilking Paradox

    Dobyns, York H.


    Retrocausation seems to admit of time paradoxes in which events prevent themselves from occurring and thereby create a physical instance of the liar's paradox, an event which occurs iff it does not occur. The specific version in which a retrocausal event is used to trigger an intervention which prevents its own future cause is called the bilking paradox (the event is bilked of its cause). The analysis of Echeverria, Klinkhammer, and Thorne (EKT) suggests time paradoxes cannot arise even in the presence of retrocausation. Any self-contradictory event sequence will be replaced in reality by a closely related but noncontradictory sequence. The EKT analysis implies that attempts to create bilking must instead produce logically consistent sequences wherein the bilked event arises from alternative causes. Bilking a retrocausal information channel of limited reliability usually results only in failures of signaling. An exception applies when the bilking is conducted in response only to some of the signal values that can be carried on the channel. Theoretical analysis based on EKT predicts that, since some of the channel outcomes are not bilked, the channel is capable of transmitting data with its normal reliability, and the paradox-avoidance effects will instead suppress the outcomes that would lead to forbidden (bilked) transmissions. A recent parapsychological experiment by Bem displays a retrocausal information channel of sufficient reliability to test this theoretical model of physical reality's response to retrocausal effects. A modified version with partial bilking would provide a direct test of the generality of the EKT mechanism.

  4. Reliability and confirmatory factorial analysis of the IMPRAFE-126 with gaúcho practitioners of physical activities

    Marcos Alencar Abaide Balbinotti


    In this study, motivation is understood in the context of self-determination theory. This study aims to verify the internal consistency indices and confirmatory factorial validity of the IMPRAFE-126. A sample of 1,377 gaúcho practitioners of physical activities, of both sexes and aged between 13 and 83 years, was used. The results of the Cronbach's alpha indices (0.89 to 0.94) were satisfactory. The adequacy of the six-dimension model was tested and confirmatory construct validity was assumed for the general sample (x2/gl = 2.520; GFI = 0.859; AGFI = 0.854; RMSEA = 0.065) as well as for both sexes (masculine: x2/gl = 3.905; GFI = 0.885; AGFI = 0.881; RMSEA = 0.066; feminine: x2/gl = 4.337; GFI = 0.840; AGFI = 0.831; RMSEA = 0.068). These results indicate that the IMPRAFE-126 is a promising instrument that can be used by sports psychologists or physical educators, particularly those interested in assessing the motivation levels of athletes or practitioners of physical activity and sport in general. However, further validity, reliability and normative studies should be conducted so that they can be published in the near future.

  5. Is Magnification Consistent?

    Graney, Christopher M.


    Is the phenomenon of magnification by a converging lens inconsistent and therefore unreliable? Can a lens magnify one part of an object but not another? Physics teachers and even students familiar with basic optics would answer "no," yet many answer "yes." Numerous telescope users believe that magnification is not a reliable phenomenon in that it…

  6. Consistent model driven architecture

    Niepostyn, Stanisław J.


    The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  7. Development and Reliability of Items Measuring the Nonmedical Use of Prescription Drugs for the Youth Risk Behavior Survey: Results From an Initial Pilot Test

    Howard, Melissa M.; Weiler, Robert M.; Haddox, J. David


    Background: The purpose of this study was to develop and test the reliability of self-report survey items designed to monitor the nonmedical use of prescription drugs among adolescents. Methods: Eighteen nonmedical prescription drug items designed to be congruent with the substance abuse items in the US Centers for Disease Control and Prevention's…

  8. A diagnostic test for apraxia in stroke patients: internal consistency and diagnostic value.

    Heugten, C.M. van; Dekker, J.; Deelman, B.G.; Stehmann-Saris, F.C.; Kinebanian, A.


    The internal consistency and the diagnostic value of a test for apraxia in patients having had a stroke are presented. Results indicate that the items of the test form a strong and consistent scale: Cronbach's alpha as well as the results of a Mokken scale analysis present good reliability and good

  9. A Diagnostic Test for Apraxia in Stroke Patients : Internal consistency and diagnostic value

    van Heugten, C.M.; Dekker, J.; Deelman, B.G.; Stehmann-Saris, J.C; Kinebanian, A


    The internal consistency and the diagnostic value of a test for apraxia in patients having had a stroke are presented. Results indicate that the items of the test form a strong and consistent scale: Cronbach's alpha as well as the results of a Mokken scale analysis present good reliability and good

  10. Microelectronics Reliability


    ...convey any rights or permission to manufacture, use, or sell any patented invention that may relate to them. This report was cleared for public release. ...testing for reliability prediction of devices exhibiting multiple failure mechanisms. Also presented was an integrated accelerating and measuring ...

  11. No consistent bimetric gravity?

    Deser, S; Waldron, A


    We discuss the prospects for a consistent, nonlinear, partially massless (PM), gauge symmetry of bimetric gravity (BMG). Just as for single metric massive gravity, ultimate consistency of both BMG and the putative PM BMG theory relies crucially on this gauge symmetry. We argue, however, that it does not exist.

  12. Reliability of chemical analyses of water samples

    Beardon, R.


    Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of ongoing monitoring of the reliability of the selected laboratories. The following conclusions are based on two years' experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.


  14. Reliable Electronic Equipment

    N. A. Nayak


    Full Text Available The reliability aspects of electronic equipment are discussed. To obtain optimum results, close cooperation between the components engineer, the design engineer and the production engineer is suggested.

  15. Consistency of trace norm minimization

    Bach, Francis


    Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low-rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non-adaptive version is not fulfilled.
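    The estimator family in question penalizes the sum of singular values. A minimal sketch of its core computational step, singular value soft-thresholding (the proximal operator of the trace norm), in Python with NumPy; the matrix and threshold below are illustrative, not from the paper:

    ```python
    import numpy as np

    def prox_trace_norm(X, lam):
        """Proximal operator of lam * ||.||_* (trace/nuclear norm):
        soft-threshold the singular values of X by lam."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_shrunk = np.maximum(s - lam, 0.0)  # singular value soft-thresholding
        return (U * s_shrunk) @ Vt           # rescale columns of U, recompose

    # singular values below the threshold are zeroed, giving a low-rank estimate
    X = np.diag([3.0, 1.5, 0.2])
    Y = prox_trace_norm(X, lam=0.5)  # singular values become 2.5, 1.0, 0.0
    ```

    Iterating this step with a gradient step on the square loss (proximal gradient descent) yields the trace-norm-regularized estimators whose rank consistency the paper studies.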

  16. Prizes for consistency

    Hiscock, S.


    The importance of consistency in coal quality has become of increasing significance recently, with the current trend towards using coal from a range of sources. A significant development has been the swing in responsibilities for coal quality. The increasing demand for consistency in quality has led to a re-examination of where in the trade and transport chain the quality should be assessed and where further upgrading of inspection and preparation facilities are required. Changes are in progress throughout the whole coal transport chain which will improve consistency of delivered coal quality. These include installation of beneficiation plant at coal mines, export terminals, and on the premises of end users. It is suggested that one of the keys to success for the coal industry will be the ability to provide coal of a consistent quality.

  17. Consistent sets contradict

    Kent, A


    In the consistent histories formulation of quantum theory, the probabilistic predictions and retrodictions made from observed data depend on the choice of a consistent set. We show that this freedom allows the formalism to retrodict several contradictory propositions which correspond to orthogonal commuting projections and which all have probability one. We also show that the formalism makes contradictory probability one predictions when applied to generalised time-symmetric quantum mechanics.

  18. The comparison of wavelet- and Fourier-based electromyographic indices of back muscle fatigue during dynamic contractions: validity and reliability results.

    da Silva, R A; Larivière, C; Arsenault, A B; Nadeau, S; Plamondon, A


    The purpose of this study was to compare the electromyographic (EMG) fatigue indices computed from the short-time Fourier transform (STFT) and the wavelet transform (WAV) by analyzing their criterion validity and test-retest reliability. The effect of averaging spectral estimates within and between repeated contractions (cycles) on EMG fatigue indices was also demonstrated. Thirty-one healthy subjects performed trunk flexion-extension cycles until exhaustion on a Biodex dynamometer. The load was determined theoretically as twice the L5-S1 moment produced by the trunk mass. To assess reliability, 10 subjects performed the same experimental protocol after a two-week interval. EMG signals were recorded bilaterally with 12 pairs of electrodes placed on the back muscles (at the L4, L3, L1 and T10 levels), as well as on the gluteus maximus and biceps femoris. The endurance time and perceived muscle fatigue (Borg CR-10 scale) were used as fatigue criteria. EMG signals were processed using the STFT and WAV to extract global (e.g., median frequency and instantaneous median frequency, respectively) or local (e.g., intensity contained in 8 frequency bands) information from the power spectrum. The slope values of these variables over time, obtained from regression analyses, were retained as EMG fatigue indices. EMG fatigue indices (STFT vs. WAV) were not significantly different within each muscle, had a variable association (Pearson's r range: 0.06 to 0.68) with our fatigue criteria, and showed comparable reliability (intra-class correlation range: 0.00 to 0.88), although they varied between muscles. The effect of averaging, within and between cycles, contributed to the strong association between EMG fatigue indices computed from the STFT and WAV. As EMG spectral indices of muscle fatigue, the conclusion is that both transforms carry essentially the same information.
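    The fatigue index described here, the regression slope of the median frequency over time, can be sketched in a few lines. This is a generic illustration under assumed inputs (a one-sided power spectrum per time window), not the authors' processing pipeline:

    ```python
    import numpy as np

    def median_frequency(freqs, psd):
        """Frequency that splits the power spectrum into two equal-energy halves."""
        cum = np.cumsum(psd)
        return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

    def fatigue_slope(times, mdf):
        """EMG fatigue index: least-squares slope of median frequency over time.
        A negative slope (spectral compression) indicates developing fatigue."""
        slope, _intercept = np.polyfit(times, mdf, 1)
        return slope

    # hypothetical median frequencies drifting from 100 Hz down to 70 Hz
    slope = fatigue_slope([0.0, 1.0, 2.0, 3.0], [100.0, 90.0, 80.0, 70.0])  # -10
    ```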

  19. [Reliability of the results of ultrasonic hemodynamic recording (Doppler effect) in the diagnosis of cerebral ischemia of carotid origin].

    Perrin, G; Goutelle, A; Pierluca, P; Chacornac, R; Allegre, G E


    The Doppler ultrasound diagnosis of carotid artery stenosis (asymmetrical systolic and diastolic flows; elevated resistance index: ratio of flow pulse amplitude to systolic and diastolic values; flow reversal in the ophthalmic artery) is compared, in 52 patients, to the clinical, angiographic (40 patients) and surgical findings and to the peroperative measurement of intra-arterial pressure and flow (30 patients). Its reliability is proven as a guide for angiographic exploration and for postoperative monitoring, but it is restricted to great vessels (the cervical carotid artery) and is unable to detect an ulcerated plaque without stenosis.

  20. A Comprehensive Consistency Test Method Based on Improved Grey Relational Analysis for Simulation Results

    胡玉伟; 马萍; 杨明; 王子才


    To improve the quality of consistency testing, a comprehensive consistency test method based on improved grey relational analysis theory is proposed. An improved grey relational grade model is constructed in terms of the shape of, and distances between, data series. Considering that the work process of an actual system usually consists of multiple stages, and that the work status in each stage has a different impact on the final results, a comprehensive relational grade is defined and the weights of indicators are determined through the analytic hierarchy process method. Finally, the dynamic consistency of the whole simulation data set can be assessed by the weighted comprehensive relational grade. The proposed method not only makes full use of finite data, but also considers the actual working situation of the system. Moreover, it places no demands on the volume and distribution of the data and is convenient to implement on a computer. The feasibility and effectiveness of the proposed method have been verified by the consistency test for the rail-gun discharge current, which is an essential indicator.
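    For orientation, the classical grey relational grade that the improved model builds on (Deng's formulation) can be sketched as follows; the paper's improved shape/distance terms and AHP weighting are not reproduced here, and the series and distinguishing coefficient are illustrative:

    ```python
    import numpy as np

    def grey_relational_grade(reference, series, zeta=0.5):
        """Deng's grey relational grade between a reference sequence (e.g.
        measured data) and a comparison sequence (e.g. simulation output).
        zeta is the distinguishing coefficient, conventionally 0.5."""
        delta = np.abs(np.asarray(reference, float) - np.asarray(series, float))
        d_min, d_max = delta.min(), delta.max()
        if d_max == 0.0:  # identical sequences: perfect relational grade
            return 1.0
        coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
        return float(coeff.mean())

    measured = [1.0, 2.0, 3.0]
    simulated = [1.0, 2.0, 4.0]
    grade = grey_relational_grade(measured, simulated)  # closer to 1 = more consistent
    ```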

  1. Reliability prediction techniques

    Whittaker, B.; Worthington, B.; Lord, J.F.; Pinkard, D.


    The paper demonstrates the feasibility of applying reliability assessment techniques to mining equipment. A number of techniques are identified and described, and examples of their use in assessing mining equipment are given. These techniques include reliability prediction, failure analysis, design audit, maintainability, availability and life cycle costing. Specific conclusions regarding the usefulness of each technique are outlined. The choice of techniques depends upon both the type of equipment being assessed and its stage of development, with numerical prediction best suited to electronic equipment and fault analysis and design audit suited to mechanical equipment. Reliability assessments involve much detailed and time-consuming work, but it has been demonstrated that the resulting reliability improvements lead to savings in service costs which more than offset the cost of the evaluation.

  2. A Magnetic Consistency Relation

    Jain, Rajeev Kumar


    If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the Cosmic Microwave Background anisotropies and Large Scale Structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.

  3. Consistency in Distributed Systems

    Kemme, Bettina; Ramalingam, Ganesan; Schiper, André; Shapiro, Marc; Vaswani, Kapil


    In distributed systems, there exists a fundamental trade-off between data consistency, availability, and the ability to tolerate failures. This trade-off has significant implications for the design of the entire distributed computing infrastructure, such as storage systems, compilers and runtimes, application development frameworks and programming languages. Unfortunately, it also has significant, and poorly understood, implications for the designers and developers of en...

  4. Geometrically Consistent Mesh Modification

    Bonito, A.


    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  5. Consistent wind Facilitates Vection

    Masaki Ogawa


    Full Text Available We examined whether a consistent haptic cue suggesting forward self-motion facilitated vection. We used a bladeless fan (Dyson AM01) providing a wind of constant strength and direction (wind speed 6.37 m/s) to the subjects' faces, with the visual stimuli visible through the fan. We used an optic flow of expansion or contraction created by positioning 16,000 dots at random inside a simulated cube (length 20 m) and moving the observer's viewpoint to simulate forward or backward self-motion at 16 m/s. We tested three fan conditions: normal operation, normal operation with the fan reversed (i.e., no wind), and no operation (no wind and no sound). Vection was facilitated by the wind (shorter latency, longer duration, and larger magnitude values) with the expansion stimuli. The fan noise did not facilitate vection. The wind neither facilitated nor inhibited vection with the contraction stimuli, perhaps because a headwind is not consistent with backward self-motion. We speculate that consistency between modalities is a key factor in facilitating vection.

  6. Hybrid reliability model for fatigue reliability analysis of steel bridges

    曹珊珊; 雷俊卿


    A kind of hybrid reliability model is presented to solve the fatigue reliability problems of steel bridges. The cumulative damage model is one kind of model used in fatigue reliability analysis; the parameter characteristics of the model can be described as probabilistic and interval. The two-stage hybrid reliability model is given with a theoretical foundation and a solving algorithm for hybrid reliability problems. The theoretical foundation is established by the consistency relationships of the interval reliability model and the probability reliability model with normally distributed variables. The solving process combines the definition of the interval reliability index with the probabilistic algorithm. With consideration of the parameter characteristics of the S-N curve, the cumulative damage model with hybrid variables is given based on standards from different countries. Lastly, a case of the steel structure in the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO standard.
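    The deterministic core of such cumulative damage models is Palmgren-Miner summation against an S-N curve. A minimal sketch, with an illustrative Basquin-type curve N = C / S^m whose constants C and m are assumptions, not values from the paper or any standard:

    ```python
    def miner_damage(stress_blocks, C=1e12, m=3.0):
        """Palmgren-Miner cumulative damage D = sum(n_i / N_i): n_i cycles are
        applied at stress range S_i, and the allowable cycles come from a
        Basquin-type S-N curve N_i = C / S_i**m. Failure is predicted at D >= 1."""
        return sum(n / (C / s**m) for s, n in stress_blocks)

    # two hypothetical loading blocks: (stress range in MPa, applied cycles)
    D = miner_damage([(100.0, 2e5), (50.0, 1e6)])  # 0.2 + 0.125 = 0.325
    ```

    The hybrid model in the paper treats such parameters as probabilistic or interval quantities rather than the fixed values used in this sketch.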

  7. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.


    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical foundation of the reliability and availability analyses and of sections devoted to the development of the WTB reliability models, as well as a description of the features of the database and software developed. The project comprises analysis of WTBs NM 600/44, 600/48, 750/44 and 750/48, all of which have...

  8. Consistency in the World Wide Web

    Thomsen, Jakob Grauenkjær

    Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how ... the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision.

  9. Consistency argued students of fluid

    Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma


    Problem solving for physics concepts through consistency of argumentation can improve students' thinking skills, which is important in science. The study aims to assess the consistency of students' argumentation on fluid material. The population of this study comprises college students at PGRI Madiun, UIN Sunan Kalijaga Yogyakarta and Lampung University. Using cluster random sampling, a sample of 145 students was obtained. The study used a descriptive survey method. Data were obtained through a multiple-choice test and reasoned interviews. The fluid problems were modified from [9] and [1]. The results show average argumentation consistency of 4.85% for correct consistency, 29.93% for incorrect consistency, and 65.23% for inconsistency. These data point to a lack of understanding of the fluid material, where full consistency of argumentation would ideally support an expanded understanding of the concept. The results of the study serve as a reference for making improvements in future studies to obtain a positive change in the consistency of argumentation.

  10. Nuclear weapon reliability evaluation methodology

    Wright, D.L. [Sandia National Labs., Albuquerque, NM (United States)


    This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon's stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the opportunities for review that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

  11. Grid reliability

    Saiz, P; Rocha, R; Andreeva, J


    We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO specific, therefore it would be very easy to deploy them for any other VO. Our main goal is basically to improve the reliability of the GRID. The main idea is to discover the different problems that have happened as soon as possible, and inform the responsible parties. Since we study the jobs and transfers issued by real users, we see the same problems that users see. As a matter of fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, in case of a job failure, resubmitting the job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...

  12. Consistency of Random Survival Forests.

    Ishwaran, Hemant; Kogalur, Udaya B


    We prove uniform consistency of Random Survival Forests (RSF), a newly introduced forest ensemble learner for analysis of right-censored survival data. Consistency is proven under general splitting rules, bootstrapping, and random selection of variables-that is, under true implementation of the methodology. Under this setting we show that the forest ensemble survival function converges uniformly to the true population survival function. To prove this result we make one key assumption regarding the feature space: we assume that all variables are factors. Doing so ensures that the feature space has finite cardinality and enables us to exploit counting process theory and the uniform consistency of the Kaplan-Meier survival function.
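    The Kaplan-Meier survival function that the forest ensemble is shown to converge to has a compact product-limit form, S(t) = prod over event times t_i <= t of (1 - d_i / n_i). A minimal sketch with made-up data, not tied to the paper's setting:

    ```python
    import numpy as np

    def kaplan_meier(times, events):
        """Product-limit (Kaplan-Meier) estimate of the survival function.
        times: observed times; events: 1 if the event occurred, 0 if censored.
        Returns the distinct event times and S(t) evaluated just after each."""
        times = np.asarray(times, float)
        events = np.asarray(events, int)
        event_times = np.unique(times[events == 1])
        surv, s = [], 1.0
        for t in event_times:
            n_at_risk = np.sum(times >= t)             # still under observation
            d = np.sum((times == t) & (events == 1))   # events at this time
            s *= 1.0 - d / n_at_risk
            surv.append(s)
        return event_times, np.array(surv)

    # four subjects: events at t=1, 2, 4; one censored at t=3
    t, s = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])  # s = [0.75, 0.5, 0.0]
    ```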

  13. Reliability of nucleic acid amplification methods for detection of Chlamydia trachomatis in urine: results of the first international collaborative quality control study among 96 laboratories

    R.P.A.J. Verkooyen (Roel); G.T. Noordhoek; P.E. Klapper; J. Reid; J. Schirm; G.M. Cleator; M. Ieven; G. Hoddevik


    The first European Quality Control Concerted Action study was organized to assess the ability of laboratories to detect Chlamydia trachomatis in a panel of urine samples by nucleic acid amplification tests (NATs). The panel consisted of lyophilized urine samples, includ

  14. Infanticide and moral consistency.

    McMahan, Jeff


    The aim of this essay is to show that there are no easy options for those who are disturbed by the suggestion that infanticide may on occasion be morally permissible. The belief that infanticide is always wrong is doubtfully compatible with a range of widely shared moral beliefs that underlie various commonly accepted practices. Any set of beliefs about the morality of abortion, infanticide and the killing of animals that is internally consistent and even minimally credible will therefore unavoidably contain some beliefs that are counterintuitive.

  15. Consistency Analysis of Network Traffic Repositories

    Lastdrager, Elmer; Pras, Aiko


    Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for var

  16. Consistency of Network Traffic Repositories: An Overview

    Lastdrager, E.; Pras, A.


    Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for vario

  17. The rating reliability calculator

    Solomon David J


    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
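    The Spearman-Brown prophecy formula mentioned above is a one-liner: the predicted reliability of the average of k parallel ratings given the single-rating reliability r. A sketch for illustration (not the utility's PHP source, which uses Ebel's algorithm for the incomplete-data reliability itself):

    ```python
    def spearman_brown(r, k):
        """Predicted reliability of an average of k parallel ratings, given the
        reliability r of a single rating: r_k = k*r / (1 + (k-1)*r)."""
        return k * r / (1.0 + (k - 1.0) * r)

    # averaging three ratings whose single-rating reliability is 0.60
    r3 = spearman_brown(0.60, 3)  # 1.8 / 2.2 = 9/11, about 0.82
    ```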

  18. When is holography consistent?

    McInnes, Brett (National University of Singapore, Singapore); Ong, Yen Chin (Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden)


    Holographic duality relates two radically different kinds of theory: one with gravity, one without. The very existence of such an equivalence imposes strong consistency conditions which are, in the nature of the case, hard to satisfy. Recently a particularly deep condition of this kind, relating the minimum of a probe brane action to a gravitational bulk action (in a Euclidean formulation), has been recognized; and the question arises as to the circumstances under which it, and its Lorentzian counterpart, is satisfied. We discuss the fact that there are physically interesting situations in which one or both versions might, in principle, not be satisfied. These arise in two distinct circumstances: first, when the bulk is not an Einstein manifold and, second, in the presence of angular momentum. Focusing on the application of holography to the quark–gluon plasma (of the various forms arising in the early Universe and in heavy-ion collisions), we find that these potential violations never actually occur. This suggests that the consistency condition is a “law of physics” expressing a particular aspect of holography.

  19. Reply to "Comment on 'A Self-Consistent Model of the Interacting Ring Current Ions and Electromagnetic Ion Cyclotron Waves, Initial Results: Waves and Precipitation Fluxes' and 'Self-Consistent Model of the Magnetospheric Ring Current and Propagating Electromagnetic Ion Cyclotron Waves: Waves in Multi-Ion Magnetosphere' by Khazanov et al."

    Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.; Kozyra, J. W.


    It is well known that the effects of electromagnetic ion cyclotron (EMIC) waves on ring current (RC) ion and radiation belt (RB) electron dynamics strongly depend on such particle/wave characteristics as the phase-space distribution function, frequency, wave-normal angle, wave energy, and the form of the wave spectral energy density. The consequence is that accurate modeling of EMIC waves and RC particles requires robust inclusion of the interdependent dynamics of wave growth/damping, wave propagation, and particles. Such a self-consistent model is being progressively developed by Khazanov et al. [2002, 2006, 2007]. This model is based on a system of coupled kinetic equations for the RC and the EMIC wave power spectral density, along with the ray tracing equations. Thorne and Horne [2007] (hereafter referred to as TH2007) call the Khazanov et al. [2002, 2006] results into question in their Comment. The points in contention can be summarized as follows. TH2007 claim that: (1) "the important damping of waves by thermal heavy ions is completely ignored", and Landau damping during resonant interaction with thermal electrons is not included in our model; (2) EMIC wave damping due to RC O+ is not included in our simulation; (3) nonlinear processes limiting EMIC wave amplitude are not included in our model; (4) growth of the background fluctuations to a physically significant amplitude "must occur during a single transit of the unstable region" with subsequent damping below bi-ion latitudes, and consequently "the bounce-averaged wave kinetic equation employed in the code contains a physically erroneous assumption". Our reply will address each of these points as well as other criticisms mentioned in the Comment. TH2007 focus on two of our papers that are separated by four years. Significant progress in the self-consistent treatment of the RC-EMIC wave system has been achieved during those years. The paper by Khazanov et al. [2006] presents the latest version of our model, and in

  20. Factorial validity and reliability of the General Health Questionnaire (GHQ-12) in the Brazilian physician population

    Valdiney V. Gouveia


    Full Text Available The 12-item General Health Questionnaire (GHQ-12) is a widely used screening instrument. One- and two-factor structures have been identified in some countries; in Brazil, the best factor structure is still unclear. This study aimed at establishing its factorial validity and reliability, testing the one-factor and two-factor models. The participants were 7,512 Brazilian physicians, who answered the GHQ-12 and demographic questions. Unrotated (one-factor) and rotated (two-factor) structures of the GHQ-12 were extracted by principal component analysis. Confirmatory factor analyses (ML) were used to compare the one- and two-factor solutions. The two-factor model fitted the data better than the one-factor one. The two factors were depression and social dysfunction, and they were directly correlated with one another; both showed adequate reliability coefficients. The two-factor model is remarkably adequate, showing better fit indices, although it is acceptable to admit a common factor, which could be defined as psychological distress.

  1. Consistent quantum measurements

    Griffiths, Robert B.


    In response to recent criticisms by Okon and Sudarsky, various aspects of the consistent histories (CH) resolution of the quantum measurement problem(s) are discussed using a simple Stern-Gerlach device, and compared with the alternative approaches to the measurement problem provided by spontaneous localization (GRW), Bohmian mechanics, many worlds, and standard (textbook) quantum mechanics. Among these CH is unique in solving the second measurement problem: inferring from the measurement outcome a property of the measured system at a time before the measurement took place, as is done routinely by experimental physicists. The main respect in which CH differs from other quantum interpretations is in allowing multiple stochastic descriptions of a given measurement situation, from which one (or more) can be selected on the basis of its utility. This requires abandoning a principle (termed unicity), central to classical physics, that at any instant of time there is only a single correct description of the world.

  2. Delimiting Coefficient α from Internal Consistency and Unidimensionality

    Sijtsma, Klaas


    I discuss the contribution by Davenport, Davison, Liou, & Love (2015) in which they relate reliability represented by coefficient α to formal definitions of internal consistency and unidimensionality, both proposed by Cronbach (1951). I argue that coefficient α is a lower bound to reliability and that the concepts of internal consistency and…
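Both quantities at issue in these records are easy to compute directly; the sketch below (illustrative only, not code from either paper) shows Cronbach's alpha for a score matrix and the standardised response mean (SRM) used as the responsiveness index:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def standardised_response_mean(before, after) -> float:
    """SRM = mean change / SD of change."""
    change = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    return change.mean() / change.std(ddof=1)
```

For perfectly parallel items alpha reaches 1.0; an SRM near 0.6, as in the responsiveness study above, indicates moderate responsiveness.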

  3. Gravitation, Causality, and Quantum Consistency

    Hertzberg, Mark P


    We examine the role of consistency with causality and quantum mechanics in determining the properties of gravitation. We begin by constructing two different classes of interacting theories of massless spin 2 particles -- gravitons. One involves coupling the graviton with the lowest number of derivatives to matter, the other involves coupling the graviton with higher derivatives to matter, making use of the linearized Riemann tensor. The first class requires an infinite tower of terms for consistency, which is known to lead uniquely to general relativity. The second class only requires a finite number of terms for consistency, which appears as a new class of theories of massless spin 2. We recap the causal consistency of general relativity and show how this fails in the second class for the special case of coupling to photons, exploiting related calculations in the literature. In an upcoming publication [1] this result is generalized to a much broader set of theories. Then, as a causal modification of general ...

  4. When Is Holography Consistent?

    McInnes, Brett


    Holographic duality relates two radically different kinds of theory: one with gravity, one without. The very existence of such an equivalence imposes strong consistency conditions which are, in the nature of the case, hard to satisfy. Recently a particularly deep condition of this kind, relating the minimum of a probe brane action to a gravitational bulk action (in a Euclidean formulation), has been recognised; and the question arises as to the circumstances under which it, and its Lorentzian counterpart, are satisfied. We discuss the fact that there are physically interesting situations in which one or both versions might, in principle, not be satisfied. These arise in two distinct circumstances: first, when the bulk is not an Einstein manifold, and, second, in the presence of angular momentum. Focusing on the application of holography to the quark-gluon plasma (of the various forms arising in the early Universe and in heavy-ion collisions), we find that these potential violations never actually occur...

  5. Synthesis of Reliable Telecommunication Networks

    Dusan Trstensky


    In many applications, the network designer may need to synthesise a reliable telecommunication network. Assume that a network, denoted Gn,e, has n nodes and e edges, and that the operational probability of each edge is known. The system reliability of the network is defined to be the probability that every pair of nodes can communicate with each other. The network synthesis problem considered in this paper is to find a network G*n,e that maximises system reliability over the class of all networks, for the classes of networks Gn,n-1, Gn,m and Gn,n+1 respectively. In addition, an upper bound on maximum reliability for networks with n nodes and e edges (e > n+2) is derived in terms of node degree. Computational experiments for the reliability upper bound are also presented. The results show that the proposed reliability upper bound is effective.
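The system reliability defined above (every node pair can communicate, i.e. all-terminal reliability) has no simple closed form for general graphs, but it can be estimated by Monte Carlo simulation. The sketch below is illustrative, not the paper's method, and assumes a single edge operational probability p:

```python
import random

def all_terminal_reliability(n, edges, p, trials=20000, seed=1):
    """Estimate the probability that the surviving edges connect all n nodes."""
    rng = random.Random(seed)

    def find(parent, x):           # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    connected = 0
    for _ in range(trials):
        parent = list(range(n))
        for u, v in edges:
            if rng.random() < p:   # edge survives with probability p
                parent[find(parent, u)] = find(parent, v)
        if len({find(parent, i) for i in range(n)}) == 1:
            connected += 1
    return connected / trials
```

A synthesis procedure would call such an estimator (or an analytic bound) for each candidate graph in Gn,e and keep the maximiser.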

  6. Inter-laboratory consistency and variability in the buccal micronucleus cytome assay depends on biomarker scored and laboratory experience: results from the HUMNxl international inter-laboratory scoring exercise.

    Bolognesi, Claudia; Knasmueller, Siegfried; Nersesyan, Armen; Roggieri, Paola; Ceppi, Marcello; Bruzzone, Marco; Blaszczyk, Ewa; Mielzynska-Svach, Danuta; Milic, Mirta; Bonassi, Stefano; Benedetti, Danieli; Da Silva, Juliana; Toledo, Raphael; Salvadori, Daisy Maria Fávero; Groot de Restrepo, Helena; Filipic, Metka; Hercog, Klara; Aktas, Ayça; Burgaz, Sema; Kundi, Michael; Grummt, Tamara; Thomas, Philip; Hor, Maryam; Escudero-Fung, Maria; Holland, Nina; Fenech, Michael


    The buccal micronucleus cytome (BMNcyt) assay in uncultured exfoliated epithelial cells from oral mucosa is widely applied in biomonitoring human exposures to genotoxic agents and is also proposed as a suitable test for prescreening and follow-up of precancerous oral lesions. The main limitation of the assay is the large variability observed in the baseline values of micronuclei (MNi) and other nuclear anomalies, mainly related to different scoring criteria. The aim of this international collaborative study, involving laboratories with different levels of experience, was to evaluate the inter- and intra-laboratory variations in the BMNcyt parameters, using recently implemented guidelines, in scoring cells from the same pooled samples obtained from healthy subjects (control group) and from cancer patients undergoing radiotherapy (treated group). The results indicate that all laboratories correctly discriminated samples from the two groups by a significant increase of micronucleus (MN) and nuclear bud (NBUD) frequencies and differentiated binucleated (BN) cells, associated with the exposure to ionizing radiation. The experience of the laboratories was shown to play an important role in the identification of the different cell types and nuclear anomalies. MN frequency in differentiated mononucleated (MONO) and BN cells showed the greatest consistency among the laboratories, and low variability was also detected in the frequencies of MONO and BN cells. A larger variability was observed in classifying the different cell types, indicating subjectivity in the interpretation of some of the scoring criteria, while reproducibility of the results between scoring sessions was very good. An inter-laboratory calibration exercise is strongly recommended before starting studies with the BMNcyt assay involving multiple research centers.

  7. Ultra reliability at NASA

    Shapiro, Andrew A.


    Ultra reliable systems are critical to NASA, particularly as consideration is being given to extended lunar missions and manned missions to Mars. NASA has formulated a program designed to improve the reliability of NASA systems; the long-term goal is to ultimately improve NASA systems by an order of magnitude. The approach outlined in this presentation involves the steps used in developing a strategic plan to achieve the long-term objective of ultra reliability. Consideration is given to: complex systems, hardware (including aircraft, aerospace craft and launch vehicles), software, human interactions, long-life missions, infrastructure development, and cross-cutting technologies. Several NASA-wide workshops have been held, identifying issues for reliability improvement and providing mitigation strategies for these issues. In addition to representation from all of the NASA centers, experts from government (NASA and non-NASA), universities and industry participated. Highlights of a strategic plan, which is being developed using the results from these workshops, will be presented.

  8. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.


    Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is the wireless technology suitable for short distances. A wireless sensor network (WSN), in turn, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet technique in sensor nodes limits network depth and node placement. Introducing a scatternet removes these network restrictions but lacks reliability in data transmission, and as the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability factors of scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the scatternet sensor network has lower packet loss than the existing system, even in a congested environment, making it suitable for surveillance applications.

  9. Time consistent portfolio management

    Ekeland, Ivar; Pirvu, Traian A


    This paper considers the portfolio management problem of optimal investment, consumption and life insurance. We are concerned with the time inconsistency of optimal strategies. Natural assumptions, like different discount rates for consumption and life insurance, or a time-varying aggregation rate, lead to time inconsistency. As a consequence, the optimal strategies are not implementable. We focus on hyperbolic discounting, which has received much attention lately, especially in the area of behavioural finance. Following [10], we consider the resulting problem as a leader-follower game between successive selves, each of whom can commit for an infinitesimally small amount of time. We then define policies as subgame perfect equilibrium strategies. Policies are characterized by an integral equation which is shown to have a solution. Although we work within the CRRA preference paradigm, our results can be extended to more general preferences as long as the equations admit solutions. Numerical simulations reveal that for the ...

  10. Early Stage Software Reliability Estimation with Stochastic Reward Nets

    ZHAO Jing; LIU Hong-wei; CUI Gang; YANG Xiao-zong


    This paper presents software reliability modeling issues at the early stage of software development for a fault-tolerant software management system. Based on Stochastic Reward Nets, an effective hierarchical model of a fault-tolerant software management system is put forward, and an approach based on system transient performance analysis is adopted. A quantitative approach for software reliability analysis is given. The results show its usefulness for early-stage software reliability modeling when failure data are not available.

  11. Continuous Reliability Enhancement for Wind (CREW) database :

    Hines, Valerie Ann-Peters; Ogilvie, Alistair B.; Bond, Cody R.


    To benchmark the current U.S. wind turbine fleet reliability performance and identify the major contributors to component-level failures and other downtime events, the Department of Energy funded the development of the Continuous Reliability Enhancement for Wind (CREW) database by Sandia National Laboratories. This report is the third annual Wind Plant Reliability Benchmark, to publicly report on CREW findings for the wind industry. The CREW database uses both high-resolution Supervisory Control and Data Acquisition (SCADA) data from operating plants and Strategic Power Systems ORAPWind (Operational Reliability Analysis Program for Wind) data, which consist of downtime and reserve event records and daily summaries of various time categories for each turbine. Together, these data are used as inputs into CREW's reliability modeling. The results presented here include: the primary CREW Benchmark statistics (operational availability, utilization, capacity factor, mean time between events, and mean downtime); time accounting from an availability perspective; time accounting in terms of the combination of wind speed and generation levels; power curve analysis; and the top system and component contributors to unavailability.
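The primary benchmark statistics listed above follow directly from a plant's period length and its downtime event records; a minimal sketch (illustrative, not CREW's actual implementation):

```python
def benchmark_stats(period_hours: float, downtime_hours: list) -> dict:
    """Availability-style statistics from a list of downtime event durations."""
    total_down = sum(downtime_hours)
    n_events = len(downtime_hours)
    return {
        "operational_availability": (period_hours - total_down) / period_hours,
        "mean_time_between_events": period_hours / n_events,
        "mean_downtime": total_down / n_events,
    }
```

For example, a 100-hour period with two 5-hour outages yields 90% operational availability, a 50-hour mean time between events and a 5-hour mean downtime.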

  12. Delta-Reliability

    Eugster, P.; Guerraoui, R.; Kouznetsov, P.


    This paper presents a new, non-binary measure of the reliability of broadcast algorithms, called Delta-Reliability. This measure quantifies the reliability of practical broadcast algorithms that, on the one hand, were devised with some form of reliability in mind, but, on the other hand, are not considered reliable according to the "traditional" notion of broadcast reliability [HT94]. Our specification of Delta-Reliability suggests a further step towards bridging the gap between theory and...

  13. Reliability computation from reliability block diagrams

    Chelson, P. O.; Eckstein, E. Y.


    Computer program computes system reliability for very general class of reliability block diagrams. Four factors are considered in calculating probability of system success: active block redundancy, standby block redundancy, partial redundancy, and presence of equivalent blocks in the diagram.
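In the simplest cases, the factors above reduce to the classical series/parallel combination rules, which can be sketched as follows (an illustrative sketch, not the program described in the record):

```python
def series(*reliabilities):
    """All blocks must succeed: R = R1 * R2 * ... * Rn."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(*reliabilities):
    """Active redundancy, any one block suffices: R = 1 - (1-R1)(1-R2)...(1-Rn)."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q
```

Nested calls evaluate a block diagram, e.g. `series(0.99, parallel(0.9, 0.9))` for a unit in series with an actively redundant pair.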

  14. Response and Reliability Problems of Dynamic Systems

    Nielsen, Søren R. K.

    The present thesis consists of selected parts of the work performed by the author on stochastic dynamics and reliability theory of dynamically excited structures, primarily during the period 1986-1996.

  15. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Antczak, K; Wilczyńska, U


    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model includes: 1. Establishment of a reference value, based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and particular laboratories on the basis of analysis of variance, Student's t-test and the differences test.
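Step 2 of the model, selecting laboratories by their deviation from the reference value, can be sketched as follows (an illustrative simplification; the relative tolerance is a hypothetical parameter, not the authors' criterion):

```python
def select_laboratories(lab_means: dict, reference: float, rel_tol: float = 0.15) -> list:
    """Keep laboratories whose reported mean deviates from the reference
    value by at most rel_tol, expressed as a fraction of the reference."""
    return sorted(lab for lab, mean in lab_means.items()
                  if abs(mean - reference) / reference <= rel_tol)
```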

  16. The molecular characterization of a depurinated trial DNA sample can be a model to understand the reliability of the results in forensic genetics.

    Fattorini, Paolo; Previderè, Carlo; Sorçaburu-Cigliero, Solange; Marrubini, Giorgio; Alù, Milena; Barbaro, Anna M; Carnevali, Eugenia; Carracedo, Angel; Casarino, Lucia; Consoloni, Lara; Corato, Silvia; Domenici, Ranieri; Fabbri, Matteo; Giardina, Emiliano; Grignani, Pierangela; Baldassarra, Stefania Lonero; Moratti, Marco; Nicolin, Vanessa; Pelotti, Susi; Piccinini, Andrea; Pitacco, Paola; Plizza, Laura; Resta, Nicoletta; Ricci, Ugo; Robino, Carlo; Salvaderi, Luca; Scarnicci, Francesca; Schneider, Peter M; Seidita, Gregorio; Trizzino, Lucia; Turchi, Chiara; Turrina, Stefania; Vatta, Paolo; Vecchiotti, Carla; Verzeletti, Andrea; De Stefano, Francesco


    The role of DNA damage in PCR processivity/fidelity is a relevant topic in the molecular investigation of aged/forensic samples. In order to reproduce one of the most common lesions occurring in postmortem tissues, a new protocol based on aqueous hydrolysis of DNA was developed in vitro. Twenty-five forensic laboratories were then provided with 3.0 μg of a trial sample (TS) exhibiting, on average, the loss of 1 base in 20, and a molecular weight below 300 bp. Each participating laboratory could freely choose any combination of methods leading to the quantification and to the definition of the STR profile of the TS, documenting each step of the analytical approaches selected. The results of the TS quantification by qPCR showed significant differences in the amount of DNA recorded by the participating laboratories using different commercial kits. These data show that only DNA quantification "relative" to the kit (probe) used is possible, since the "absolute" amount of DNA is inversely related to the length of the target region (r² = 0.891). In addition, our results indicate that the absence of a shared, stable and certified quantitative reference standard is also likely involved. STR profiling was carried out selecting five different commercial kits and amplifying the TS for a total of 212 multiplex PCRs, thus representing an interesting overview of the different analytical protocols used by the participating laboratories. Nine laboratories decided to characterize the TS using a single kit, with a number of amplifications varying from 2 to 12, obtaining only partial STR profiles. Most of the participants determined partial or full profiles using a combination of two or more kits, with a number of amplifications varying from 2 to 27. The performance of each laboratory was described in terms of the number of correctly characterized loci, dropped-out markers, unreliable genotypes, and incorrect results. The incidence of unreliable and incorrect

  17. Reliability design and assessment of a micro-probe using the results of a tensile test of a beryllium-copper alloy thin film

    Park, Jun-Hyub; Shin, Myung-Soo


    This paper describes the results of tensile tests of a beryllium-copper (BeCu) alloy thin film and the application of the results to the design of a probe. The copper alloy films were fabricated by electroplating. To obtain the tensile characteristics of the film, dog-bone type specimens were fabricated by the etching method. The tensile tests were performed on the specimens using a test machine developed by the authors. The BeCu alloy has an elastic modulus of 119 GPa, a 0.2% offset yield strength of 1078 MPa and an ultimate tensile strength of 1108 MPa. The design and manufacture of a smaller probe require higher pad density and smaller pad-pitch chips, and should be effective in high-frequency testing. For the design of a new micro-probe, we investigated several design parameters that may cause problems, such as the contact force and life, using the tensile properties and the design-of-experiments method in conjunction with finite element analysis. The optimal dimensions of the probe were found using the response surface method. The probe with optimal dimensions was manufactured by a precision press process. It was verified through compression tests and life tests that the manufactured probe satisfied the life, contact force and overdrive requirements.

  18. Reliability engineering theory and practice

    Birolini, Alessandro


    This book shows how to build in, evaluate, and demonstrate the reliability and availability of components, equipment and systems. It presents the state-of-the-art of reliability engineering, both in theory and practice, and is based on the author's more than 30 years' experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. This final edition extends and replaces all previous editions. New, in particular, are a strategy to mitigate incomplete coverage, a comprehensive introduction to human reliability with design guidelines and new models, and a refinement of reliability allocation, design guidelines for maintainability, and concepts related to regenerative stochastic processes. The set of problems for homework has been extended. Methods and tools are presented in a way that allows them to be tailored to different reliability requirement levels and used for safety analysis. Because of the Appendice...

  19. Reliability of procedures used for scaling loudness

    Jesteadt, Walt; Joshi, Suyash Narendra


    …(ME, MP, CLS; MP, ME, CLS; CLS, ME, MP; CLS, MP, ME), and the order was reversed on the second visit. This design made it possible to compare the reliability of estimates of the slope of the loudness function across procedures in the same listeners. The ME data were well fitted by an inflected exponential (INEX) function, but a modified power law was used to obtain slope estimates for both ME and MP. ME and CLS were more reliable than MP. CLS results were consistent across groups, but ME and MP results differed across groups in a way that suggested influence of experience with CLS. Although CLS results were the most reproducible, they do not provide direct information about the slope of the loudness function because the numbers assigned to CLS categories are arbitrary. This problem can be corrected by using data from the other procedures to assign numbers that are proportional to loudness…

  20. Patch tests and HIV: comparing reliability of results

    Sabrina de Stefani


    BACKGROUND: Allergic contact dermatitis in HIV-positive patients has not been thoroughly studied (there are only case reports). Patch tests are the gold standard for the diagnosis of this type of allergic reaction and have not been scientifically assessed in such patients. OBJECTIVE: To evaluate the applicability of patch tests in HIV-positive patients. METHODS: A cross-sectional, controlled and descriptive study. A group of 16 HIV-positive patients was compared to a group of 32 patients with unknown HIV serology with respect to test positivity. Bivariate statistical analysis was performed with a significance level of p<0.05. RESULTS: Seven patients (43.75%) in the HIV-positive group and 18 (56.25%) in the unknown-serology group had positive patch tests. CONCLUSIONS: Regardless of immunodeficiency, the findings of this study suggest that specific immunologic memory and the ability to respond positively to the tests may remain active. Therefore, this standard and important diagnostic method for allergic contact dermatitis is valid when applied to a group of HIV patients.

  1. Expert system aids reliability

    Johnson, A.T. [Tennessee Gas Pipeline, Houston, TX (United States)]


    Quality and reliability are key requirements in the energy transmission industry. Tennessee Gas Co., a division of El Paso Energy, has applied Gensym's G2, an object-oriented expert system programming language, as a standard tool for maintaining and improving quality and reliability in pipeline operation. Tennessee created a small team of gas controllers and engineers to develop a Proactive Controller's Assistant (ProCA) that provides recommendations for operating the pipeline more efficiently, reliably and safely. The controllers' pipeline operating knowledge is recreated in G2 in the form of rules and procedures in ProCA. Two G2 programmers supporting the Gas Control Room add information to the ProCA knowledge base daily. The result is a dynamic, constantly improving system that supports not only the pipeline controllers in their operations, but also the measurement and communications departments' requests for special studies. The Proactive Controller's Assistant development focuses on the following areas: alarm management; pipeline efficiency; reliability; fuel efficiency; and controller development.

  2. Historical Evolution of Global and Regional Surface Air Temperature Simulated by FGOALS-s2 and FGOALS-g2:How Reliable Are the Model Results?

    ZHOU Tianjun; SONG Fengfei; CHEN Xiaolong


    In order to assess the performance of two versions of the IAP/LASG Flexible Global Ocean-Atmosphere-Land System (FGOALS) model, simulated changes in surface air temperature (SAT), from natural and anthropogenic forcings, were compared to observations for the period 1850-2005 at global, hemispheric, continental and regional scales. The global and hemispheric averages of SAT and their land and ocean components during 1850-2005 were well reproduced by FGOALS-g2, as evidenced by significant correlation coefficients and small RMSEs. The significant positive correlations were determined firstly by the warming trends, and secondly by interdecadal fluctuations. The abilities of the models to reproduce interdecadal SAT variations were demonstrated by both wavelet analysis and significant positive correlations for detrended data. The observed land-sea thermal contrast change was poorly simulated. The major weakness of FGOALS-s2 was an exaggerated warming response to anthropogenic forcing, with the simulation showing results that were far removed from observations prior to the 1950s. The observations featured warming trends (1906-2005) of 0.71, 0.68 and 0.79°C (100 yr)⁻¹ for the global, Northern and Southern Hemispheric averages, which were overestimated by FGOALS-s2 [1.42, 1.52 and 1.13°C (100 yr)⁻¹] but underestimated by FGOALS-g2 [0.69, 0.68 and 0.73°C (100 yr)⁻¹]. The polar amplification of the warming trend was exaggerated in FGOALS-s2 but weakly reproduced in FGOALS-g2. The stronger response of FGOALS-s2 to anthropogenic forcing was caused by strong sea-ice albedo feedback and water vapor feedback. Examination of model results in 15 selected subcontinental-scale regions showed reasonable performance for FGOALS-g2 over most regions. However, the observed warming trends were overestimated by FGOALS-s2 in most regions. Over East Asia, the meridional gradient of the warming trend simulated by FGOALS-s2 (FGOALS-g2) was stronger (weaker) than observed.
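The warming trends quoted in °C per century are ordinary least-squares slopes scaled by 100; a minimal sketch (illustrative, not the study's analysis code):

```python
import numpy as np

def trend_per_century(years, temps):
    """Least-squares linear trend of a temperature series, in degrees per 100 yr."""
    slope = np.polyfit(np.asarray(years, float), np.asarray(temps, float), 1)[0]
    return slope * 100.0
```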

  3. Quasiparticle self-consistent GW theory.

    van Schilfgaarde, M; Kotani, Takao; Faleev, S


    In past decades the scientific community has been looking for a reliable first-principles method to predict the electronic structure of solids with high accuracy. Here we present an approach which we call the quasiparticle self-consistent approximation. It is based on a kind of self-consistent perturbation theory, where the self-consistency is constructed to minimize the perturbation. We apply it to selections from different classes of materials, including alkali metals, semiconductors, wide band gap insulators, transition metals, transition metal oxides, magnetic insulators, and rare earth compounds. Apart from some mild exceptions, the properties are very well described, particularly in weakly correlated cases. Self-consistency dramatically improves agreement with experiment, and is sometimes essential. Discrepancies with experiment are systematic, and can be explained in terms of approximations made.

  4. On the Reliability of Optimization Results for Trigeneration Systems in Buildings, in the Presence of Price Uncertainties and Erroneous Load Estimation

    Antonio Piacentino


    Cogeneration and trigeneration plants are widely recognized as promising technologies for increasing energy efficiency in buildings. However, their overall potential is scarcely exploited, due to the difficulties in achieving economic viability and the risk of investment related to uncertainties in future energy loads and prices. Several stochastic optimization models have been proposed in the literature to account for uncertainties, but these instruments share a common reliance on user-defined probability functions for each stochastic parameter. Since such functions are hard to predict, this paper analyses the influence of erroneous estimation of the uncertain energy loads and prices on the optimal plant design and operation. With reference to a hotel building, a number of realistic scenarios is developed, exploring the most frequent errors occurring in the estimation of energy loads and prices. Then, profit-oriented optimizations are performed for the examined scenarios by means of a deterministic mixed integer linear programming algorithm. From a comparison of the achieved results, it emerges that: (i) plant profitability is prevalently influenced by the average "spark spread" (i.e., the ratio between electricity and fuel prices) and, secondarily, by the shape of the daily price profiles; (ii) the "optimal sizes" of the main components are scarcely influenced by the daily load profiles, while they are more strictly related to the average "power to heat" and "power to cooling" ratios of the building.

  5. VLSI Reliability in Europe

    Verweij, Jan F.


    Several issues regarding VLSI reliability research in Europe are discussed. Organizations involved in stimulating activities on reliability by exchanging information or supporting research programs are described. Within one such program, ESPRIT, a technical interest group on IC reliability was...

  6. Consistence of Network Filtering Rules

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian


    Inconsistency among firewall/VPN (Virtual Private Network) rules imposes a huge maintenance cost. With the development of multinational companies, SOHO offices and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables, whether stand-alone or networked, will grow in geometric progression accordingly. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and lay a theoretical foundation for intelligent management of rule tables. In this paper, a formalization of host rules and network rules for automatic rule validation, based on set theory, is proposed, and a rule validation scheme is defined. The analysis results show the superior performance of the methods and demonstrate their potential for intelligent management based on rule tables.

  7. Measuring Service Reliability Using Automatic Vehicle Location Data

    Zhenliang Ma


    Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate to approximate passengers' experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely, performance disaggregation and capturing passengers' perspectives on reliability. A Gaussian mixture model based method is applied to disaggregate the performance data. Based on the mixture distribution, a reliability buffer time (RBT) measure is proposed from the passengers' perspective. A set of expected reliability buffer time measures is developed for operators by combining RBTs at different spatial-temporal levels. The average and the latest trip duration measures are proposed for passengers, who can use them to choose a service mode and determine the departure time. Using empirical data from the automatic vehicle location system in Brisbane, Australia, the existence of mixture service states is verified and the advantage of the mixture distribution model in fitting travel time profiles is demonstrated. Numerical experiments validate that the proposed reliability measure quantifies service reliability consistently, while conventional measures may provide inconsistent results. Potential applications for operators and passengers are also illustrated, including reliability improvement and trip planning.
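The buffer time idea, the extra time beyond a typical trip that a passenger must budget, can be sketched with empirical percentiles (a simplification: the paper fits a Gaussian mixture rather than using raw percentiles):

```python
import numpy as np

def reliability_buffer_time(travel_times, upper=95):
    """Buffer time: upper percentile minus median of observed travel times."""
    t = np.asarray(travel_times, dtype=float)
    return np.percentile(t, upper) - np.percentile(t, 50)
```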

  8. Predicting Reliability of Tactical Network Using RBFNN

    WANG Xiao-kai; HOU Chao-zhen


    A description of the reliability evaluation of tactical networks is given, which reflects not only the unreliability of nodes and links but also the network topological structure. On the basis of this description, a reliability prediction model and its algorithms, based on the radial basis function neural network (RBFNN), are put forward for the tactical network. This model captures the non-linear mapping between the network topological structure, the node and link reliabilities, and the reliability of the network. Simulation results prove the effectiveness of this method in reliability and connectivity prediction for tactical networks.

  9. Construct Validity and Reliability of the Ethical Behavior Rating Scale.

    Hill, Gloria; Swanson, H. Lee


    Results of factor and correlational analyses of the Ethical Behavior Rating Scale (EBRS) are reported. The test-retest method and internal consistency estimates yielded reliability coefficients. Construct validity was determined by correlating the EBRS with items from the Ethical Reasoning Inventory. The EBRS reflects the behavioral aspects of…

  10. Reliability and construction control

    Sherif S. AbdelSalam


    Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  11. [Selection of a statistical model for evaluation of the reliability of the results of toxicological analyses. I. Discussion on selected statistical models for evaluation of the systems of control of the results of toxicological analyses].

    Antczak, K; Wilczyńska, U


    Two statistical models for evaluating the results of toxicological studies are presented. Model I, after R. Hoschek and H. J. Schittke (2), involves: 1. Elimination of values deviating from most results, by Grubbs' method (2). 2. Analysis of the differences between the results obtained by the participants of the action and the tentatively assumed value. 3. Evaluation of significant differences between the reference value and the average value for a given series of measurements. 4. Thorough evaluation of laboratories based on the evaluation coefficient fx. In Model II, after Keppler et al., the median is assumed as the criterion for evaluating the results. Individual evaluation of laboratories was performed on the basis of: 1. An adjusted t-test. 2. A linear regression test.

  12. Reliability Generalization: "Lapsus Linguae"

    Smith, Julie M.


    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  13. Stable functional networks exhibit consistent timing in the human brain.

    Chapeton, Julio I; Inati, Sara K; Zaghloul, Kareem A


    Despite many advances in the study of large-scale human functional networks, the question of timing, stability, and direction of communication between cortical regions has not been fully addressed. At the cellular level, neuronal communication occurs through axons and dendrites, and the time required for such communication is well defined and preserved. At larger spatial scales, however, the relationship between timing, direction, and communication between brain regions is less clear. Here, we use a measure of effective connectivity to identify connections between brain regions that exhibit communication with consistent timing. We hypothesized that if two brain regions are communicating, then knowledge of the activity in one region should allow an external observer to better predict activity in the other region, and that such communication involves a consistent time delay. We examine this question using intracranial electroencephalography captured from nine human participants with medically refractory epilepsy. We use a coupling measure based on time-lagged mutual information to identify effective connections between brain regions that exhibit a statistically significant increase in average mutual information at a consistent time delay. These identified connections result in sparse, directed functional networks that are stable over minutes, hours, and days. Notably, the time delays associated with these connections are also highly preserved over multiple time scales. We characterize the anatomic locations of these connections, and find that the propagation of activity exhibits a preferred posterior to anterior temporal lobe direction, consistent across participants. Moreover, networks constructed from connections that reliably exhibit consistent timing between anatomic regions demonstrate features of a small-world architecture, with many reliable connections between anatomically neighbouring regions and few long range connections. Together, our results demonstrate…
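    The time-lagged mutual information coupling described in this record can be sketched with a simple histogram estimator. This is a minimal illustration, not the authors' implementation; the synthetic signals and the 7-sample propagation delay are hypothetical:

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of mutual information I(X;Y) in bits."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of X
        py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    def lagged_mi(x, y, max_lag):
        """I(x(t); y(t+lag)) for lags 0..max_lag; a consistent peak at a
        nonzero lag suggests directed communication from x to y."""
        n = len(x)
        return [mutual_information(x[:n - lag], y[lag:])
                for lag in range(max_lag + 1)]

    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)
    delay = 7                                 # hypothetical delay (samples)
    y = np.roll(x, delay) + 0.3 * rng.standard_normal(5000)  # y lags x

    mi = lagged_mi(x, y, max_lag=15)
    print("peak lag:", int(np.argmax(mi)))
    ```

    A statistically significant, stable peak at a fixed lag is the kind of evidence the study uses to declare an effective connection.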

  14. Reliability estimation using kriging metamodel

    Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)


    In this study, a new method for reliability estimation using a kriging metamodel is proposed. A kriging metamodel can be determined by an appropriate sampling range and number of sampling points because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. A first kriging metamodel is built from widely ranged sampling points, and the Advanced First Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. Then, a second kriging metamodel is constructed using additional sampling points with an updated sampling range, and Monte-Carlo Simulation (MCS) is applied to it to evaluate the reliability. The proposed method is applied to numerical examples and the results are almost equal to the reference reliability.
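    The final Monte-Carlo step can be sketched as follows. As a hedge, a closed-form limit-state function stands in for the fitted kriging predictor, and the input distributions and tolerance are illustrative, not from the paper:

    ```python
    import numpy as np

    # Monte-Carlo reliability estimation on a limit-state function.
    # Here a closed-form g(x) stands in for the kriging metamodel; in
    # practice g would be the surrogate's predictor evaluated at samples.
    def g(x1, x2):
        return x1 + x2 - 3.0        # failure when g < 0

    rng = np.random.default_rng(42)
    n = 200_000
    x1 = rng.normal(2.0, 0.5, n)    # hypothetical input distributions
    x2 = rng.normal(2.0, 0.5, n)

    pf = np.mean(g(x1, x2) < 0)     # estimated probability of failure
    print(f"Pf ~ {pf:.4f}, reliability ~ {1 - pf:.4f}")
    ```

    Because the surrogate is cheap to evaluate, very large MCS sample counts are affordable, which is the practical advantage over running MCS on the original simulation model.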

  15. Reliability-Based Code Calibration

    Faber, M.H.; Sørensen, John Dalsgaard


    The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes....

  16. Reshaping the Science of Reliability with the Entropy Function

    Paolo Rocchi


    Full Text Available The present paper revolves around two points. First, we observe a certain parallel between the reliability of systems and the progressive disorder of thermodynamical systems, and we import the notion of reversibility/irreversibility into the reliability domain. Second, we note that reliability theory is a very active area of research which has not yet become a mature discipline. This is because the majority of researchers adopt inductive logic instead of the deductive logic typical of mature scientific sectors. The deductive approach in the reliability domain was inaugurated by Gnedenko. We mean to continue Gnedenko's work, and we use the Boltzmann-like entropy to pursue this objective. This paper condenses the papers published in the past decade which illustrate the calculus of the Boltzmann-like entropy. It is demonstrated how every result complies with deductive logic and is consistent with Gnedenko's achievements.

  17. Overcoming some limitations of imprecise reliability models

    Kozine, Igor; Krymsky, Victor


    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time...... to failure which is in reality a rather arbitrary value. The practical meaning of models of this kind is brought into question. We suggest an approach that overcomes the issue of having to impose an upper bound on time to failure and makes the calculated lower and upper reliability measures more precise....... The main assumption is that the failure rate is bounded. The Lagrange method is used to solve the non-linear program. Finally, an example is provided....

  18. β2-1 Fructan supplementation alters host immune responses in a manner consistent with increased exposure to microbial components: results from a double-blinded, randomised, cross-over study in healthy adults.

    Clarke, Sandra T; Green-Johnson, Julia M; Brooks, Stephen P J; Ramdath, D Dan; Bercik, Premysl; Avila, Christian; Inglis, G Douglas; Green, Judy; Yanke, L Jay; Selinger, L Brent; Kalmokoff, Martin


    β2-1 Fructans are purported to improve health by stimulating growth of colonic bifidobacteria, increasing host resistance to pathogens and stimulating the immune system. However, in healthy adults, the benefits of supplementation remain undefined. Adults (thirteen men, seventeen women) participated in a double-blinded, placebo-controlled, randomised, cross-over study consisting of two 28-d treatments separated by a 14-d washout period. Subjects' regular diets were supplemented with β2-1 fructan or placebo (maltodextrin) at 3×5 g/d. Fasting blood and 1-d faecal collections were obtained at the beginning and at the end of each phase. Blood was analysed for clinical, biochemical and immunological variables. Determinations of well-being and general health, gastrointestinal (GI) symptoms, regularity, faecal SCFA content, residual faecal β2-1 fructans and faecal bifidobacteria content were undertaken. β2-1 Fructan supplementation had no effect on blood lipid or cholesterol concentrations or on circulating lymphocyte and macrophage numbers, but significantly increased serum lipopolysaccharide, faecal SCFA, faecal bifidobacteria and indigestion. With respect to immune function, β2-1 fructan supplementation increased serum IL-4, circulating percentages of CD282+/TLR2+ myeloid dendritic cells and ex vivo responsiveness to a toll-like receptor 2 agonist. β2-1 Fructans also decreased serum IL-10, but did not affect C-reactive protein or serum/faecal Ig concentrations. No differences in host well-being were associated with either treatment, although the self-reported incidence of GI symptoms and headaches increased during the β2-1 fructan phase. Although β2-1 fructan supplementation increased faecal bifidobacteria, this change was not directly related to any of the determined host parameters.

  19. Current Status Of Velocity Field Surveys: A Consistency Check

    Sarkar, D; Watkins, R; Sarkar, Devdeep; Feldman, Hume A.


    We present a statistical analysis comparing the bulk-flow measurements for six recent peculiar velocity surveys, namely, ENEAR, SFI, RFGC, SBF and the Mark III singles and group catalogs. We study whether the bulk-flow estimates are consistent with each other and construct the full three-dimensional bulk-flow vectors. The method we discuss could be used to test the consistency of all velocity field surveys. We show that although these surveys differ in their geometry and measurement errors, their bulk-flow vectors are expected to be highly correlated and in fact show impressive agreement in all cases. Our results suggest that even though the surveys we study target galaxies of different morphology and use different distance measures, they all reliably reflect the same underlying large-scale flow.

  20. Low body weight and type of protease inhibitor predict discontinuation and treatment-limiting adverse drug reactions among HIV-infected patients starting a protease inhibitor regimen: consistent results from a randomized trial and an observational cohort

    Kirk, O; Gerstoft, J; Pedersen, C;


    OBJECTIVES: To assess predictors for discontinuation and treatment-limiting adverse drug reactions (TLADR) among patients starting their first protease inhibitor (PI). METHODS: Data on patients starting a PI regimen (indinavir, ritonavir, ritonavir/saquinavir and saquinavir hard gel...... was associated with a three- to sixfold higher risk of TLADR relative to other PI regimens. Very similar results were documented in RAS [RH for body weight was 1.18 (1.07-1.29)]. CONCLUSIONS: Nearly half of the patients stopped treatment with the initial PI, most commonly as a result of adverse drug reactions...... risk factors for treatment discontinuation and TLADR in both groups. In OBC, the risk of developing TLADR increased by 12% per 5 kg lower body weight when starting the PI regimen [the relative hazard (RH) was 1.12 (95% confidence interval: 1.05-1.19) per 5 kg lighter], and starting ritonavir...

  1. Assuring reliability program effectiveness.

    Ball, L. W.


    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  2. The Accelerator Reliability Forum

    Lüdeke, Andreas; Giachino, R


    High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.

  3. Low body weight and type of protease inhibitor predict discontinuation and treatment-limiting adverse drug reactions among HIV-infected patients starting a protease inhibitor regimen: consistent results from a randomized trial and an observational cohort

    Kirk, O; Gerstoft, J; Pedersen, C


    therapy within less than 2 years. In both populations TLADR were the most common reason for discontinuation. The incidence of TLADR in RAS was: 8.5 (indinavir), 66.0 (ritonavir), 15.6 (saquinavir hard gel) per 100 person-years of follow-up (P Body weight and type of PI initiated were independent...... risk factors for treatment discontinuation and TLADR in both groups. In OBC, the risk of developing TLADR increased by 12% per 5 kg lower body weight when starting the PI regimen [the relative hazard (RH) was 1.12 (95% confidence interval: 1.05-1.19) per 5 kg lighter], and starting ritonavir...... was associated with a three- to sixfold higher risk of TLADR relative to other PI regimens. Very similar results were documented in RAS [RH for body weight was 1.18 (1.07-1.29)]. CONCLUSIONS: Nearly half of the patients stopped treatment with the initial PI, most commonly as a result of adverse drug reactions...

  4. Enlightenment on Computer Network Reliability From Transportation Network Reliability

    Hu Wenjun; Zhou Xizhao


    Referring to transportation network reliability problem, five new computer network reliability definitions are proposed and discussed. They are computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability and computer network potential reliability. Finally strategies are suggested to enhance network reliability.

  5. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    Kozine, I.; Christensen, P.; Winther-Jensen, M.


    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical foundation of the reliability and availability analyses and of sections devoted to the development of the WTB reliability models as well as a description of the features of the database and software developed. The project comprises analysis of WTBs NM 600/44, 600/48, 750/44 and 750/48, all of which have similar safety systems. The database was established with Microsoft Access Database Management System, the software for reliability and availability assessments was created with Visual Basic. (au)

  6. Reliability Assessment of Wind Turbines

    Sørensen, John Dalsgaard


    Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure...... but manufactured in series production based on many component tests, some prototype tests and zeroseries wind turbines. These characteristics influence the reliability assessment where focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when...... comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  7. Reliability assessment of Wind turbines

    Sørensen, John Dalsgaard


    Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure...... but manufactured in series production based on many component tests, some prototype tests and zeroseries wind turbines. These characteristics influence the reliability assessment where focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when...... comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  8. Human Reliability Program Overview

    Bodin, Michael


    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  9. Consistency of EEG source localization and connectivity estimates.

    Mahjoory, Keyvan; Nikulin, Vadim V; Botrel, Loïc; Linkenkaer-Hansen, Klaus; Fato, Marco M; Haufe, Stefan


    As the EEG inverse problem does not have a unique solution, the sources reconstructed from EEG and their connectivity properties depend on forward and inverse modeling parameters such as the choice of an anatomical template and electrical model, prior assumptions on the sources, and further implementational details. In order to use source connectivity analysis as a reliable research tool, there is a need for stability across a wider range of standard estimation routines. Using resting state EEG recordings of N=65 participants acquired within two studies, we present the first comprehensive assessment of the consistency of EEG source localization and functional/effective connectivity metrics across two anatomical templates (ICBM152 and Colin27), three electrical models (BEM, FEM and spherical harmonics expansions), three inverse methods (WMNE, eLORETA and LCMV), and three software implementations (Brainstorm, Fieldtrip and our own toolbox). Source localizations were found to be more stable across reconstruction pipelines than subsequent estimations of functional connectivity, while effective connectivity estimates were the least consistent. All results were relatively unaffected by the choice of the electrical head model, while the choice of the inverse method and source imaging package induced a considerable variability. In particular, a relatively strong difference was found between LCMV beamformer solutions on one hand and eLORETA/WMNE distributed inverse solutions on the other hand. We also observed a gradual decrease of consistency when results are compared between studies, within individual participants, and between individual participants. In order to provide reliable findings in the face of the observed variability, additional simulations involving interacting brain sources are required. Meanwhile, we encourage verification of the obtained results using more than one source imaging procedure. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Too Reliable to Be True? Response Bias as a Potential Source of Inflation in Paper-and-Pencil Questionnaire Reliability

    Eyal Peer


    Full Text Available When respondents answer paper-and-pencil (PP) questionnaires, they sometimes modify their responses to correspond to previously answered items. As a result, this response bias might artificially inflate the reliability of PP questionnaires. We compared the internal consistency of PP questionnaires to computerized questionnaires that presented a different number of items on a computer screen simultaneously. Study 1 showed that a PP questionnaire's internal consistency was higher than that of the same questionnaire presented on a computer screen with one, two or four questions per screen. Study 2 replicated these findings to show that internal consistency was also relatively high when all questions were shown on one screen. This suggests that the differences found in Study 1 were not due to the difference in presentation medium. Thus, this paper suggests that reliability measures of PP questionnaires might be inflated because of a response bias resulting from participants cross-checking their answers against ones given to previous questions.
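    The internal consistency measure at issue here is Cronbach's alpha, which can be computed directly from an item-score matrix. A minimal numpy sketch with simulated, hypothetical item scores (one latent trait plus item-level noise):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(3)
    trait = rng.normal(size=300)                      # latent trait
    # five hypothetical items = trait + independent noise
    scores = trait[:, None] + 0.8 * rng.normal(size=(300, 5))
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```

    Any response bias that makes answers to later items echo earlier ones raises the inter-item covariance, and therefore alpha, without the underlying trait being measured any better, which is the inflation mechanism the paper describes.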

  11. Reliable Design Versus Trust

    Berg, Melanie; LaBel, Kenneth A.


    This presentation focuses on reliability and trust for the users portion of the FPGA design flow. It is assumed that the manufacturer prior to hand-off to the user tests FPGA internal components. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  12. High SNR Consistent Compressive Sensing

    Kallummil, Sreejith; Kalyani, Sheetal


    High signal to noise ratio (SNR) consistency of model selection criteria in linear regression models has attracted a lot of attention recently. However, most of the existing literature on high SNR consistency deals with model order selection. Further, the limited literature available on the high SNR consistency of subset selection procedures (SSPs) is applicable to linear regression with full rank measurement matrices only. Hence, the performance of SSPs used in underdetermined linear models ...

  13. The Duke University Religion Index (DUREL): validation and reliability of the Farsi version.

    Hafizi, Sina; Memari, Amir Hossein; Pakrah, Mohammad; Mohebi, Farnam; Saghazadeh, Amene; Koenig, Harold G


    This study examined the validation and reliability of the Farsi version of the Duke University Religion Index (FDUREL), a brief measure designed to evaluate the primary dimensions of religiosity. The study was conducted in two phases. In the first phase, after translation of the original version of DUREL by using standard forward-backward translation, the FDUREL was administered to 427 medical students at different training levels. Reliability of the FDUREL was assessed by internal consistency and test-retest reliability. Principal components factor analysis was employed to assess the construct validity of the measure. In the second phase, 557 medical students were asked to fill out the FDUREL and Hoge Intrinsic Religiosity Scale to examine concurrent validity. The FDUREL was unidimensional and had good internal consistency and test-retest reliability. Results suggest that the FDUREL is a reliable and valid measure of religiosity in Farsi-speaking populations.

  14. Software Reliability Experimentation and Control

    Kai-Yuan Cai


    This paper classifies software research as theoretical, experimental, and engineering research, and is mainly concerned with experimental research, focusing on software reliability experimentation and control. The state of the art of experimental or empirical studies is reviewed. A new experimentation methodology is proposed, which is largely oriented toward theory discovery. Several unexpected results of experimental studies are presented to justify the importance of software reliability experimentation and control. Finally, a few topics that deserve future investigation are identified.

  15. Viking Lander reliability program

    Pilny, M. J.


    The Viking Lander reliability program is reviewed with attention given to the development of the reliability program requirements, reliability program management, documents evaluation, failure modes evaluation, production variation control, failure reporting and correction, and the parts program. Lander hardware failures which have occurred during the mission are listed.

  16. Inter-Observer Reliability of DSM-5 Substance Use Disorders*

    Denis, Cécile M.; Gelernter, Joel; Hart, Amy B.; Kranzler, Henry R.


    Aims Although studies have examined the impact of changes made in DSM-5 on the estimated prevalence of substance use disorder (SUD) diagnoses, there is limited evidence of the reliability of DSM-5 SUDs. We evaluated the inter-observer reliability of four DSM-5 SUDs in a sample in which we had previously evaluated the reliability of DSM-IV diagnoses, allowing us to compare the two systems. Methods Two different interviewers each assessed 173 subjects over a 2-week period using the Semi-Structured Assessment for Drug Dependence and Alcoholism (SSADDA). Using the percent agreement and kappa (κ) coefficient, we examined the reliability of DSM-5 lifetime alcohol, opioid, cocaine, and cannabis use disorders, which we compared to that of SSADDA-derived DSM-IV SUD diagnoses. We also assessed the effect of additional lifetime SUD and lifetime mood or anxiety disorder diagnoses on the reliability of the DSM-5 SUD diagnoses. Results Reliability was good to excellent for the four disorders, with κ values ranging from 0.65 to 0.94. Agreement was consistently lower for SUDs of mild severity than for moderate or severe disorders. DSM-5 SUD diagnoses showed greater reliability than DSM-IV diagnoses of abuse or dependence or dependence only. Co-occurring SUD and lifetime mood or anxiety disorders exerted a modest effect on the reliability of the DSM-5 SUD diagnoses. Conclusions For alcohol, opioid, cocaine and cannabis use disorders, DSM-5 criteria and diagnoses are at least as reliable as those of DSM-IV. PMID:26048641
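    The kappa (κ) coefficient used in this study corrects the raters' observed agreement for the agreement expected by chance. A minimal sketch with hypothetical binary diagnoses (not data from the study):

    ```python
    import numpy as np

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters' categorical judgements:
        kappa = (p_observed - p_chance) / (1 - p_chance)."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        cats = np.union1d(r1, r2)
        po = np.mean(r1 == r2)                       # observed agreement
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
        return (po - pe) / (1 - pe)

    # Hypothetical diagnoses (1 = disorder present, 0 = absent)
    a = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
    b = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
    print(f"kappa = {cohens_kappa(a, b):.2f}")
    ```

    Raw agreement here is 80%, but after removing chance agreement κ is considerably lower, which is why κ rather than percent agreement is the standard reliability statistic for diagnoses.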

  17. Still just 1 g: Consistent results from five test batteries

    Johnson, W.; te Nijenhuis, J.; Bouchard, T.J.Jr.


    In a recent paper, Johnson, Bouchard, Krueger, McGue, and Gottesman (2004) addressed a long-standing debate in psychology by demonstrating that the g factors derived from three test batteries administered to a single group of individuals were completely correlated. This finding provided evidence for…

  18. A bayesian belief network for reliability assessment

    Gran, Bjoern Axel; Helminen, Atte


    The research programme at the Halden Project on software assessment is augmented through a joint project with VTT Automation. The objective of this co-operative project is to combine previously presented Bayesian Belief Networks (BBNs) for a software safety standard with BBNs on the reliability estimation of software-based digital systems. The results on applying BBN methodology to a software safety standard are based upon previous research by the Halden Project, while the results on reliability estimation are based on a Master's Thesis by Helminen. The report should be considered a progress report in the longer-term activity on the use of BBNs as support for safety assessment of programmable systems. In this report it is discussed how the two approaches can be merged into one Bayesian Network, and the problems with merging are pinpointed. The report also presents and discusses the approaches applied by the Halden Project and VTT, including the differences in the expert judgement of the parameters used in the Bayesian Network. Finally, the report gives some experimental results based on observations from applying the method to an evaluation of a real, safety-related programmable system that was developed according to the avionics standard DO-178B. This demonstrates how hard and soft evidence can be combined for a reliability assessment. The use of Bayesian Networks provides a framework combining consistent application of probability calculus with the ability to model complex structures, e.g. standards, as a simple understandable network, where all possible evidence can be introduced into the reliability estimation in a compatible way. (Author)

  19. Reliability and validity in medical research


    Scientists commonly refer to study instruments during medical research. In fact, reliability and validity issues go beyond psychometric studies and can be linked with any kind of measurement. In this study we aimed to explain the concepts of reliability and validity by giving examples. It is possible to evaluate the reliability and validity of an instrument by scientific methods. If we speak of reliability, we have to mention stability (having the same results in repeated measurements from the same sa...

  20. Reliability analysis in intelligent machines

    Mcinroy, John E.; Saridis, George N.


    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques are proposed for finding the reliability corresponding to alternative subsets of control and sensing strategies, such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.
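
    With a Gaussian state variable, the reliability of a strategy reduces to the probability that the variable lands inside its specification window. A minimal sketch (the servoing numbers are assumptions, not the paper's data):

    ```python
    import math

    def phi(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def success_probability(mean, std, lower, upper):
        """P(lower <= X <= upper) for X ~ N(mean, std^2): the reliability of a
        strategy whose Gaussian state variable must satisfy the specification."""
        return phi((upper - mean) / std) - phi((lower - mean) / std)

    # Hypothetical example: position error ~ N(0.2 mm, 0.5 mm), spec window ±1 mm.
    p = success_probability(0.2, 0.5, -1.0, 1.0)
    ```

    Comparing `p` across alternative control and sensing strategies (each with its own mean and spread) then ranks them by reliability.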

  1. VCSEL reliability: a user's perspective

    McElfresh, David K.; Lopez, Leoncio D.; Melanson, Robert; Vacar, Dan


    VCSEL arrays are being considered for use in interconnect applications that require high speed, high bandwidth, high density, and high reliability. In order to better understand the reliability of VCSEL arrays, we initiated an internal project at Sun Microsystems, Inc. In this paper, we present preliminary results of an ongoing accelerated temperature-humidity-bias stress test on VCSEL arrays from several manufacturers. This test revealed no significant differences between the reliability of AlGaAs oxide-confined VCSEL arrays constructed with a trench oxide and those using a mesa for isolation. The test did find that the reliability of arrays needs to be measured on arrays, not estimated from data on singulated VCSELs, as is common practice.

  2. Coordinating user interfaces for consistency

    Nielsen, Jakob


    In the years since Jakob Nielsen's classic collection on interface consistency first appeared, much has changed, and much has stayed the same. On the one hand, there's been exponential growth in the opportunities for following or disregarding the principles of interface consistency-more computers, more applications, more users, and of course the vast expanse of the Web. On the other, there are the principles themselves, as persistent and as valuable as ever. In these contributed chapters, you'll find details on many methods for seeking and enforcing consistency, along with bottom-line analys

  3. Supply chain reliability modelling

    Eugen Zaitsev


    Full Text Available Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability and to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum plan of supplies, using an economic criterion and a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated using an economic criterion and the model for evaluating the probability of failure-free operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of the goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability was presented.
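
    The coupling of an economic criterion with a failure-free-operation constraint can be sketched by enumeration on a tiny instance (a stand-in for the paper's linear program; the two suppliers, costs, capacities, and channel reliabilities are all assumed):

    ```python
    from itertools import product

    demand = 100
    suppliers = [                      # (unit cost, max volume, channel reliability)
        {"cost": 4.0, "cap": 80, "rel": 0.95},
        {"cost": 5.0, "cap": 100, "rel": 0.99},
    ]
    required_reliability = 0.90        # minimum probability of failure-free delivery

    best = None
    for q in product(range(0, 101, 10), repeat=2):   # search order plans in steps of 10
        if sum(q) != demand:
            continue
        if any(qi > s["cap"] for qi, s in zip(q, suppliers)):
            continue
        # Channels are treated as independent; plan reliability is the product of
        # the reliabilities of the channels actually used.
        rel = 1.0
        for qi, s in zip(q, suppliers):
            if qi > 0:
                rel *= s["rel"]
        if rel < required_reliability:
            continue
        cost = sum(qi * s["cost"] for qi, s in zip(q, suppliers))
        if best is None or cost < best[0]:
            best = (cost, q, rel)
    ```

    Here the cheapest feasible plan loads the low-cost supplier to its capacity; tightening `required_reliability` would force more volume onto the more reliable channel.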

  4. Seismic reliability analysis of urban water distribution network

    Li Jie; Wei Shulin; Liu Wei


    An approach to analyze the seismic reliability of water distribution networks by combining a hydraulic analysis with a first-order reliability method (FORM) is proposed in this paper. The hydraulic analysis method for normal conditions is modified to accommodate the special conditions necessary to perform a seismic hydraulic analysis. In order to calculate the leakage area and leaking flow of the pipelines in the hydraulic analysis method, a new leakage model established from the seismic response analysis of buried pipelines is presented. To validate the proposed approach, a network with 17 nodes and 24 pipelines is investigated in detail. The approach is also applied to an actual project consisting of 463 nodes and 767 pipelines. The results show that the proposed approach achieves satisfactory results in analyzing the seismic reliability of large-scale water distribution networks.
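
    The connectivity side of such an analysis can be sketched with Monte Carlo simulation on a toy network (this stands in for, and is much simpler than, the paper's FORM-plus-hydraulics approach; the 4-node layout and pipe survival probabilities are assumed):

    ```python
    import random

    pipes = {                      # undirected edge -> probability the pipe survives
        ("S", "A"): 0.9, ("S", "B"): 0.85,
        ("A", "C"): 0.9, ("B", "C"): 0.9, ("A", "B"): 0.95,
    }

    def connected(up_edges, src, dst):
        """Graph search over the surviving pipes only."""
        frontier, seen = [src], {src}
        while frontier:
            node = frontier.pop()
            for u, v in up_edges:
                for a, b in ((u, v), (v, u)):
                    if a == node and b not in seen:
                        seen.add(b)
                        frontier.append(b)
        return dst in seen

    random.seed(1)
    trials = 20000
    hits = 0
    for _ in range(trials):
        # Sample an earthquake outcome: each pipe fails independently.
        up = [e for e, p in pipes.items() if random.random() < p]
        hits += connected(up, "S", "C")

    serviceability = hits / trials   # estimated P(demand node C still fed from source S)
    ```

    A full seismic hydraulic analysis would additionally recompute flows and leakage in each sampled damage state rather than checking connectivity alone.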

  5. On reliability optimization for power generation systems


    The reliability level of a power generation system is an important problem of concern to both electricity producers and electricity consumers. It is known that a high reliability level may result in additional utility cost, while a low reliability level may result in additional consumer cost, so the optimum reliability level should be determined such that the total cost reaches its minimum. Four optimization models for power generation system reliability are constructed, and proven efficient solutions for these models are also given.
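
    The trade-off can be sketched numerically: utility cost grows with the reliability level r, consumer outage cost shrinks with it, and the optimum minimises their sum. The two cost functions below are purely illustrative, not any of the paper's four models:

    ```python
    def utility_cost(r):
        # Raising reliability gets ever more expensive as r approaches 1.
        return 100.0 / (1.0 - r)

    def outage_cost(r):
        # Expected consumer loss falls as reliability rises.
        return 50000.0 * (1.0 - r)

    # Scan candidate reliability levels 0.900 .. 0.999 and pick the total-cost minimum.
    levels = [0.90 + 0.001 * k for k in range(100)]
    best_r = min(levels, key=lambda r: utility_cost(r) + outage_cost(r))
    ```

    With these assumed costs the minimum total cost sits at an interior reliability level, not at either extreme, which is the qualitative point of the optimization.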

  6. Process Fairness and Dynamic Consistency

    S.T. Trautmann (Stefan); P.P. Wakker (Peter)


    Abstract: When process fairness deviates from outcome fairness, dynamic inconsistencies can arise as in nonexpected utility. Resolute choice (Machina) can restore dynamic consistency under nonexpected utility without using Strotz's precommitment. It can similarly justify dynamically

  7. Fatigue Reliability of Gas Turbine Engine Structures

    Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.


    The results of an investigation of fatigue reliability in engine structures are described. The description consists of two parts. Part 1 covers method development; Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these approaches is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure modes and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.
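
    The core limit-state calculation can be sketched with plain Monte Carlo: the performance function g = capacity - demand, and the probability of failure is P(g < 0). The Gaussian parameters below are illustrative placeholders, not engine data, and real studies would use FORM or variance reduction rather than brute-force sampling:

    ```python
    import random

    random.seed(0)
    trials = 100_000
    failures = 0
    for _ in range(trials):
        capacity = random.gauss(10.0, 1.0)   # e.g. fatigue strength (arbitrary units)
        demand = random.gauss(6.0, 1.0)      # e.g. stress from mission loading
        if capacity - demand < 0.0:          # limit state g < 0 -> failure
            failures += 1

    pf = failures / trials                   # estimated probability of failure
    ```

    For two independent normals, g is itself normal with mean 4 and standard deviation sqrt(2), so the estimate can be checked against the closed-form value (about 2.3e-3), which is what makes this toy useful for validating a sampler.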

  8. Multi-mode reliability-based design of horizontal curves.

    Essa, Mohamed; Sayed, Tarek; Hussein, Mohamed


    Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design focused on evaluating the probability of noncompliance for only one mode of noncompliance such as insufficient sight distance. However, in many design situations, more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, utilizing a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of Sea-to-Sky Highway located between Vancouver and Whistler, in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance.
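
    The multi-mode (system) idea can be sketched by sampling both noncompliance modes jointly and counting the union. This is an illustrative toy, not the Sea-to-Sky model; every distribution parameter below is assumed:

    ```python
    import random

    random.seed(42)
    trials = 100_000
    union = sight = skid = 0
    for _ in range(trials):
        supplied_sd = random.gauss(120.0, 15.0)     # supplied sight distance (m)
        required_sd = random.gauss(100.0, 10.0)     # required stopping sight distance (m)
        friction_supplied = random.gauss(0.35, 0.05)
        friction_demand = random.gauss(0.25, 0.04)

        sight_fail = supplied_sd < required_sd      # mode 1: insufficient sight distance
        skid_fail = friction_supplied < friction_demand   # mode 2: vehicle skidding

        sight += sight_fail
        skid += skid_fail
        union += sight_fail or skid_fail            # system noncompliance (either mode)

    p_system = union / trials
    ```

    The system probability of noncompliance necessarily bounds each single-mode probability from above, which is why a one-mode analysis can understate the risk of a design.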

  9. A Method of Reliability Allocation of a Complicated Large System

    WANG Zhi-sheng; QIN Yuan-yuan; WANG Dao-bo


    Aiming at the problem of reliability allocation for a complicated large system, a new approach is proposed. Reliability allocation should be regarded as a kind of decision-making behavior; the more information that is used when apportioning a reliability index, the more reasonable the resulting allocation. Reliability allocation for a complicated large system consists of two processes: a reliability-information reporting process from bottom to top, and a reliability-index apportioning process from top to bottom. A typical example illustrates the concrete process of the reliability allocation algorithms.
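
    The top-down apportioning step can be sketched with an ARINC-style weighting, where bottom-up failure-rate information shapes each subsystem's share of the system target (a generic textbook scheme, not necessarily the paper's algorithm; the rates are assumed):

    ```python
    import math

    R_goal = 0.95                      # system reliability target over the mission
    reported_rates = [2.0, 1.0, 0.5]   # bottom-up predicted failure rates (per 1000 h)

    lam_goal = -math.log(R_goal)       # equivalent system failure "budget"
    total = sum(reported_rates)
    weights = [r / total for r in reported_rates]

    # Subsystems that historically fail more get a larger slice of the budget,
    # hence a *lower* allocated reliability requirement.
    allocated = [math.exp(-w * lam_goal) for w in weights]

    check = 1.0
    for a in allocated:
        check *= a                     # series product must recover the system goal
    ```

    The product of the allocated reliabilities reproduces the system goal exactly, which is the consistency condition any series allocation must satisfy.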

  10. Putting Consistent Theories Together in Institutions



    The problem of putting consistent theories together in institutions is discussed. A general necessary condition for consistency of the resulting theory is derived, and some sufficient conditions are given for diagrams of theories whose shapes are tree bundles or directed graphs. Moreover, some transformations from complicated cases to simple ones are established.

  11. Time-consistent and market-consistent evaluations

    Pelsser, A.; Stadje, M.A.


    We consider evaluation methods for payoffs with an inherent financial risk as encountered, for instance, for portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent with market prices typically involves combining actuarial techniques with methods from mathemati

  12. Reliability and validity of the Wolfram Unified Rating Scale (WURS)

    Nguyen Chau


    Full Text Available Abstract Background Wolfram syndrome (WFS) is a rare, neurodegenerative disease that typically presents with childhood-onset insulin dependent diabetes mellitus, followed by optic atrophy, diabetes insipidus, deafness, and neurological and psychiatric dysfunction. There is no cure for the disease, but recent advances in research have improved understanding of the disease course. Measuring disease severity and progression with reliable and validated tools is a prerequisite for clinical trials of any new intervention for neurodegenerative conditions. To this end, we developed the Wolfram Unified Rating Scale (WURS) to measure the severity and individual variability of WFS symptoms. The aim of this study is to develop and test the reliability and validity of the WURS. Methods A rating scale of disease severity in WFS was developed by modifying a standardized assessment for another neurodegenerative condition (Batten disease). WFS experts scored the representativeness of WURS items for the disease. The WURS was administered to 13 individuals with WFS (6-25 years of age). Motor, balance, mood and quality of life were also evaluated with standard instruments. Inter-rater reliability, internal consistency reliability, and concurrent, predictive and content validity of the WURS were calculated. Results The WURS had high inter-rater reliability (ICCs>.93), moderate to high internal consistency reliability (Cronbach's α = 0.78-0.91) and demonstrated good concurrent and predictive validity. There were significant correlations between the WURS Physical Assessment and motor and balance tests (rs>.67, ps>.76, ps=-.86, p=.001. The WURS demonstrated acceptable content validity (Scale-Content Validity Index=0.83). Conclusions These preliminary findings demonstrate that the WURS has acceptable reliability and validity and captures individual differences in disease severity in children and young adults with WFS.
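
    The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from an item-response matrix. The small matrix below is made up purely for illustration (rows = respondents, columns = scale items):

    ```python
    def variance(xs):
        """Sample variance (n - 1 denominator)."""
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def cronbach_alpha(rows):
        k = len(rows[0])                              # number of items
        item_vars = [variance([r[i] for r in rows]) for i in range(k)]
        total_var = variance([sum(r) for r in rows])  # variance of the sum score
        return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

    data = [
        [3, 4, 3, 4],
        [5, 5, 4, 5],
        [1, 2, 2, 1],
        [4, 4, 5, 4],
        [2, 3, 2, 3],
    ]
    alpha = cronbach_alpha(data)
    ```

    Alpha rises when the items covary strongly relative to their individual variances; values in the 0.78-0.91 range reported for the WURS indicate moderate-to-high internal consistency.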

  13. Reliability and safety engineering

    Verma, Ajit Kumar; Karanki, Durga Rao


    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  14. Measurement System Reliability Assessment

    Kłos Ryszard


    Full Text Available Decision-making in problem situations is based on up-to-date and reliable information. A great deal of information is subject to rapid change, hence it may be outdated or manipulated, enforcing erroneous decisions. It is crucial to be able to assess the obtained information. In order to ensure its reliability, it is best to obtain it with one's own measurement process. In such a case, conducting an assessment of measurement system reliability seems crucial. The article describes a general approach to assessing the reliability of measurement systems.

  15. Reliability of fluid systems

    Kopáček Jaroslav


    Full Text Available This paper focuses on the importance of reliability assessment, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is seen as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, along with their applications in calculations for serial, parallel and backed-up systems. For illustration, there are calculation examples of reliability indicators for various elements of the system and for a selected pneumatic circuit.
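
    The serial and parallel calculations the abstract refers to can be sketched for elements with exponentially distributed time to failure (the failure rates below are illustrative, not the paper's pneumatic-circuit data):

    ```python
    import math

    def r_element(lam, t):
        """Reliability of a single element: R(t) = exp(-lambda * t)."""
        return math.exp(-lam * t)

    def r_series(rs):
        """A series system fails if any element fails: product of reliabilities."""
        p = 1.0
        for r in rs:
            p *= r
        return p

    def r_parallel(rs):
        """A parallel system fails only if every element fails."""
        q = 1.0
        for r in rs:
            q *= (1.0 - r)
        return 1.0 - q

    t = 1000.0                                    # operating hours
    rs = [r_element(l, t) for l in (1e-4, 2e-4, 5e-5)]
    series = r_series(rs)
    parallel = r_parallel(rs)
    ```

    As expected, the series arrangement is weaker than its weakest element, while the parallel (backed-up) arrangement is stronger than its strongest one.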

  16. Circuit design for reliability

    Cao, Yu; Wirth, Gilson


    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  17. Market-consistent actuarial valuation

    Wüthrich, Mario V


    This is the third edition of this well-received textbook, presenting powerful methods for measuring insurance liabilities and assets in a consistent way, with detailed mathematical frameworks that lead to market-consistent values for liabilities. Topics covered are stochastic discounting with deflators, valuation portfolio in life and non-life insurance, probability distortions, asset and liability management, financial risks, insurance technical risks, and solvency. Including updates on recent developments and regulatory changes under Solvency II, this new edition of Market-Consistent Actuarial Valuation also elaborates on different risk measures, providing a revised definition of solvency based on industry practice, and presents an adapted valuation framework which takes a dynamic view of non-life insurance reserving risk.

  18. Consistent Histories in Quantum Cosmology

    Craig, David A; 10.1007/s10701-010-9422-6


    We illustrate the crucial role played by decoherence (consistency of quantum histories) in extracting consistent quantum probabilities for alternative histories in quantum cosmology. Specifically, within a Wheeler-DeWitt quantization of a flat Friedmann-Robertson-Walker cosmological model sourced with a free massless scalar field, we calculate the probability that the universe is singular in the sense that it assumes zero volume. Classical solutions of this model are a disjoint set of expanding and contracting singular branches. A naive assessment of the behavior of quantum states which are superpositions of expanding and contracting universes may suggest that a "quantum bounce" is possible, i.e., that the wave function of the universe may remain peaked on a non-singular classical solution throughout its history. However, a more careful consistent histories analysis shows that for arbitrary states in the physical Hilbert space the probability of this Wheeler-DeWitt quantum universe encountering the big bang/crun...

  19. The Importance of being consistent

    Wasserman, Adam; Jiang, Kaili; Kim, Min-Cheol; Sim, Eunji; Burke, Kieron


    We review the role of self-consistency in density functional theory. We apply a recent analysis to both Kohn-Sham and orbital-free DFT, as well as to Partition-DFT, which generalizes all aspects of standard DFT. In each case, the analysis distinguishes between errors in approximate functionals versus errors in the self-consistent density. This yields insights into the origins of many errors in DFT calculations, especially those often attributed to self-interaction or delocalization error. In many classes of problems, errors can be substantially reduced by using `better' densities. We review the history of these approaches, many of their applications, and give simple pedagogical examples.

  20. Reliability of the NINDS Myotatic Reflex Scale.

    Litvan, I; Mangone, C A; Werden, W; Bueri, J A; Estol, C J; Garcea, D O; Rey, R C; Sica, R E; Hallett, M; Bartko, J J


    The assessment of deep tendon reflexes is useful for localization and diagnosis of neurologic disorders, but only a few studies have evaluated their reliability. We assessed the reliability of four neurologists, instructed in two different countries, in using the National Institute of Neurological Disorders and Stroke (NINDS) Myotatic Reflex Scale. To evaluate the role of training in using the scale, the neurologists randomly and blindly evaluated a total of 80 patients, 40 before and 40 after a training session. Inter- and intraobserver reliability were measured with kappa statistics. Our results showed substantial to near-perfect intraobserver reliability, and moderate-to-substantial interobserver reliability of the NINDS Myotatic Reflex Scale. The reproducibility was better for reflexes in the lower than in the upper extremities. Neither educational background nor the training session influenced the reliability of our results. The NINDS Myotatic Reflex Scale has sufficient reliability to be adopted as a universal scale.
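
    The interobserver statistic used above, Cohen's kappa, corrects observed agreement for the agreement expected by chance. A minimal sketch with a made-up 2x2 table of two raters grading 80 patients (the counts are hypothetical, not the study's data):

    ```python
    def cohens_kappa(table):
        """table[i][j] = number of cases rater A scored category i, rater B scored j."""
        n = sum(sum(row) for row in table)
        po = sum(table[i][i] for i in range(len(table))) / n      # observed agreement
        pe = sum(
            (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
            for i in range(len(table))
        )                                                         # chance agreement
        return (po - pe) / (1.0 - pe)

    # e.g. two neurologists grading a reflex as "normal" vs "brisk" in 80 patients:
    kappa = cohens_kappa([[40, 5],
                          [7, 28]])
    ```

    On the conventional benchmarks, kappa near 0.7 would read as substantial agreement, consistent with the "moderate-to-substantial" interobserver range the study reports.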

  1. Consistent Stochastic Modelling of Meteocean Design Parameters

    Sørensen, John Dalsgaard; Sterndorff, M. J.


    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...... velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave...... height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...

  2. Metrological Reliability of Medical Devices

    Costa Monteiro, E.; Leon, L. F.


    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  3. Consistent supersymmetric decoupling in cosmology

    Sousa Sánchez, Kepa


    The present work discusses several problems related to the stability of ground states with broken supersymmetry in supergravity, and to the existence and stability of cosmic strings in various supersymmetric models. In particular we study the necessary conditions to truncate consistently a sector o


    Muhammed Emin KAFKAS


    Full Text Available The purpose of this research was to adapt the Fan Motivation Scale developed by Yousof Al-Thibiti (2004) into Turkish and to analyze its validity and reliability. The study group consisted of 494 (54%) female and 421 (46%) male students, aged 17-31, studying in different departments of the Faculty of Education at Inonu University. First, the linguistic equivalence of the scale was examined. Second, after linguistic equivalence was established, reliability analyses were performed. The internal consistency coefficient was .85 for the total scale and ranged between .70 and .78 for the subscales; test-retest reliability coefficients were between .79 and .89. At the same time, exploratory and confirmatory factor analyses were carried out, and the scale was found to fit the data. As a result, the scale was found to have an appropriate structure for measuring individuals' motivation to participate in sporting activities.


    Muhammed Emin KAFKAS


    Full Text Available The aim of this research is to adapt the Sport Imagery Questionnaire (Hall, Munroe-Chandler, Fishburne and Hall, 2009) into Turkish and to examine its psychometric properties. The research was conducted on 208 female (38.2%) and 337 male (61.8%) volunteering students, aged mostly between 12-16, studying at the 1st and 2nd stages of primary schools affiliated with the central district of Malatya province, Turkey. First the linguistic equivalence of the scale was tested, followed by validity and reliability studies. Internal consistency coefficients varied between .66-.87 and test-retest reliability coefficients varied between .60-.86. Corrected item-total correlations ranged from .60 to .85. Based on these results, the Sport Imagery Questionnaire can be used as a valid and reliable instrument.

  6. Reliability Based Ship Structural Design

    Dogliani, M.; Østergaard, C.; Parmentier, G.;


    with developments of models of load effects and of structural collapse adopted in reliability formulations which aim at calibrating partial safety factors for ship structural design. New probabilistic models of still-water load effects are developed both for tankers and for containerships. New results are presented......This paper deals with the development of different methods that allow the reliability-based design of ship structures to be transferred from the area of research to systematic application in current design. It summarises the achievements of a three-year collaborative research project dealing...... structure of several tankers and containerships. The results of the reliability analysis were the basis for the definition of a target safety level, which was used to assess the partial safety factors suitable for use in a new design-rules format to be adopted in modern ship structural design. Finally...

  7. Reliability Characteristics of Power Plants

    Zbynek Martinek


    Full Text Available This paper describes the phenomenon of reliability of power plants. It gives an explanation of the terms connected with this topic as their proper understanding is important for understanding the relations and equations which model the possible real situations. The reliability phenomenon is analysed using both the exponential distribution and the Weibull distribution. The results of our analysis are specific equations giving information about the characteristics of the power plants, the mean time of operations and the probability of failure-free operation. Equations solved for the Weibull distribution respect the failures as well as the actual operating hours. Thanks to our results, we are able to create a model of dynamic reliability for prediction of future states. It can be useful for improving the current situation of the unit as well as for creating the optimal plan of maintenance and thus have an impact on the overall economics of the operation of these power plants.
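
    The two distributions discussed above give the basic characteristics directly: the probability of failure-free operation R(t) and the mean time to failure. A sketch with assumed parameters (not data for any particular plant unit):

    ```python
    import math

    def r_exponential(lam, t):
        """Exponential model: constant failure rate lam."""
        return math.exp(-lam * t)

    def r_weibull(beta, eta, t):
        """Weibull model: beta is the shape (wear-out if beta > 1),
        eta the characteristic life."""
        return math.exp(-((t / eta) ** beta))

    def mttf_weibull(beta, eta):
        """Mean time to failure: eta * Gamma(1 + 1/beta)."""
        return eta * math.gamma(1.0 + 1.0 / beta)

    beta, eta = 2.0, 10000.0              # assumed wear-out behaviour of a unit
    r_1year = r_weibull(beta, eta, 8760.0)   # survive one year of continuous operation
    mttf = mttf_weibull(beta, eta)
    ```

    With beta > 1 the hazard rate grows with operating hours, which is what lets a Weibull fit "respect the failures as well as the actual operating hours" and feed a maintenance plan, unlike the memoryless exponential model.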

  8. Myers-Briggs Type Indicator Score Reliability across Studies: A Meta-Analytic Reliability.

    Capraro, Robert M.; Capraro, Mary Margaret


    Submitted the Myers-Briggs Type Indicator (MBTI) to a descriptive reliability generalization analysis to characterize the variability of measurement error in MBTI scores across administrations. In general the MBTI and its scales yielded scores with strong internal consistency and test-retest reliability estimates. (SLD)

  9. LED system reliability

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.


    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific ex

  10. Principles of Bridge Reliability

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated...

  11. Improving machinery reliability

    Bloch, Heinz P


    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  12. Hawaii Electric System Reliability

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  13. Hawaii electric system reliability.

    Silva Monroy, Cesar Augusto; Loose, Verne William


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  14. Validation of Land Cover Products Using Reliability Evaluation Methods

    Wenzhong Shi


    Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation scheme to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.
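
    As a simplified illustration of how reliabilities propagate through a fault tree in a data production process, here is a crisp (non-fuzzy) sketch; the gates and probabilities are invented, and the paper itself uses fuzzy fault tree analysis rather than this crisp version:

```python
# Invented example: top-event probability from independent basic events.

def or_gate(*probs):
    """Probability that at least one input event occurs (independence assumed)."""
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

def and_gate(*probs):
    """Probability that all input events occur (independence assumed)."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Top event "unreliable land cover label" occurs if the source imagery is
# defective OR both the classifier and the manual check fail (toy numbers).
p_top = or_gate(0.01, and_gate(0.05, 0.10))
print(p_top)
```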

  15. Web server's reliability improvements using recurrent neural networks

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan


    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity of the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (the...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  16. Notes on numerical reliability of several statistical analysis programs

    Landwehr, J.M.; Tasker, Gary D.


    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
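
    The kind of computational-correctness failure such benchmarks are designed to expose can be reproduced in a few lines. This sketch uses made-up data with a large common offset (not the actual ANASTY values): the one-pass "textbook" variance formula cancels catastrophically, while the two-pass formula returns the exact answer.

```python
def var_one_pass(xs):
    """Textbook one-pass formula: (sum x^2 - (sum x)^2 / n) / (n - 1).
    Numerically fragile when the mean is large relative to the spread."""
    n = len(xs)
    s = ss = 0.0
    for x in xs:
        s += x
        ss += x * x
    return (ss - s * s / n) / (n - 1)

def var_two_pass(xs):
    """Two-pass formula: subtract the mean first, then sum squared deviations."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [1e9 + d for d in (1.0, 2.0, 3.0, 4.0, 5.0)]  # true sample variance: 2.5
print(var_one_pass(data), var_two_pass(data))
```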

  17. Web server's reliability improvements using recurrent neural networks

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan


    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity of the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  18. Chapter 9: Reliability

    Algora, Carlos; Espinet-Gonzalez, Pilar; Vazquez, Manuel; Bosco, Nick; Miller, David; Kurtz, Sarah; Rubio, Francisca; McConnell,Robert


    This chapter describes the accumulated knowledge on CPV reliability with its fundamentals and qualification. It explains the reliability of solar cells, modules (including optics) and plants. The chapter discusses the statistical distributions, namely exponential, normal and Weibull. The treatment of solar-cell reliability covers the issues in accelerated aging tests of CPV solar cells, types of failure, and failures in real-time operation. The chapter explores the accelerated life tests, namely qualitative life tests (mainly HALT) and quantitative accelerated life tests (QALT). It examines other well-proven PV cells and/or semiconductor devices that share similar semiconductor materials, manufacturing techniques or operating conditions, namely III-V space solar cells and light-emitting diodes (LEDs). It addresses each of the identified reliability issues and presents the current state-of-the-art knowledge for their testing and evaluation. Finally, the chapter summarizes the CPV qualification and reliability standards.
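
    Of the distributions the chapter discusses, the Weibull is the usual workhorse for lifetime data. A minimal sketch of its reliability (survival) function follows, with invented scale and shape parameters rather than values from the chapter:

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)^beta): probability of surviving past time t.
    eta is the characteristic life (63.2% failed by t = eta), beta the shape."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical module with eta = 100,000 h and wear-out shape beta = 2:
print(weibull_reliability(50_000.0, eta=100_000.0, beta=2.0))
```

    beta < 1 models infant mortality, beta = 1 a constant failure rate (the exponential case), and beta > 1 wear-out, which is why the shape parameter is the first thing read off an accelerated-life fit.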

  19. Self-consistent triaxial models

    Sanders, Jason L


    We present self-consistent triaxial stellar systems that have analytic distribution functions (DFs) expressed in terms of the actions. These provide triaxial density profiles with cores or cusps at the centre. They are the first self-consistent triaxial models with analytic DFs suitable for modelling giant ellipticals and dark haloes. Specifically, we study triaxial models that reproduce the Hernquist profile from Williams & Evans (2015), as well as flattened isochrones of the form proposed by Binney (2014). We explore the kinematics and orbital structure of these models in some detail. The models typically become more radially anisotropic on moving outwards, have velocity ellipsoids aligned in Cartesian coordinates in the centre and aligned in spherical polar coordinates in the outer parts. In projection, the ellipticity of the isophotes and the position angle of the major axis of our models generally change with radius. So, a natural application is to elliptical galaxies that exhibit isophote twisting....

  20. Long-term reliability of the visual EEG Poffenberger paradigm.

    Friedrich, Patrick; Ocklenburg, Sebastian; Mochalski, Lisa; Schlüter, Caroline; Güntürkün, Onur; Genc, Erhan


    The Poffenberger paradigm is a simple perception task that is used to estimate the speed of information transfer between the two hemispheres, the so-called interhemispheric transfer time (IHTT). Although the original paradigm is a behavioral task, it can be combined with electroencephalography (EEG) to assess the underlying neurophysiological processes during task execution. While older studies have supported the validity of both paradigms for investigating interhemispheric interactions, their long-term reliability has not been assessed systematically before. The present study aims to fill this gap by determining both internal consistency and long-term test-retest reliability of IHTTs produced by using the two different versions of the Poffenberger paradigm in a sample of 26 healthy subjects. The results show high reliability for the EEG Poffenberger paradigm. In contrast, reliability measures for the behavioral Poffenberger paradigm were low. Hence, our results indicate that electrophysiological measures of interhemispheric transfer are more reliable than behavioral measures; the latter should be used with caution in research investigating inter-individual differences of neurocognitive measures. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Kang Rui; Zhang Qingyuan; Zeng Zhiguo; Enrico Zio; Li Xiaoyang


    In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.
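
    The duality axiom the authors emphasize can be made concrete with possibility theory. In this toy sketch (states and numbers invented), the necessity of an event is defined as one minus the possibility of its complement; unlike probability, Pos(A) and Pos(not A) need not sum to one:

```python
def possibility(event, pi):
    """Pos(A): maximum of the possibility distribution over the states in A."""
    return max(pi[s] for s in event)

states = {"works", "degraded", "fails"}
pi = {"works": 1.0, "degraded": 0.7, "fails": 0.4}  # invented distribution

up = {"works", "degraded"}            # event: "system is up"
down = states - up

pos_up = possibility(up, pi)              # 1.0
nec_up = 1.0 - possibility(down, pi)      # duality: Nec(A) = 1 - Pos(not A)
print(pos_up, nec_up)
print(pos_up + possibility(down, pi))     # exceeds 1: Pos is not self-dual
```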

  2. On Modal Refinement and Consistency

    Nyman, Ulrik; Larsen, Kim Guldstrand; Wasowski, Andrzej


    Almost 20 years after the original conception, we revisit several fundamental questions about modal transition systems. First, we demonstrate the incompleteness of the standard modal refinement using a counterexample due to Hüttel. Deciding any refinement, complete with respect to the standard...... notions of implementation, is shown to be computationally hard (co-NP hard). Second, we consider four forms of consistency (existence of implementations) for modal specifications. We characterize each operationally, giving algorithms for deciding, and for synthesizing implementations, together...

  3. Tri-Sasakian consistent reduction

    Cassani, Davide


    We establish a universal consistent Kaluza-Klein truncation of M-theory based on seven-dimensional tri-Sasakian structure. The four-dimensional truncated theory is an N=4 gauged supergravity with three vector multiplets and a non-abelian gauge group, containing the compact factor SO(3). Consistency follows from the fact that our truncation takes exactly the same form as a left-invariant reduction on a specific coset manifold, and we show that the same holds for the various universal consistent truncations recently put forward in the literature. We describe how the global symmetry group SL(2,R) x SO(6,3) is embedded in the symmetry group E7(7) of maximally supersymmetric reductions, and make the connection with the approach of Exceptional Generalized Geometry. Vacuum AdS4 solutions spontaneously break the amount of supersymmetry from N=4 to N=3,1 or 0, and the spectrum contains massive modes. We find a subtruncation to minimal N=3 gauged supergravity as well as an N=1 subtruncation to the SO(3)-invariant secto...

  4. Reliability of movement control tests in the lumbar spine

    de Bruin Eling D


    Abstract Background Movement control dysfunction [MCD] reduces active control of movements. Patients with MCD might form an important subgroup among patients with non-specific low back pain. The diagnosis is based on the observation of active movements. Although widely used clinically, only a few studies have been performed to determine the test reliability. The aim of this study was to determine the inter- and intra-observer reliability of movement control dysfunction tests of the lumbar spine. Methods We videoed patients performing a standardized test battery consisting of 10 active movement tests for motor control in 27 patients with non-specific low back pain and 13 patients with other diagnoses but without back pain. Four physiotherapists independently rated test performances as correct or incorrect per observation, blinded to all other patient information and to each other. The study was conducted in a private physiotherapy outpatient practice in Reinach, Switzerland. Kappa coefficients, percentage agreements and confidence intervals for inter- and intra-rater results were calculated. Results The kappa values for inter-tester reliability ranged between 0.24 and 0.71. Six tests out of ten showed a substantial reliability [k > 0.6]. Intra-tester reliability was between 0.51 and 0.96; all tests but one showed substantial reliability [k > 0.6]. Conclusion Physiotherapists were able to reliably rate most of the tests in this series of motor control tasks as being performed correctly or not, by viewing films of patients with and without back pain performing the task.
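
    The kappa coefficients reported above correct raw agreement for chance. A minimal sketch of Cohen's kappa for two raters scoring each observation correct/incorrect (the ratings below are invented, not study data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal label frequencies.
    labels = set(r1) | set(r2)
    p_exp = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

rater1 = ["correct", "correct", "incorrect", "correct", "incorrect", "correct"]
rater2 = ["correct", "incorrect", "incorrect", "correct", "incorrect", "correct"]
print(cohens_kappa(rater1, rater2))  # 5/6 observed agreement vs 0.5 by chance
```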

  5. Reliability Modeling of Wind Turbines

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20 – 25 years of wind turbines useful life, Operation & Maintenance costs are typically estimated to be a quarter...... the actions should be made and the type of actions requires knowledge on the accumulated damage or degradation state of the wind turbine components. For offshore wind turbines, the action times could be extended due to weather restrictions and result in damage or degradation increase of the remaining...... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...

  6. Reliability of quantitative content analyses

    Enschot-van Dijk, R. van


    Reliable coding of stimuli is a daunting task that often yields unsatisfactory results. This paper discusses a case study in which tropes (e.g., metaphors, puns) in TV commercials were analyzed, as well as the extent and location of verbal and visual anchoring (i.e., explanation) of these tropes. After

  7. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
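
    For readers unfamiliar with the procedure, Cronbach's alpha compares the sum of item variances to the variance of the total score. A self-contained sketch with invented item scores (not Money Habitudes data):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three items scored 1-5 by five respondents (invented):
items = [[3, 4, 2, 5, 4], [2, 4, 3, 5, 3], [3, 5, 2, 4, 4]]
print(cronbach_alpha(items))
```

    Highly correlated items inflate the total-score variance relative to the item variances, pushing alpha toward 1.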

  8. The Reliability of College Grades

    Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.


    Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…

  9. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  10. On the Initial State and Consistency Relations

    Berezhiani, Lasha


    We study the effect of the initial state on the consistency conditions for adiabatic perturbations. In order to be consistent with the constraints of General Relativity, the initial state must be diffeomorphism invariant. As a result, we show that the initial wavefunctional/density matrix has to satisfy a Slavnov-Taylor identity similar to that of the action. We then investigate the precise ways in which modified initial states can lead to violations of the consistency relations. We find two independent sources of violations: i) the state can include initial non-Gaussianities; ii) even if the initial state is Gaussian, such as a Bogoliubov state, the modified 2-point function can modify the q->0 analyticity properties of the vertex functional and result in violations of the consistency relations.

  11. On the initial state and consistency relations

    Berezhiani, Lasha; Khoury, Justin, E-mail:, E-mail: [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States)


    We study the effect of the initial state on the consistency conditions for adiabatic perturbations. In order to be consistent with the constraints of General Relativity, the initial state must be diffeomorphism invariant. As a result, we show that the initial wavefunctional/density matrix has to satisfy a Slavnov-Taylor identity similar to that of the action. We then investigate the precise ways in which modified initial states can lead to violations of the consistency relations. We find two independent sources of violations: i) the state can include initial non-Gaussianities; ii) even if the initial state is Gaussian, such as a Bogoliubov state, the modified 2-point function can modify the q-vector → 0 analyticity properties of the vertex functional and result in violations of the consistency relations.

  12. High reliable internet hosting : a case study of handling unanticipated threats to reliability

    Wijnhoven, A.B.J.M.; Ehrenhard, M.L.; Alink, T.


    Internet hosting is pivotal for reliable access to internet resources. Yet, little research has been done on what threatens a hosting service’s reliability. Our analysis of reliability threatening incidents results in four causes of unreliability: the service’s technical architecture, service employ

  13. Photovoltaic system reliability

    Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States)]; Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)] [and others]


    This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.

  14. Structural Reliability Methods

    Ditlevsen, Ove Dalager; Madsen, H. O.

    of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature......The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...

  15. On the consistency of MPS

    Souto-Iglesias, Antonio; González, Leo M; Cercos-Pita, Jose L


    The consistency of Moving Particle Semi-implicit (MPS) method in reproducing the gradient, divergence and Laplacian differential operators is discussed in the present paper. Its relation to the Smoothed Particle Hydrodynamics (SPH) method is rigorously established. The application of the MPS method to solve the Navier-Stokes equations using a fractional step approach is treated, unveiling inconsistency problems when solving the Poisson equation for the pressure. A new corrected MPS method incorporating boundary terms is proposed. Applications to one dimensional boundary value Dirichlet and mixed Neumann-Dirichlet problems and to two-dimensional free-surface flows are presented.

  16. Measuring process and knowledge consistency

    Edwards, Kasper; Jensen, Klaes Ladeby; Haug, Anders


    with a 5-point Likert scale and a corresponding scoring system. Process consistency is measured by using a first-person drawing tool with the respondent in the centre. Respondents sketch the sequence of steps and people they contact when configuring a product. The methodology is tested in one company...... for granted; rather the contrary, and attempting to implement a configuration system may easily ignite a political battle. This is because stakes are high in the sense that the rules and processes chosen may only reflect one part of the practice, ignoring a majority of the employees. To avoid this situation...

  17. Assessing the inter-coder reliability of the Body Type Dictionary (BTD)

    Cariola Laura A.


    Computer-assisted content analysis has many advantages compared to a manual scoring system, provided that computerised dictionaries represent valid and reliable measures. This study aimed to assess the inter-coder reliability, alternate-form reliability and scoring consistency of the Body Type Dictionary (BTD) (Wilson 2006), based on Fisher and Cleveland’s (1956, 1958) manual body boundary scoring scheme. The results indicated an acceptable inter-coder agreement with barrier and penetration imagery in the sub-sample (N = 53) of manually coded Rorschach responses. Additionally, manually coded scores showed an acceptable correlation with the computerised frequency counts, thus indicating alternate-form reliability. In the full data set (N = 526), barrier imagery in the Rorschach responses only correlated with the picture response test, showing low scoring consistency; this might disconfirm the notion that body boundary awareness represents a stable personality trait, suggesting instead that it depends on the level of cognitive dedifferentiation.

  18. Energy based reliable multicast routing protocol for packet forwarding in MANET

    S. Gopinath


    A Mobile Ad hoc Network consists of mobile nodes without any assisting infrastructure. Mobility of nodes causes network partition, which leads to heavy overhead and a lower packet forwarding ratio. In this research work, a Residual Energy based Reliable Multicast Routing Protocol (RERMR) is proposed to attain longer network lifetime and a higher packet delivery and forwarding rate. A multicast backbone is constructed to achieve more stability based on node familiarity and trustable loop. A reliable path criterion is estimated to choose the best reliable path among all available paths. Data packets are forwarded once the reliable path is chosen. We have also demonstrated that the residual energy of paths helps to provide maximum network lifetime. Based on the simulation results, the proposed work achieves better performance than previous protocols in terms of packet reliability rate, network stability rate, end-to-end delay, end-to-end transmission and communication overhead.

  19. Reliable and Energy Efficient Protocol for MANET Multicasting

    Bander H. AlQarni


    A mobile ad hoc network (MANET) consists of a self-configured set of portable mobile nodes without any central infrastructure to regulate traffic in the network. These networks present problems such as lack of congestion control, reliability, and energy consumption. In this paper, we present a new model for MANET multicasting called Reliable and Energy Efficient Protocol Depending on Distance and Remaining Energy (REEDDRE). Our proposal is based on a tone system to provide more efficiency and better performance, and it combines solutions over the Medium Access Control (MAC) layer. The protocol consists of a new construction method for mobile nodes using a clustering approach that depends on distance and remaining energy to provide more stability and to reduce energy consumption. In addition, we propose an adjustment to the typical multicast flow by adding unicast links between clusters. We further present in our model a technique to provide more reliability based on a busy tone system (RMBTM) to reduce excessive control overhead caused by control packets in error recovery. We simulate our proposal using OPNET, and the results show enhancement in terms of reliability, packet delivery ratio (PDR), energy consumption, and throughput.

  20. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    Silva, T.F., E-mail: [Instituto de Física da Universidade de São Paulo, Rua do Matão, trav. R 187, 05508-090 São Paulo (Brazil); Rodrigues, C.L. [Instituto de Física da Universidade de São Paulo, Rua do Matão, trav. R 187, 05508-090 São Paulo (Brazil); Mayer, M. [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, D-85748 Garching (Germany); Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H. [Instituto de Física da Universidade de São Paulo, Rua do Matão, trav. R 187, 05508-090 São Paulo (Brazil)


    Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms available for model optimizations. • Definition of constraints enables to include prior knowledge into the analysis. - Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and quality of the simulations, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques or in different experimental conditions of a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potentialities of the self-consistent analysis and its feasibility using MultiSIMNRA.

  1. Nonlinear Dynamic Reliability of Coupled Stay Cables and Bridge Tower


    Nonlinear vibration can cause serious problems in long-span cable-stayed bridges. When the internal resonance threshold is reached between the excitation frequency and the natural frequency, large amplitudes occur in the cable. Based on the current situation of lacking corresponding constraint criteria, a model was presented for analyzing the dynamic reliability of coupling oscillation between the cable and tower in a cable-stayed bridge. First of all, in the case of cable sag, the d'Alembert principle is applied to studying the nonlinear dynamic behavior of the structure, and the resonance failure interval of parametric oscillation is calculated accordingly. Then the dynamic reliability model is set up using the JC method. An application of this model has been developed for the preliminary design of one cable-stayed bridge located on the Hai River in Tianjin, and time-history analyses as well as reliability indexes have been obtained. When the frequency ratio between the cable and tower approaches 1:2, the reliability index is 0.98, indicating a high failure probability. This is consistent with the theoretical derivation and experimental results in the literature. This model, which is capable of computing the reliability index of resonance failure, provides a theoretical basis for the establishment of corresponding rules.
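
    The link between the reported reliability index and "high failure probability" is the standard-normal mapping Pf = Φ(−β) used in first-order methods such as the JC method. A quick check, assuming only that relation:

```python
import math

def failure_probability(beta):
    """First-order failure probability: Pf = Phi(-beta), computed via erfc."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# beta = 0.98 as reported near the 1:2 frequency ratio:
print(failure_probability(0.98))  # roughly 0.16, i.e. a high failure risk
```

    For comparison, a typical structural target of β ≈ 3.7 corresponds to Pf ≈ 1e-4, which is why β = 0.98 signals resonance trouble.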

  2. Reliability, validity and responsiveness of a Norwegian version of the Chronic Sinusitis Survey

    Røssberg Edna


    Abstract Background The Chronic Sinusitis Survey (CSS) is a valid, disease-specific questionnaire for assessing health status and treatment effectiveness in chronic rhinosinusitis. In the present study, we developed a Norwegian version of the CSS and assessed its psychometric properties. Methods In the pooled data set of 65 patients from a trial of treatment for chronic sinusitis with long-standing symptoms and signs of sinusitis on computed tomography (CT), we assessed the reliability, validity and responsiveness of the CSS. Results Test-retest reliability of the two CSS scales and the total scale ranged 0.87–0.92, while internal consistency reliability ranged 0.31–0.55. CSS subscale scores were associated with other items on sinusitis symptoms, and with the Mental health and Bodily pain scales of the SF-36. There was little association of the CSS scale scores with sinus CT findings. The patients with chronic sinusitis had worse scores on all three CSS scales than a healthy reference population (n = 42) (p Conclusion The Norwegian version of the CSS had acceptable test-retest reliability, but lower internal consistency reliability than the accepted standard criteria. The results support the construct validity of the measure, and the sinusitis symptoms subscale and the total scale were responsive to change. This supports the use of the questionnaire in interventions for chronic sinusitis, but points to problems with the internal consistency reliability.

  3. Reliability of power connections

    BRAUNOVIC Milenko


    Despite the use of various preventive maintenance measures, there are still a number of problem areas that can adversely affect system reliability. Also, economic constraints have pushed the designs of power connections closer to the limits allowed by the existing standards. The major parameters influencing the reliability and life of Al-Al and Al-Cu connections are identified. The effectiveness of various palliative measures is determined, and the misconceptions about their effectiveness are dealt with in detail.

  4. Maintaining consistency in distributed systems

    Birman, Kenneth P.


    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
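
    The first of the three styles described above, mutual exclusion within a single program, can be shown in a few lines. A minimal sketch (not from the paper) in which a lock plays the role of the semaphore or monitor guarding a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Each increment is a read-modify-write; the lock makes it atomic."""
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — no lost updates while the lock is held
```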

  5. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
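
    The series-system character of the example (the heat exchanger fails if any discipline's limit state is violated) can be sketched with plain Monte Carlo in place of NESSUS's fast probability integration. All limit states and distributions below are invented for illustration:

```python
import random

random.seed(1)  # reproducible toy run

def simulate(n=200_000):
    """Estimate system failure probability: a violation in ANY discipline
    fails the system (series system of structural, thermal, fluid modes)."""
    failures = 0
    for _ in range(n):
        stress = random.gauss(300.0, 30.0)   # structural demand (MPa)
        temp = random.gauss(80.0, 8.0)       # tube-wall temperature (C)
        drop = random.gauss(2.0, 0.3)        # pressure drop (bar)
        if stress > 400.0 or temp > 100.0 or drop > 3.0:
            failures += 1
    return failures / n

pf = simulate()
print(pf)  # roughly the sum of the three small per-mode probabilities
```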

  6. Sensitivity Analysis of Component Reliability



    In a system, every component occupies a unique position and has its own failure characteristics, so a change in one component's reliability does not affect system reliability equally. Component reliability sensitivity measures the effect on system reliability of a change in a component's reliability. In this paper, the definition and relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. These results help in analysing and improving system reliability.
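    A standard formalization of this idea (not given in the record) is the Birnbaum importance measure: the sensitivity of system reliability to component i equals the system reliability with component i working minus the system reliability with it failed. A minimal sketch, using a hypothetical series-parallel structure:

```python
def system_reliability(r):
    # Hypothetical structure: component 0 in series with
    # a parallel pair of components 1 and 2.
    return r[0] * (1.0 - (1.0 - r[1]) * (1.0 - r[2]))

def birnbaum(rel_fn, r, i):
    """Sensitivity of system reliability to component i:
    I_B(i) = R_sys(r_i = 1) - R_sys(r_i = 0)."""
    hi = list(r); hi[i] = 1.0
    lo = list(r); lo[i] = 0.0
    return rel_fn(hi) - rel_fn(lo)

r = [0.9, 0.8, 0.7]
for i in range(3):
    print(i, round(birnbaum(system_reliability, r, i), 3))
```

    The series component dominates (sensitivity 0.94), while the redundant pair matters far less (0.27 and 0.18), illustrating why equal reliability improvements have unequal system-level effects.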

  7. Dimensionality and reliability of the self-care of heart failure index scales: further evidence from confirmatory factor analysis.

    Barbaranelli, Claudio; Lee, Christopher S; Vellone, Ercole; Riegel, Barbara


    The Self-Care of Heart Failure Index (SCHFI) is used widely, but issues with reliability have been evident. The Cronbach alpha coefficient is usually used to assess reliability, but this approach assumes a unidimensional scale. The purpose of this article is to address the dimensionality and internal consistency reliability of the SCHFI. This was a secondary analysis of data from 629 adults with heart failure enrolled in three separate studies conducted in the northeastern and northwestern United States. Following testing for scale dimensionality using confirmatory factor analysis, reliability was tested using coefficient alpha and alternative options. Confirmatory factor analysis demonstrated that: (a) the Self-Care Maintenance Scale has a multidimensional four-factor structure; (b) the Self-Care Management Scale has a two-factor structure, but the primary factors loaded on a common higher-order factor; and (c) the Self-Care Confidence Scale is unidimensional. Reliability estimates for the three scales, obtained with methods compatible with each scale's dimensionality, were adequate or high. The results of the analysis demonstrate that issues of dimensionality and reliability cannot be separated. Appropriate estimates of reliability that are consistent with the dimensionality of the scale must be used. In the case of the SCHFI, coefficient alpha should not be used to assess reliability of the self-care maintenance and the self-care management scales, due to their multidimensionality. When performing psychometric evaluations, we recommend testing dimensionality before assessing reliability, as well as using multiple indices of reliability, such as model-based internal consistency, composite reliability, and omega and maximal reliability coefficients. © 2014 Wiley Periodicals, Inc.
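    For reference, coefficient alpha as discussed here is straightforward to compute from item-level scores; the data below are invented for illustration.

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Note: a high alpha does not establish unidimensionality."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Three items answered by five respondents (made-up scores)
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 3))  # 0.871
```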

  8. Is quantitative electromyography reliable?

    Cecere, F; Ruf, S; Pancherz, H


    The reliability of quantitative electromyography (EMG) of the masticatory muscles was investigated in 14 subjects without any signs or symptoms of temporomandibular disorders. Integrated EMG activity from the anterior temporalis and masseter muscles was recorded bilaterally by means of bipolar surface electrodes during chewing and biting activities. In the first experiment, the influence of electrode relocation was investigated. No influence of electrode relocation on the recorded EMG signal could be detected. In a second experiment, three sessions of EMG recordings during five different chewing and biting activities were performed in the morning (I); 1 hour later without intermediate removal of the electrodes (II); and in the afternoon, using new electrodes (III). The method errors for different time intervals (I-II and I-III errors) for each muscle and each function were calculated. Depending on the time interval between the EMG recordings, the muscles considered, and the function performed, the individual errors ranged from 5% to 63%. The method error increased significantly with the time interval between recordings, and the error for the masseter (mean 27.2%) was higher than for the temporalis (mean 20.0%). The largest function error was found during maximal biting in intercuspal position (mean 23.1%). Based on the findings, quantitative electromyography of the masticatory muscles seems to have limited value in diagnostics and in the evaluation of individual treatment results.

  9. Optimal Reliability-Based Code Calibration

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.


    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered, and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
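    The calibration idea, choosing the partial safety factor that minimizes deviation of class-member reliabilities from a target level, can be sketched as follows. The log-linear beta(gamma) curves, weights, and target are all invented for illustration and are not the authors' formulation.

```python
import math

def calibrate(structures, beta_target, candidates):
    """structures: list of (weight, beta_of_gamma) pairs for a class.
    Pick the safety factor minimizing the weighted squared deviation
    of each structure's reliability index from the target."""
    def penalty(gamma):
        return sum(w * (beta(gamma) - beta_target) ** 2
                   for w, beta in structures)
    return min(candidates, key=penalty)

# Hypothetical beta(gamma) curves for three structures in the class
structures = [
    (0.5, lambda g: 2.0 + 3.0 * math.log(g)),
    (0.3, lambda g: 1.5 + 3.5 * math.log(g)),
    (0.2, lambda g: 2.2 + 2.8 * math.log(g)),
]
candidates = [1.0 + 0.01 * i for i in range(100)]  # gamma grid in [1.00, 1.99]
best = calibrate(structures, 3.8, candidates)
print(round(best, 2))  # 1.85
```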

  10. Reliability Coefficients from Two Administrations of the Willoughby Personality Schedule

    Hay, Nancy M.; Stewart, Norman R.


    This study determined internal consistency and test-retest reliability coefficients for the Willoughby Personality Schedule, currently used as an outcome measure in research and in clinical practice. The Hoyt analysis of variance yielded an internal consistency reliability coefficient of .90 on the first testing. The test-retest reliability…

  11. Are paleoclimate model ensembles consistent with the MARGO data synthesis?

    J. C. Hargreaves


    We investigate the consistency of various ensembles of model simulations with the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface (MARGO) sea surface temperature data synthesis. We discover that while two multi-model ensembles, created through the Paleoclimate Model Intercomparison Projects (PMIP and PMIP2), pass our simple tests of reliability, an ensemble based on parameter variation in a single model does not perform so well. We show that accounting for observational uncertainty in the MARGO database is of prime importance for correctly evaluating the ensembles. Perhaps surprisingly, the inclusion of a coupled dynamical ocean (compared to the use of a slab ocean) does not appear to cause a wider spread in the sea surface temperature anomalies, but rather causes systematic changes with more heat transported north in the Atlantic. There is weak evidence that the sea surface temperature data may be more consistent with meridional overturning in the North Atlantic being similar for the Last Glacial Maximum (LGM) and the present day; however, the small size of the PMIP2 ensemble prevents any statistically significant results from being obtained.
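    A simple reliability test of the kind applied to such ensembles (the record does not give its code) is the rank histogram: for a reliable ensemble, an observation is equally likely to fall at any rank among the sorted ensemble members. A self-contained sketch with synthetic data:

```python
import random
from collections import Counter

random.seed(1)

def rank_of(obs, members):
    # Rank of the observation among the ensemble members
    return sum(m < obs for m in members)

# A reliable toy ensemble: members and "observations" drawn
# from the same distribution.
n_members, n_cases = 9, 5000
counts = Counter()
for _ in range(n_cases):
    members = [random.gauss(0, 1) for _ in range(n_members)]
    obs = random.gauss(0, 1)
    counts[rank_of(obs, members)] += 1

# With 9 members there are 10 rank bins; a flat histogram (each bin near
# 0.1) indicates reliability, while a U- or dome-shape indicates an
# under- or over-dispersed ensemble.
histogram = [counts[r] / n_cases for r in range(n_members + 1)]
print([round(h, 2) for h in histogram])
```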

  13. Enhanced data consistency of a portable gait measurement system

    Lin, Hsien-I.; Chiang, Y. P.


    A gait measurement system is a useful tool for rehabilitation applications. Such a system is used to conduct gait experiments in large workplaces such as laboratories where gait measurement equipment can be permanently installed. However, a gait measurement system should be portable if it is to be used in clinics or community centers for aged people. In a portable gait measurement system, the workspace is limited and landmarks on a subject may not be visible to the cameras during experiments. Thus, we propose a virtual-marker function to obtain positions of unseen landmarks for maintaining data consistency. This work develops a portable clinical gait measurement system consisting of lightweight motion capture devices, force plates, and a walkway assembled from plywood boards. We evaluated the portable clinical gait system with 11 normal subjects on three consecutive days in a limited experimental space. Results of gait analysis based on the verification of within-day and between-day coefficients of multiple correlations show that the proposed portable gait system is reliable.

  14. Decentralized Consistent Updates in SDN

    Nguyen, Thanh Dang


    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and blackholes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance, as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which, compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.

  15. The Consistent Vehicle Routing Problem

    Groer, Christopher S. [ORNL]; Golden, Bruce [University of Maryland]; Wasil, Edward [American University]


    In the small package shipping industry (as in other industries), companies try to differentiate themselves by providing high levels of customer service. This can be accomplished in several ways, including online tracking of packages, ensuring on-time delivery, and offering residential pickups. Some companies want their drivers to develop relationships with customers on a route and have the same drivers visit the same customers at roughly the same time on each day that the customers need service. These service requirements, together with traditional constraints on vehicle capacity and route length, define a variant of the classical capacitated vehicle routing problem, which we call the consistent VRP (ConVRP). In this paper, we formulate the problem as a mixed-integer program and develop an algorithm to solve the ConVRP that is based on the record-to-record travel algorithm. We compare the performance of our algorithm to the optimal mixed-integer program solutions for a set of small problems and then apply our algorithm to five simulated data sets with 1,000 customers and a real-world data set with more than 3,700 customers. We provide a technique for generating ConVRP benchmark problems from vehicle routing problem instances given in the literature and provide our solutions to these instances. The solutions produced by our algorithm on all problems do a very good job of meeting customer service objectives with routes that have a low total travel time.
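    The record-to-record travel heuristic mentioned above rests on a one-line acceptance rule; a minimal sketch (the deviation value is illustrative):

```python
def rtr_accept(candidate_cost, record, deviation=0.05):
    """Record-to-record travel acceptance: accept any move whose cost is
    within a fixed fraction ("deviation") of the best cost found so far
    (the "record"), allowing limited uphill moves to escape local optima."""
    return candidate_cost < record * (1.0 + deviation)

record = 100.0
print(rtr_accept(104.0, record))  # True: within 5% of the record
print(rtr_accept(106.0, record))  # False: too far above the record
```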

  16. Time Discounting and Time Consistency

    N. Dimitri; D.J.N. van Eijck (Jan)


    Time discounting is the phenomenon that a desired result in the future is perceived as less valuable than the same result now. Economic theories can take this psychological fact into account in several ways. In the economic literature the most widely used type of additive time discounting is…

  18. Reliability and Validity of Web-Based Portfolio Peer Assessment: A Case Study for a Senior High School's Students Taking Computer Course

    Chang, Chi-Cheng; Tseng, Kuo-Hung; Chou, Pao-Nan; Chen, Yi-Hui


    This study examined the reliability and validity of Web-based portfolio peer assessment. Participants were 72 second-grade students from a senior high school taking a computer course. The results indicated that: 1) there was a lack of consistency across various student raters on a portfolio, or inter-rater reliability; 2) two-thirds of the raters…

  19. Structural Reliability of Wind Turbine Blades

    Dimitrov, Nikolay Krasimirov

    …by developing new models and standards or carrying out tests. The following aspects are covered in detail: ⋅ The probabilistic aspects of ultimate strength of composite laminates are addressed. Laminated plates are considered as a general structural reliability system where each layer in a laminate is a separate system component. Methods for solving the system reliability are discussed in an example problem. ⋅ Probabilistic models for fatigue life of laminates and sandwich core are developed and calibrated against measurement data. A modified, nonlinear S-N relationship is formulated where the static strength… the reliability against several modes of failure in two different structures is assessed. This includes reliability against blade-tower collision, and the reliability against ultimate and fatigue failure of a sandwich panel. The results from the reliability analyses are then used for calibrating partial safety factors…
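    The S-N and fatigue ideas above can be illustrated with a basic, uncalibrated sketch combining a power-law S-N curve with Miner's linear damage rule; the constants and load spectrum are invented for the example and are not the thesis' calibrated model.

```python
def cycles_to_failure(stress_range, K=1e12, m=3.0):
    """Basic S-N curve N(S) = K * S**(-m) (illustrative constants)."""
    return K * stress_range ** (-m)

def miner_damage(load_spectrum):
    """Miner's rule: damage = sum of n_i / N_i over the load spectrum,
    where n_i cycles are applied at stress range S_i.
    Failure is predicted when the damage sum reaches 1."""
    return sum(n / cycles_to_failure(s) for s, n in load_spectrum)

# (stress range, applied cycles) pairs for a made-up blade load spectrum
spectrum = [(50.0, 2e6), (80.0, 5e5), (120.0, 1e5)]
print(round(miner_damage(spectrum), 3))  # 0.679: below 1, no predicted failure
```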

  20. Creating Highly Reliable Accountable Care Organizations.

    Vogus, Timothy J; Singer, Sara J


    Accountable Care Organizations' (ACOs) pursuit of the triple aim of higher quality, lower cost, and improved population health has met with mixed results. To improve the design and implementation of ACOs we look to organizations that manage similarly complex, dynamic, and tightly coupled conditions while sustaining exceptional performance, known as high-reliability organizations. We describe the key processes through which organizations achieve reliability, the leadership and organizational practices that enable it, and the role that professionals can play when charged with enacting it. Specifically, we present concrete practices and processes from health care organizations pursuing high reliability and from early ACOs to illustrate how the triple aim may be met by cultivating mindful organizing, practicing reliability-enhancing leadership, and identifying and supporting reliability professionals. We conclude by proposing a set of research questions to advance the study of ACOs and high-reliability research.

  1. Workplace Bullying Scale: The Study of Validity and Reliability

    Nizamettin Doğar


    The aim of this research is to adapt the Workplace Bullying Scale (Tınaz, Gök & Karatuna, 2013) to the Albanian language and to examine its psychometric properties. The research was conducted on 386 persons from different sectors of Albania. Results of exploratory and confirmatory factor analysis demonstrated that the Albanian scale yielded 2 factors, different from the original form, because of cultural differences. Internal consistency coefficients are .890 and .801, and split-half test reliability coefficients are .864 and .808. Confirmatory factor analysis loadings ranged from .40 to .73. Corrected item-total correlations ranged from .339 to .672, and according to t-test results the differences between each item's means for the upper 27% and lower 27% groups were significant. Thus the Workplace Bullying Scale can be used as a valid and reliable instrument in the social sciences in Albania.
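    Split-half reliability of the kind reported above can be computed as follows; the item scores are made up, and the Spearman-Brown step corrects the half-test correlation up to full test length.

```python
from statistics import mean, pstdev

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def split_half_reliability(items):
    """items: one list of scores per item. Split items into odd/even
    halves, correlate the half scores, then apply the Spearman-Brown
    correction r_sb = 2r / (1 + r)."""
    half1 = [sum(vals) for vals in zip(*items[0::2])]
    half2 = [sum(vals) for vals in zip(*items[1::2])]
    r = pearson(half1, half2)
    return 2 * r / (1 + r)

# Four items answered by five respondents (made-up scores)
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
    [4, 3, 5, 2, 3],
]
print(round(split_half_reliability(items), 3))  # 0.875
```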

  2. The Verbal Behavior Assessment Scale (VerBAS): Construct Validity, Reliability, and Internal Consistency.

    Duker, Pieter C.


    To assess the psychometric characteristics of the Verbal Behavior Assessment Scale, the 15-item questionnaire was administered to pairs of caregivers of 115 individuals with developmental disabilities. Exploratory factor analysis involving 11 more participants revealed evidence concerning the distinction of three different communicative functions…

  3. Measuring attitude towards Buddhism and Sikhism : internal consistency reliability for two new instruments

    Thanissaro, Phra Nicholas


    This paper describes and discusses the development and empirical properties of two new 24-item scales – one measuring attitude toward Buddhism and the other measuring attitude toward Sikhism. The scale is designed to facilitate inter-faith comparisons within the psychology of religion alongside the well-established Francis Scale of Attitude toward Christianity. Data were obtained from a multi-religious sample of 369 school pupils aged between 13 and 15 in London. Application of...


    Tamargazin, O. A.; National Aviation University; Vlasenko, P. O.; National Aviation University


    Airline's operational structure for Reliability program implementation — engineering division, reliability division, reliability control division, aircraft maintenance division, quality assurance division — was considered. The airline's Reliability program structure is shown. Use of the Reliability program for reducing aircraft maintenance costs is proposed.

  5. Photovoltaic module reliability workshop

    Mrig, L. (ed.)


    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  6. Evaluation of Soft Tissue Landmark Reliability between Manual and Computerized Plotting Methods.

    Kasinathan, Geetha; Kommi, Pradeep B; Kumar, Senthil M; Yashwant, Aniruddh; Arani, Nandakumar; Sabapathy, Senkutvan


    The aim of the study is to evaluate the reliability of soft tissue landmark identification between manual and digital plottings in both X and Y axes. A total of 50 pretreatment lateral cephalograms were selected from patients who reported for orthodontic treatment. The digital images of each cephalogram were imported directly into Dolphin software for onscreen digitalization, while for manual tracing, images were printed using a compatible X-ray printer. After the images were standardized, 10 commonly used soft tissue landmarks were plotted on each cephalogram by six different professional observers, and the values obtained were plotted in X and Y axes. Intraclass correlation coefficient was used to determine the intrarater reliability for repeated landmark plotting obtained by both the methods. The evaluation for reliability of soft tissue landmark plottings in both manual and digital methods after subjecting it to interclass correlation showed a good reliability, which was nearing complete homogeneity in both X and Y axes, except for the Y axis of throat point in manual plotting, which showed moderate reliability as a cephalometric variable. Intraclass correlation of soft tissue nasion had a moderate reliability along the X axis. Soft tissue pogonion showed moderate reliability in the Y axis. Throat point exhibited moderate reliability in the X axis. The interclass correlation in X and Y axes shows high reliability in both hard tissue and soft tissue except for throat point in the Y axis, when plotted manually. The intraclass correlation is more consistent and highly reliable for soft tissue landmarks, and the hard tissue landmark identification is also consistent. The results obtained for manual and digital methods were almost similar, but digital landmark plotting has an added advantage in archiving, retrieval, and transmission, and can be enhanced during plotting of lateral cephalograms. Hence, the digital method of landmark plotting could be preferred for both daily use and…
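    Intraclass correlation of the kind used here can be sketched with the one-way random-effects formula ICC(1,1); the ratings below are invented (two raters placing one landmark's x coordinate on four images), and clinical studies typically use a statistics package rather than hand-rolled code.

```python
from statistics import mean

def icc_1_1(ratings):
    """ratings: one list of k ratings per target (landmark/image).
    One-way random-effects ICC(1,1) = (MSB - MSW) / (MSB + (k-1)*MSW),
    where MSB/MSW are the between- and within-target mean squares."""
    n = len(ratings)          # targets
    k = len(ratings[0])       # raters per target
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = (sum((v - m) ** 2 for row, m in zip(ratings, row_means) for v in row)
           / (n * (k - 1)))
    return (msb - msw) / (msb + (k - 1) * msw)

# x coordinates (mm) of one landmark from two raters on four cephalograms
ratings = [[9, 10], [6, 5], [8, 8], [2, 3]]
print(round(icc_1_1(ratings), 3))  # 0.961: high inter-rater agreement
```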

  7. Photovoltaic performance and reliability workshop

    Kroposki, B


    These proceedings are the compilation of papers presented at the ninth PV Performance and Reliability Workshop, held at the Sheraton Denver West Hotel on September 4--6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations given are included in these proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; activities needed to separate variables by testing individual components of PV systems (e.g., cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.

  8. Differential effects of orthographic and phonological consistency in cortex for children with and without reading impairment

    Bolger, Donald J.; Minas, Jennifer; Burman, Douglas D.; Booth, James R.


    One of the central challenges in mastering English is becoming sensitive to consistency from spelling to sound (i.e. phonological consistency) and from sound to spelling (i.e. orthographic consistency). Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of consistency in 9-15-year-old Normal and Impaired Readers during a rhyming task in the visual modality. In line with our previous study, for Normal Readers, lower phonological and orthographic consistency were associated with greater activation in several regions including bilateral inferior/middle frontal gyri, bilateral anterior cingulate cortex as well as left fusiform gyrus. Impaired Readers activated only bilateral anterior cingulate cortex in response to decreasing consistency. Group comparisons revealed that, relative to Impaired Readers, Normal Readers exhibited a larger response in this network for lower phonological consistency whereas orthographic consistency differences were limited. Lastly, brain-behavior correlations revealed a significant relationship between skill (i.e. Phonological Awareness and non-word decoding) and cortical consistency effects for Impaired Readers in left inferior/middle frontal gyri and left fusiform gyrus. Impaired Readers with higher skill showed greater activation for higher consistency. This relationship was reliably different from that of Normal Readers in which higher skill was associated with greater activation for lower consistency. According to single-route or connectionist models, these results suggest that Impaired Readers with higher skill devote neural resources to representing the mapping between orthography and phonology for higher consistency words, and therefore do not robustly activate this network for lower consistency words. PMID:18725239

  9. Reliability Centered Maintenance - Methodologies

    Kammerer, Catherine C.


    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  10. Consistent ranking of volatility models

    Hansen, Peter Reinhard; Lunde, Asger


    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can result in an inferior model being chosen as "best" with a probability that converges to one as the sample size increases. We document the practical relevance of this problem in an empirical application and by simulation experiments. Our results provide an additional argument for using the realized variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable.

  11. Gearbox Reliability Collaborative Update (Presentation)

    Sheng, S.; Keller, J.; Glinsky, C.


    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  12. The Consistency of Majority Rule

    D. Porello


    We propose an analysis of the impossibility results in judgement aggregation by means of a proof-theoretical approach to collective rationality. In particular, we use linear logic in order to analyse the group inconsistencies and to show possible ways to circumvent them.

  13. Consistent feeding positions of great tit parents

    Lessells, C.M.; Poelman, E.H.; Mateman, A.C.; Cassey, P.


    When parent birds arrive at the nest to provision their young, their position on the nest rim may influence which chick or chicks are fed. As a result, the consistency of feeding positions of the individual parents, and the difference in position between the parents, may affect how equitably food is…

  14. System Reliability Analysis: Foundations.


    Richard E. …

    Performance formulas for systems subject to preventive maintenance are given. Network reliability is illustrated by the probability h(p) = P{s can communicate with the terminal t} when each link operates independently with probability p. For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash…

  15. Reliability of Arctic offshore installations

    Bercha, F.G. (Bercha Group, Calgary, AB, Canada); Gudmestad, O.T. (Stavanger Univ., Stavanger, Norway; Statoil, Stavanger, Norway; Norwegian Univ. of Technology, Stavanger, Norway); Foschi, R. (Univ. of British Columbia, Dept. of Civil Engineering, Vancouver, BC, Canada); Sliggers, F. (Shell International Exploration and Production, Rijswijk, Netherlands); Nikitina, N. (VNIIG, St. Petersburg, Russian Federation); Nevel, D.


    Life-threatening and fatal failures of offshore structures can be attributed to a broad range of causes such as fires and explosions, buoyancy losses, and structural overloads. This paper addressed the different severities of failure types, categorized as catastrophic failure, local failure or serviceability failure. Offshore tragedies were also highlighted, namely the failures of P-36, the Ocean Ranger, the Piper Alpha, and the Alexander Kielland, which all resulted in losses of human life. P-36 and the Ocean Ranger both failed ultimately due to a loss of buoyancy. The Piper Alpha was destroyed by a natural gas fire, while the Alexander Kielland failed due to fatigue-induced structural failure. The mode of failure was described as being the specific way in which a failure occurs from a given cause. Current reliability measures in the context of offshore installations only consider a limited number of causes, such as environmental loads. However, it was emphasized that a realistic value of the catastrophic failure probability should consider all credible causes of failure. This paper presented a general method for evaluating all credible causes of failure of an installation. The approach to calculating integrated reliability involves the use of network methods such as fault trees to combine the probabilities of all factors that can cause a catastrophic failure, as well as those which can cause a local failure with the potential to escalate to a catastrophic failure. This paper also proposed a protocol for setting credible reliability targets such as the consideration of life safety targets and escape, evacuation, and rescue (EER) success probabilities. A set of realistic reliability targets for both catastrophic and local failures for representative safety and consequence categories associated with offshore installations was also presented. The reliability targets were expressed as maximum average annual failure probabilities. The method for converting these annual…
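    The fault-tree combination of failure causes described above can be sketched as follows, assuming independent basic events; all probabilities are invented for illustration.

```python
def p_or(*ps):
    """OR-gate: the top event occurs if any input event occurs.
    For independent events, P = 1 - product(1 - p_i)."""
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def p_and(*ps):
    """AND-gate: the top event occurs only if all input events occur.
    For independent events, P = product(p_i)."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

# Hypothetical annual probabilities for an installation
fire         = 1e-4
buoyancy     = p_and(2e-3, 5e-2)   # ballast fault AND failed intervention
structural   = 3e-4
catastrophic = p_or(fire, buoyancy, structural)
print(f"{catastrophic:.2e}")
```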


    Adrian Stere PARIS


    Mechanical reliability uses many statistical tools to find the factors of influence and their levels in the optimization of parameters on the basis of experimental data. Design of Experiments (DOE) techniques enable designers to determine simultaneously the individual and interactive effects of many factors that could affect the output results in any design. The state of the art in the domain implies extended use of software and a basic mathematical knowledge, mainly applying ANOVA and the regression analysis of experimental data.

  17. Mission Reliability Estimation for Repairable Robot Teams

    Stephen B. Stancliff


    Many of the most promising applications for mobile robots require very high reliability. The current generation of mobile robots is, for the most part, highly unreliable. The few mobile robots that currently demonstrate high reliability achieve this reliability at a high financial cost. In order for mobile robots to be more widely used, it will be necessary to find ways to provide high mission reliability at lower cost. Comparing alternative design paradigms in a principled way requires methods for comparing the reliability of different robot and robot team configurations. In this paper, we present the first principled quantitative method for performing mission reliability estimation for mobile robot teams. We also apply this method to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Using conservative estimates of the cost-reliability relationship, our results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares.
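    The cost-reliability trade-off with spares can be illustrated with a deliberately simple model (independent, identical units; this is not the paper's estimation method):

```python
def mission_reliability(p_component, spares):
    """Mission succeeds if at least one of the (1 + spares) identical
    units survives; units are assumed to fail independently."""
    q = 1.0 - p_component
    return 1.0 - q ** (1 + spares)

# Three cheap 90%-reliable units vs. one expensive 99%-reliable unit
print(round(mission_reliability(0.90, spares=2), 4))  # 0.999
print(round(mission_reliability(0.99, spares=0), 4))  # 0.99
```

    Under these (strong) independence assumptions, two spares lift a cheap unit above the expensive single unit, which is the qualitative effect the paper quantifies with real cost data.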

  18. Quantifying the consistency of scientific databases

    Šubelj, Lovro; Boshkoska, Biljana Mileva; Kastrin, Andrej; Levnajić, Zoran


    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time, we are able to scientifically study science itself. This is enabled by the massive amounts of data on scientific publications that are increasingly becoming available. The data are contained in several databases such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We find that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in the mutual consistency of different databases, which we interpret as recipes for future bibliometric studies.

  19. Personalized recommendation based on unbiased consistence

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao


    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from objects that have been collected to those that should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases a user's interests are stable, and thus the bidirectional mass diffusion abilities, whether originating from objects that have been collected or from those that should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
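    For background, the unidirectional mass-diffusion baseline (ProbS-style) that the letter argues against can be sketched as follows; the bidirectional, consistence-based variant itself is not reproduced here, and the toy data are invented:

```python
# Hedged sketch of two-step mass diffusion (ProbS) on a user-item
# bipartite network: each item a user collected sends one unit of
# resource to the users who collected it, and those users redistribute
# their resource equally over their own items. Toy data only.

def probs_scores(collected, target_user):
    """Score uncollected items for target_user by two-step mass diffusion."""
    # item -> set of users who collected it
    item_users = {}
    for user, items in collected.items():
        for it in items:
            item_users.setdefault(it, set()).add(user)
    # Step 1: items of target_user spread resource to their collectors
    user_resource = {}
    for it in collected[target_user]:
        share = 1.0 / len(item_users[it])
        for u in item_users[it]:
            user_resource[u] = user_resource.get(u, 0.0) + share
    # Step 2: each user redistributes resource equally over their items
    scores = {}
    for u, r in user_resource.items():
        share = r / len(collected[u])
        for it in collected[u]:
            scores[it] = scores.get(it, 0.0) + share
    # recommend only items the target user has not yet collected
    return {it: s for it, s in scores.items() if it not in collected[target_user]}

collected = {"alice": {"a", "b"}, "bob": {"b", "c"}, "carol": {"c", "d"}}
print(probs_scores(collected, "alice"))
```

    In this toy network, item "c" receives resource through the shared item "b", while "d" gets none because no diffusion path reaches it in two steps.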

  20. Reliability based structural design

    Vrouwenvelder, A.C.W.M.


    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A ke

  2. The value of reliability

    Fosgerau, Mogens; Karlström, Anders


    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...

  3. Parametric Mass Reliability Study

    Holt, James P.


    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  4. Avionics Design for Reliability


    Consultant, P.O. Box 181, Hazelwood, Missouri 63042, U.S.A. … crucial, all the more so since in this case the reliability selection procedure is rather inefficient. The distribution of failures follows

  5. Wind Energy - How Reliable.


    The reliability of a wind energy system depends on the size of the propeller and the size of the back-up energy storage. Design of the optimum system...speed incidents which generate a significant part of the wind energy. A nomogram is presented, based on some continuous wind speed measurements

  6. The reliability horizon

    Visser, M


    The "reliability horizon" for semi-classical quantum gravity quantifies the extent to which we should trust semi-classical quantum gravity, and gives a handle on just where the "Planck regime" resides. The key obstruction to pushing semi-classical quantum gravity into the Planck regime is often the existence of large metric fluctuations, rather than a large back-reaction.

  7. Reliability of semiology description.

    Heo, Jae-Hyeok; Kim, Dong Wook; Lee, Seo-Young; Cho, Jinwhan; Lee, Sang-Kun; Nam, Hyunwoo


    Seizure semiology is important for classifying patients' epilepsy. Physicians usually get most of the seizure information from observers though there have been few reports on the reliability of the observers' description. This study aims at determining the reliability of observers' description of the semiology. We included 92 patients who had their habitual seizures recorded during video-EEG monitoring. We compared the semiology described by the observers with that recorded on the videotape, and reviewed which characteristics of the observers affected the reliability of their reported data. The classification of seizures and the individual components of the semiology based only on the observer-description was somewhat discordant compared with the findings from the videotape (correct classification, 85%). The descriptions of some ictal behaviors such as oroalimentary automatism, tonic/dystonic limb posturing, and head versions were relatively accurate, but those of motionless staring and hand automatism were less accurate. The specified directions by the observers were relatively correct. The accuracy of the description was related to the educational level of the observers. Much of the information described by well-educated observers is reliable. However, every physician should keep in mind the limitations of this information and use this information cautiously.

  8. High reliability organizations

    Gallis, R.; Zwetsloot, G.I.J.M.


    High Reliability Organizations (HROs) are organizations that constantly face serious and complex (safety) risks yet succeed in realising an excellent safety performance. In such situations acceptable levels of safety cannot be achieved by traditional safety management only. HROs manage safety

  9. Entropy-based consistent model driven architecture

    Niepostyn, Stanisław Jerzy


    A description of software architecture is a plan of the IT system construction, therefore any architecture gaps affect the overall success of an entire project. The definitions mostly describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent. Software architecture completeness is also often described in an ambiguous way. As a result most methods of IT systems building comprise many gaps and ambiguities, thus presenting obstacles for software building automation. In this article the consistency and completeness of software architecture are mathematically defined based on calculation of entropy of the architecture description. Following this approach, in this paper we also propose our method of automatic verification of consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects could assess the readiness of undergoing modelling work for the start of IT system building. It even allows them to assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of such an approach is that it facilitates the preparation of complete and consistent software architecture more effectively as well as it enables assessing and monitoring of the ongoing modelling development status. We demonstrate this with a few industry examples of IT system designs.

  10. State space consistency and differentiability

    Serakos, Demetrios


    By investigating the properties of the natural state, this book presents an analysis of input-output systems with regard to the mathematical concept of state. The state of a system condenses the effects of past inputs to the system in a useful manner. This monograph emphasizes two main properties of the natural state; the first has to do with the possibility of determining the input-output system from its natural state set and the second deals with differentiability properties involving the natural state inherited from the input-output system, including differentiability of the natural state and natural state trajectories. The results presented in this title aid in modeling physical systems since system identification from a state set holds in most models. Researchers and engineers working in electrical, aerospace, mechanical, and chemical fields along with applied mathematicians working in systems or differential equations will find this title useful due to its rigorous mathematics.  

  11. A consistent flow of entropy

    Ansari, Mohammad H


    A common approach to evaluating entropy in quantum systems is to solve a master (Bloch) equation for the density matrix and substitute it into the definition of entropy. However, this method has recently been understood to miss many energy correlators. The new correlators make the entropy evaluation different from the substitution method described above. The reason for this complexity lies in the nonlinearity of entropy. In this paper we present a pedagogical approach to evaluating the new correlators and explain their contribution to the analysis. We show that the inherent nonlinearity in entropy makes the second law of thermodynamics carry new terms associated with the new correlators. Our results provide important new insights into quantum black holes: our formalism reveals that the notion of degeneracy of states at the event horizon leads to an indispensable deviation from black hole entropy at leading order.

  12. Covariate-free and Covariate-dependent Reliability.

    Bentler, Peter M


    Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.

  13. The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity

    Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.


    This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…
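    For reference, the internal-consistency coefficient (Cronbach's alpha) reported in this record, and discussed in the responsiveness study above, can be computed as follows; the data are invented:

```python
# Hedged sketch: Cronbach's alpha computed from its standard definition,
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
# using population variances. The respondent data below are invented.

def pvar(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """rows: one list of k item scores per respondent."""
    k = len(rows[0])
    item_vars = [pvar([row[j] for row in rows]) for j in range(k)]
    total_var = pvar([sum(row) for row in rows])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

rows = [
    [2, 3, 3],
    [4, 4, 5],
    [1, 2, 2],
    [3, 3, 4],
]
print(round(cronbach_alpha(rows), 3))
```

    High alpha indicates that items covary strongly, i.e. the scale is internally consistent; as the responsiveness study above notes, this says little about how responsive the instrument is.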

  14. A study on reliability of power customer in distribution network

    Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin


    The existing power supply reliability index system is oriented to the power system without considering the actual electricity availability on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper thus makes a systematic study of the reliability of power customers. By comparison with power supply reliability, reliability of power customer is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe reliability of power customer in terms of continuity and availability. In order to comprehensively and quantitatively evaluate reliability of power customer in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has proved that the reliability index system and evaluation method for power customers are reasonable and effective.
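    One plausible reading of the entropy-based weighting step is the classical entropy weight method, sketched below; the paper's specific improvement and its punishment weighting principle are not reproduced, and the data are invented:

```python
# Hedged sketch of the classical entropy weight method: indicators whose
# values are more dispersed across customers carry more information and
# therefore receive larger weights. Toy data only.

import math

def entropy_weights(matrix):
    """matrix[i][j]: value of indicator j for customer i (all positive)."""
    n, m = len(matrix), len(matrix[0])
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        probs = [x / s for x in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n)
        divergences.append(1.0 - e)      # higher divergence -> more informative
    total = sum(divergences)
    return [d / total for d in divergences]

scores = [
    [0.98, 120.0],   # customer 1: availability, annual outage minutes
    [0.95, 300.0],
    [0.99, 60.0],
]
print([round(w, 3) for w in entropy_weights(scores)])
```

    Here the outage-minutes column varies much more across customers than availability, so it receives nearly all of the weight.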

  15. Reliability-Centric High-Level Synthesis

    Tosun, S; Arvas, E; Kandemir, M; Xie, Yuan


    Importance of addressing soft errors in both safety critical applications and commercial consumer products is increasing, mainly due to ever shrinking geometries, higher-density circuits, and employment of power-saving techniques such as voltage scaling and component shut-down. As a result, it is becoming necessary to treat reliability as a first-class citizen in system design. In particular, reliability decisions taken early in system design can have significant benefits in terms of design quality. Motivated by this observation, this paper presents a reliability-centric high-level synthesis approach that addresses the soft error problem. The proposed approach tries to maximize reliability of the design while observing the bounds on area and performance, and makes use of our reliability characterization of hardware components such as adders and multipliers. We implemented the proposed approach, performed experiments with several designs, and compared the results with those obtained by a prior proposal.

  16. Consistency of detrended fluctuation analysis

    Løvsletten, O.


    The scaling function F(s) in detrended fluctuation analysis (DFA) scales as F(s) ~ s^H for stochastic processes with Hurst exponent H. This scaling law is proven for stationary stochastic processes with 0 < H < 1, and the fluctuation function for signals with a summable (short-range, rather than power-law) autocorrelation function (ACF) scales as ~ s^{1/2}. It is also demonstrated that the fluctuation function in DFA is equal in expectation to (i) a weighted sum of the ACF and (ii) a weighted sum of the second-order structure function. These results enable us to compute the exact finite-size bias for signals that are scaling and to employ DFA in a meaningful sense for signals that do not exhibit power-law statistics. The usefulness is illustrated by examples where it is demonstrated that a previously suggested modified DFA will increase the bias for signals with Hurst exponents 1
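    To make the fluctuation function concrete, a toy order-1 DFA implementation is sketched below; this is a generic illustration, not the author's code:

```python
# Hedged sketch of order-1 DFA: build the profile (cumulative sum of the
# mean-subtracted signal), split it into windows of size s, subtract a
# least-squares line in each window, and return the RMS residual F(s).

def dfa_fluctuation(signal, s):
    """Root-mean-square fluctuation F(s) for window size s (order-1 detrending)."""
    n = len(signal)
    mean = sum(signal) / n
    # profile: cumulative sum of the mean-subtracted signal
    profile, acc = [], 0.0
    for x in signal:
        acc += x - mean
        profile.append(acc)
    sq_sum, count = 0.0, 0
    for start in range(0, n - s + 1, s):
        ys = profile[start:start + s]
        xs = list(range(s))
        # ordinary least-squares line fit within the window
        mx, my = sum(xs) / s, sum(ys) / s
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        b = sxy / sxx
        a = my - b * mx
        for x, y in zip(xs, ys):
            sq_sum += (y - (a + b * x)) ** 2
            count += 1
    return (sq_sum / count) ** 0.5

# A constant signal has an identically zero profile, so F(s) is exactly zero
print(dfa_fluctuation([1.0] * 64, 8))
```

    Estimating H would repeat this for several window sizes s and fit the slope of log F(s) against log s.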

  17. Component Reliability Assessment of Offshore Jacket Platforms

    V.J. Kurian


    The oil and gas industry is one of the most important industries contributing to the Malaysian economy. To extract hydrocarbons, various types of production platforms have been developed. The fixed jacket platform is the earliest type of production structure, widely installed in Malaysia's shallow and intermediate waters. To date, more than 60% of these jacket platforms have operated exceeding their initial design life, thus making re-evaluation and reassessment necessary for these platforms to continue in service. In normal engineering practice, the system reliability of a structure is evaluated as its safety parameter. This method is, however, complicated and time consuming. Assessing a component's reliability can be an alternative approach to provide assurance about a structure's condition at an early stage. Design codes such as the Working Stress Design (WSD) and the Load and Resistance Factor Design (LRFD) are well established for component-level assessment. In reliability analysis, a failure function, which consists of strength and load, is used to define the failure event. If the load acting exceeds the capacity of a structure, the structure will fail. Calculation of the stress utilization ratio as given in the design codes is able to predict the reliability of a member and to estimate the extent to which a member is being utilised. The basic idea of this ratio is that if it is more than one, the member has failed, and vice versa. The stress utilization ratio is a ratio of the applied stress, which is the output reaction of environmental loadings acting on the structural member, to the design strength that comes from the member's geometric and material properties. Adopting this ratio as the failure event, the reliability of each component is found. This study reviews and discusses the reliability of selected members of three Malaysian offshore jacket platforms. First Order Reliability Method (FORM) was used to generate reliability index and
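    The utilization-ratio and reliability-index ideas can be sketched for the special case of a linear limit state g = R - S with independent normal strength R and load S, where FORM reduces to a closed form; all numbers are illustrative, not from the study:

```python
# Hedged sketch: stress utilization ratio as the failure indicator, plus the
# closed-form reliability index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)
# for the linear limit state g = R - S with independent normal R and S
# (a special case of FORM). Numbers are invented.

import math

def utilization_ratio(applied_stress, design_strength):
    return applied_stress / design_strength   # > 1.0 means the member fails

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    return 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))  # Phi(-beta)

beta = reliability_index(mu_r=30.0, sigma_r=3.0, mu_s=20.0, sigma_s=4.0)
print(beta, failure_probability(beta))
```

    For general, nonlinear limit states FORM iterates to find the design point; the closed form above is only the linear-normal special case.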

  18. Reliability in the utility computing era: Towards reliable Fog computing

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.


    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  19. Fatigue reliability for LNG carrier

    Xiao Taoyun; Zhang Qin; Jin Wulei; Xu Shuai


    The procedure of reliability-based fatigue analysis of liquefied natural gas (LNG) carrier of membrane type under wave loads is presented. The stress responses of the hotspots in regular waves with different wave heading angles and wave lengths are evaluated by global ship finite element method (FEM). Based on the probabilistic distribution function of hotspots' short-term stress-range using spectral-based analysis, Weibull distribution is adopted and discussed for fitting the long-term probabilistic distribution of stress-range. Based on linear cumulative damage theory, fatigue damage is characterized by an S-N relationship, and limit state function is established. Structural fatigue damage behavior of several typical hotspots of LNG middle ship section is clarified and reliability analysis is performed. It is believed that the presented results and conclusions can be of use in calibration for practical design and initial fatigue safety evaluation for membrane type LNG carrier.
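    When long-term stress ranges follow a Weibull distribution and the S-N curve is of the form N = A * S**(-m), Miner's cumulative damage has a well-known closed form; the sketch below uses invented parameters, not the paper's values:

```python
# Hedged sketch: closed-form Miner damage for Weibull-distributed stress
# ranges (scale q, shape h) and an S-N curve N = A * S**(-m). Then
# E[S**m] = q**m * Gamma(1 + m/h) and D = (n_total / A) * E[S**m].
# Failure is predicted when cumulative damage D reaches 1. Toy numbers.

import math

def miner_damage(n_total, A, m, q, h):
    expected_sm = q ** m * math.gamma(1.0 + m / h)
    return n_total / A * expected_sm

# e.g. 1e8 wave-induced stress cycles over the design life
D = miner_damage(n_total=1e8, A=1e12, m=3.0, q=20.0, h=1.0)
print(D)
```

    A reliability analysis would then treat q, h, A and the damage limit as random variables and evaluate the probability that D exceeds the critical value.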

  20. Reliability in individual monitoring service.

    Mod Ali, N


    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading the reporting program through a web-based e-SSDL marks a major improvement in Nuclear Malaysia's IMS reliability as a whole. The system is a vital step in providing a user friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus enhancing the status of the radiation protection framework of the country.

  1. Reliability methods in nuclear power plant ageing management

    Simola, K. [VTT Automation, Espoo (Finland). Industrial Automation


    The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant- specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  2. Consistent Predictions of Future Forest Mortality

    McDowell, N. G.


    We examined empirical and model-based estimates of current and future forest mortality of conifers in the northern hemisphere. Consistent water potential thresholds were found that resulted in mortality of our case study species, pinon pine and one-seed juniper. Extending these results with IPCC climate scenarios suggests that most existing trees in this region (SW USA) will be dead by 2050. Further, independent estimates of future mortality for the entire coniferous biome suggest widespread mortality by 2100. The validity, assumptions, and implications of these results are discussed.

  3. Validity and Reliability of pre-internship Objective Structured Clinical Examination in Shiraz Medical School



    Introduction: Objective Structured Clinical Examination (OSCE) is one of the most appropriate methods for the assessment of clinical skills. Validity and reliability assurance is a mandatory factor for any assessment tool. In Shiraz University of Medical Sciences, medical students' clinical competences are evaluated by a pre-internship OSCE. This study is designed to examine the validity and reliability of this exam. Validity is the extent to which the test measures what it intends to measure. Reliability refers to the accuracy of measurement and the consistency of test results. Methods: Content validity was evaluated by expert opinion about blueprinting and station checklists. To determine the construct validity, station score correlations with the total OSCE score and inter-station correlations were calculated. The inter-examiner reliability was assessed by coefficient of correlation. Results: Content validity was established by alignment between the curriculum and the blueprint using expert opinion. Correlations of the station scores with the total OSCE score were positive and statistically significant in all stations except the 16th station (suturing). Inter-examiner reliability coefficients of correlation ranged from 0.33 to 0.99, with an average of 0.83. Conclusions: Our findings support the assumption that the pre-internship OSCE is valid, reliable and suitable to assess students' clinical competence. Validity and reliability studies should be performed for all new assessment tools, particularly in high-stakes assessments.



    Recently, considerable emphasis has been placed on the reliability-based optimization model for water distribution systems. However, considerable computational effort is needed to determine the reliability-based optimal design of large networks, even of mid-sized networks. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. This methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes, and is carried out in GRG2. Because the reliability constraints are removed from the optimization problem, a large number of simulations need not be conducted, so the computation time is greatly decreased. The second procedure is a linear optimal search procedure, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The results are a group of commercial pipe diameters for which the constraints on pressure heads and reliability at the nodes are satisfied. Therefore, the computational burden is significantly decreased, and the reliability-based optimization is of more practical use.

  5. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    Garg, Vikram V


    Background Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  6. Human Reliability Program Workshop

    Landers, John; Rogers, Erin; Gerke, Gretchen


    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  7. Accelerator reliability workshop

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D


    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  8. Improving Power Converter Reliability

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon


    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage side of a half-bridge IGBT separately in every fundamental period. The voltage is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation.

  9. Power electronics reliability.

    Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley


    The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

  10. Influence of Sensor Ingestion Timing on Consistency of Temperature Measures


    Influence of Sensor Ingestion Timing on Consistency of Temperature Measures. Med. Sci. Sports Exerc., Vol. 41, No. 3, pp. 597–602, 2009. Purpose: The validity and the reliability of using intestinal temperature (Tint) via ingestible temperature sensors (ITS) to measure core body temperature have been demonstrated. However

  11. Self-consistency in Capital Markets

    Benbrahim, Hamid


    Capital markets are considered, at least in theory, information engines whereby traders contribute to price formation with their diverse perspectives. Regardless of whether one believes in efficient-market theory or not, actions by individual traders influence the prices of securities, which in turn influence the actions of other traders. This influence is exerted through a number of mechanisms including portfolio balancing, margin maintenance, trend following, and sentiment. As a result, market behaviors emerge from a number of mechanisms ranging from self-consistency due to the wisdom of crowds and self-fulfilling prophecies, to more chaotic behavior resulting from dynamics similar to those of the three-body system, namely the interplay between equities, options, and futures. This talk will address questions and findings regarding the search for self-consistency in capital markets.

  12. ATLAS reliability analysis

    Bartsch, R.R.


    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
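
    For independent components in series, per-shot system reliability is the product of the component reliabilities, and a Weibull shape parameter greater than one captures the increasing failure probability with shot count described above. A toy sketch with invented numbers (not the actual ATLAS figures):

```python
import numpy as np

# Hypothetical per-shot failure probabilities for independent subsystems of a
# capacitor bank (illustrative values only).
p_fail = np.array([0.005, 0.010, 0.002, 0.008])

# For independent components in series, the system works only if every
# component works, so reliabilities multiply.
r_system = np.prod(1.0 - p_fail)
print(r_system)  # ~0.975

def weibull_unreliability(n_shots, eta, beta):
    """F(n) = 1 - exp(-(n/eta)^beta): probability of failing by shot n.

    With shape beta > 1 the failure probability grows faster than linearly
    with accumulated shots, matching the wear-out behavior described above.
    """
    return 1.0 - np.exp(-(n_shots / eta) ** beta)
```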

  13. Reliability of Circumplex Axes

    Micha Strack


    We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—the Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other-report, and metaperception assessments). The general factor accounted for 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificity but otherwise worked effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  14. Consistent Linearized Gravity in Brane Backgrounds

    Aref'eva, I Ya; Mück, W; Viswanathan, K S; Volovich, I V


    A globally consistent treatment of linearized gravity in the Randall-Sundrum background with matter on the brane is formulated. Using a novel gauge, in which the transverse components of the metric are non-vanishing, the brane is kept straight. We analyze the gauge symmetries and identify the physical degrees of freedom of gravity. Our results underline the necessity for non-gravitational confinement of matter to the brane.

  15. Software reliability experiments data analysis and investigation

    Walker, J. Leslie; Caglayan, Alper K.


    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  16. Validity and reliability of the novel thyroid-specific quality of life questionnaire, ThyPRO

    Watt, Torquil; Hegedüs, Laszlo; Groenvold, Mogens;


    Background: Appropriate scale validity and internal consistency reliability have recently been documented for the new thyroid-specific quality of life (QoL) patient-reported outcome (PRO) measure for benign thyroid disorders, the ThyPRO. However, before clinical use, clinical validity and test-retest reliability should be evaluated. Aim: To investigate clinical ('known-groups') validity and test-retest reliability of the Danish version of the ThyPRO. Methods: For each of the 13 ThyPRO scales, we defined groups expected to have high versus low scores ('known-groups'). The clinical validity (known-groups validity) was evaluated by whether the ThyPRO scales could detect expected differences in a cross-sectional study of 907 thyroid patients. Test-retest reliability was evaluated by intra-class correlations of two responses to the ThyPRO 2 weeks apart in a subsample of 87 stable patients. Results: On all 13...
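
    Test-retest reliability from two sessions is typically quantified with an intra-class correlation. A compact sketch of the common two-way random-effects, absolute-agreement, single-measure form, ICC(2,1), assuming a subjects-by-sessions score matrix (the example data are invented, not the ThyPRO responses):

```python
import numpy as np

def icc_2_1(scores):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    scores: (n_subjects, k_sessions) array of repeated measurements.
    """
    x = np.asarray(scores, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfectly reproduced scores give ICC = 1; noise or drift lowers it.
retest = np.array([[4.0, 4.0], [7.0, 7.0], [10.0, 10.0]])
print(icc_2_1(retest))  # 1.0
```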

  17. A Study on Electrical Reliability Criterion on Through Silicon Via Packaging.

    Lwo, Ben-Je; Tseng, Kuo-Hao; Tseng, Kun-Fu


    Three-dimensional (3D) structures with through silicon via (TSV) technology are emerging as a key issue in the microelectronic packaging industry, and electrical reliability has become one of the main technical subjects for TSV designs. However, the criteria used for TSV reliability tests have not been consistent in the literature, so that the criterion itself has become a technical argument. To this end, this paper first performed several different reliability tests on test packages with TSV chains, then statistically analyzed the experimental data with different failure criteria based on resistance increase, and finally constructed the Weibull failure curves with parameter extraction. After comparing the results, it is suggested that using different criteria may lead to the same failure mode in Weibull analyses, and 65% of failed devices is recommended as a suitable termination point for reliability tests.
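
    Extracting two-parameter Weibull estimates from failure data, as done in such studies, is commonly sketched via median-rank regression on a Weibull probability plot. The cycles-to-failure values below are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Hypothetical cycles-to-failure for TSV daisy chains (illustrative data).
cycles = np.sort(np.array([410.0, 520.0, 610.0, 700.0,
                           780.0, 900.0, 1050.0, 1300.0]))
n = cycles.size

# Bernard's median-rank approximation for the empirical CDF.
ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# The Weibull probability plot is linear:
#   ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
y = np.log(-np.log(1.0 - ranks))
x = np.log(cycles)
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)
print(beta, eta)  # shape parameter and characteristic life (63.2% point)
```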

  18. Reliability of Tethered Swimming Evaluation in Age Group Swimmers

    Amaro, Nuno; Marinho, Daniel A; Batalha, Nuno; Marques, Mário C; Morouço, Pedro


    The aim of the present study was to examine the reliability of tethered swimming in the evaluation of age group swimmers. The sample was composed of 8 male national-level swimmers with at least 4 years of experience in competitive swimming. Each swimmer performed two 30-second maximal intensity tethered swimming tests, on separate days. Individual force-time curves were registered to assess maximum force, mean force and the mean impulse of force. Both consistency and reliability were very strong, with Cronbach’s Alpha values ranging from 0.970 to 0.995. All the applied metrics presented a very high agreement between tests, with the mean impulse of force presenting the highest. These results indicate that tethered swimming can be used to evaluate age group swimmers. Furthermore, a better comprehension of the swimmers’ ability to effectively exert force in the water can be obtained using the impulse of force. PMID:25114742
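
    Cronbach's alpha for a set of repeated trials follows directly from the item-variance identity alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score). A sketch with synthetic force scores (the swimmer data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_observations, k_items) score matrix."""
    x = np.asarray(items, float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic example: two test sessions that agree closely, as in the study.
rng = np.random.default_rng(0)
true_force = rng.normal(100.0, 15.0, size=8)  # 8 swimmers' "true" scores
sessions = np.column_stack([true_force + rng.normal(0, 2.0, 8),
                            true_force + rng.normal(0, 2.0, 8)])
print(cronbach_alpha(sessions))  # high when sessions agree
```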

  20. Reliability Analysis of Sensor Networks

    JIN Yan; YANG Xiao-zong; WANG Ling


    To integrate the capacities of sensing, communication, computing, and actuating, one of the compelling technological advances of recent years has been the appearance of the distributed wireless sensor network (DSN) for information-gathering tasks. In order to save energy, multi-hop routing between the sensor nodes and the sink node is necessary because of limited resources. In addition, unpredictable environmental factors make the sensor nodes unreliable. In this paper, the reliability of routing designed for sensor networks and some dependability issues of DSNs, such as MTTF (mean time to failure) and the probability of connectivity between the sensor nodes and the sink node, are analyzed. Unfortunately, an exact result cannot be obtained for an arbitrary network topology, which is a #P-hard problem, so a reliability analysis of restricted clustering-based topologies is given. The method proposed in this paper shows a constructive idea about how to place energy-constrained sensor nodes in the network efficiently from the perspective of reliability.
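
    Because exact two-terminal network reliability is #P-hard in general, a common workaround is Monte Carlo estimation: repeatedly sample link states and check source-to-sink connectivity. A sketch on a small invented topology (not one of the paper's restricted topologies):

```python
import random

# Hypothetical 4-node topology: edges (u, v, p_up) where p_up is the
# probability that the link is operational. Node 0 is a sensor, node 3 the sink.
edges = [(0, 1, 0.9), (0, 2, 0.8), (1, 2, 0.9), (1, 3, 0.85), (2, 3, 0.95)]

def connected(up_edges, source=0, sink=3):
    """Graph search over the operational links only."""
    frontier, seen = [source], {source}
    while frontier:
        node = frontier.pop()
        for u, v in up_edges:
            for a, b in ((u, v), (v, u)):
                if a == node and b not in seen:
                    seen.add(b)
                    frontier.append(b)
    return sink in seen

def mc_reliability(trials=20000, seed=1):
    """Estimate P(source connected to sink) by sampling link states."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        up = [(u, v) for u, v, p in edges if random.random() < p]
        hits += connected(up)
    return hits / trials

print(mc_reliability())  # estimate of the source-to-sink connection probability
```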

  1. Reliability engineering theory and practice

    Birolini, Alessandro


    This book shows how to build in and assess reliability, availability, maintainability, and safety (RAMS) of components, equipment, and systems. It presents the state of the art of reliability (RAMS) engineering, in theory and practice, and is based on over 30 years of the author's experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The book's structure allows rapid access to practical results. Methods and tools are given in a way that they can be tailored to cover different RAMS requirement levels. Thanks to Appendices A6 - A8 the book is mathematically self-contained, and can be used as a textbook or as a desktop reference with a large number of tables (60), figures (210), and examples / exercises. Downloads (> 10,000 per year since 2013) were the motivation for this final edition, the 13th since 1985, including German editions. Extended and carefully reviewed to improve accuracy, it represents the continuous improvement effort to satisfy reader's needs and confidenc...

  2. Consistency and variability in functional localisers

    Duncan, Keith J.; Pattamadilok, Chotiga; Knierim, Iris; Devlin, Joseph T.


    A critical assumption underlying the use of functional localiser scans is that the voxels identified as the functional region-of-interest (fROI) are essentially the same as those activated by the main experimental manipulation. Intra-subject variability in the location of the fROI violates this assumption, reducing the sensitivity of the analysis and biasing the results. Here we investigated consistency and variability in fROIs in a set of 45 volunteers. They performed two functional localiser scans to identify word- and object-sensitive regions of ventral and lateral occipito-temporal cortex, respectively. In the main analyses, fROIs were defined as the category-selective voxels in each region and consistency was measured as the spatial overlap between scans. Consistency was greatest when minimally selective thresholds were used to define “active” voxels (p < 0.05 uncorrected), revealing that approximately 65% of the voxels were commonly activated by both scans. In contrast, highly selective thresholds (p < 10−4 to 10−6) yielded the lowest consistency values, with less than 25% overlap of the voxels active in both scans. In other words, intra-subject variability was surprisingly high, with between one third and three quarters of the voxels in a given fROI not corresponding to those activated in the main task. This level of variability stands in striking contrast to the consistency seen in retinotopically-defined areas and has important implications for designing robust but efficient functional localiser scans. PMID:19289173
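
    Spatial overlap between two fROI definitions is typically reported as the fraction of shared voxels or a Dice coefficient. A sketch on synthetic boolean activation maps (random here, purely to illustrate the measures, not the study's thresholded maps):

```python
import numpy as np

# Hypothetical boolean activation maps for one subject's two localiser scans.
rng = np.random.default_rng(42)
scan1 = rng.random(1000) < 0.2   # voxels above threshold in scan 1
scan2 = rng.random(1000) < 0.2   # voxels above threshold in scan 2

# Proportion of scan-1 fROI voxels also active in scan 2, and the
# symmetric Dice coefficient often used for spatial overlap.
shared = (scan1 & scan2).sum()
overlap_fraction = shared / scan1.sum()
dice = 2 * shared / (scan1.sum() + scan2.sum())
print(overlap_fraction, dice)
```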

  3. Reliability of photovoltaic systems: A field report

    Thomas, M. G.; Fuentes, M. K.; Lashway, C.; Black, B. D.

    Performance studies and field measurements of photovoltaic systems indicate a 1 to 2% per year degradation in array energy production. The cause for much of the degradation has been identified as soiling, failed modules, and failures in interconnections. System performance evaluation continues to be complicated by the poor reliability of some power conditioning hardware that has greatly diminished system availability and by inconsistent field ratings. Nevertheless, the current system reliability is consistent with degradation of less than 10% in 5 years and with estimates of less than 10% per year of the energy value for O&M.

  4. Reliability of photovoltaic systems - A field report

    Thomas, M. G.; Fuentes, M. K.; Lashway, C.; Black, B. D.

    Performance studies and field measurements of photovoltaic systems indicate a 1-2-percent/yr degradation in array energy production. The cause for much of the degradation has been identified as soiling, failed modules, and failures in interconnections. System performance evaluation continues to be complicated by the poor reliability of some power conditioning hardware (which greatly diminished system availability) and by inconsistent field ratings. Nevertheless, the current system reliability is consistent with degradation of less than 10 percent in 5 years and with estimates of less than 10 percent/yr of the energy value for O&M.

  5. Improving analytical tomographic reconstructions through consistency conditions

    Arcadu, Filippo; Stampanoni, Marco; Marone, Federica


    This work introduces and characterizes a fast parameterless filter based on the Helgason-Ludwig consistency conditions, used to improve the accuracy of analytical reconstructions of tomographic undersampled datasets. The filter, acting in the Radon domain, extrapolates intermediate projections between those existing. The resulting sinogram, doubled in views, is then reconstructed by a standard analytical method. Experiments with simulated data prove that the peak-signal-to-noise ratio of the results computed by filtered backprojection is improved up to 5-6 dB, if the filter is used prior to reconstruction.
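
    The paper's filter derives the intermediate views from the Helgason-Ludwig consistency conditions. Purely to illustrate the view-doubling data layout (not the actual method), naive angular interpolation on a random stand-in sinogram looks like:

```python
import numpy as np

# Toy stand-in sinogram: rows are projection angles, columns detector bins.
sinogram = np.random.default_rng(0).random((90, 128))

doubled = np.empty((180, 128))
doubled[0::2] = sinogram                 # keep the measured views
# Intermediate views as the average of angular neighbours; np.roll crudely
# wraps the last view onto the first (a periodic-geometry assumption).
doubled[1::2] = 0.5 * (sinogram + np.roll(sinogram, -1, axis=0))

print(doubled.shape)  # (180, 128): twice the views, same detector sampling
```

    The resulting doubled sinogram would then be fed to a standard analytical reconstruction such as filtered backprojection.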

  6. Test-retest reliability of the 40 Hz EEG auditory steady-state response.

    Kristina L McFadden

    Auditory evoked steady-state responses are increasingly being used as a marker of brain function and dysfunction in various neuropsychiatric disorders, but research investigating the test-retest reliability of this response is lacking. The purpose of this study was to assess the consistency of the auditory steady-state response (ASSR) across sessions. Furthermore, the current study aimed to investigate how the reliability of the ASSR is impacted by the stimulus parameters and analysis method employed. The consistency of this response across two sessions spaced approximately 1 week apart was measured in nineteen healthy adults using electroencephalography (EEG). The ASSR was entrained by both 40 Hz amplitude-modulated white noise and click train stimuli. Correlations between sessions were assessed with two separate analytical techniques: (a) a channel-level analysis across the whole-head array and (b) signal-space projection from auditory dipoles. Overall, the ASSR was significantly correlated between sessions 1 and 2 (p < 0.05, multiple-comparison corrected), suggesting adequate test-retest reliability of this response. The current study also suggests that measures of inter-trial phase coherence may be more reliable between sessions than measures of evoked power. Results were similar between the two analysis methods, but reliability varied depending on the presented stimulus, with click train stimuli producing more consistent responses than white noise stimuli.
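
    Inter-trial phase coherence is the magnitude of the mean unit phasor at the stimulation frequency across trials: near 1 when trials are phase-locked, near 0 for random phase. A sketch on synthetic 40 Hz trials (invented signal, not the study's EEG):

```python
import numpy as np

# Synthetic single-channel trials: a 40 Hz response with random phase jitter
# plus additive noise.
rng = np.random.default_rng(7)
fs, n_trials, n_samples = 1000, 50, 1000
t = np.arange(n_samples) / fs
trials = np.array([np.sin(2 * np.pi * 40 * t + rng.normal(0, 0.3))
                   + 0.5 * rng.standard_normal(n_samples)
                   for _ in range(n_trials)])

# Inter-trial phase coherence at 40 Hz: magnitude of the mean unit phasor.
spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(n_samples, 1 / fs)
bin40 = np.argmin(np.abs(freqs - 40.0))
phasors = spectra[:, bin40] / np.abs(spectra[:, bin40])
itc = np.abs(phasors.mean())
print(itc)  # high for these phase-locked trials
```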

  7. Individual consistency and flexibility in human social information use.

    Toelch, Ulf; Bruce, Matthew J; Newson, Lesley; Richerson, Peter J; Reader, Simon M


    Copying others appears to be a cost-effective way of obtaining adaptive information, particularly when flexibly employed. However, adult humans differ considerably in their propensity to use information from others, even when this 'social information' is beneficial, raising the possibility that stable individual differences constrain flexibility in social information use. We used two dissimilar decision-making computer games to investigate whether individuals flexibly adjusted their use of social information to current conditions or whether they valued social information similarly in both games. Participants also completed established personality questionnaires. We found that participants demonstrated considerable flexibility, adjusting social information use to current conditions. In particular, individuals employed a 'copy-when-uncertain' social learning strategy, supporting a core, but untested, assumption of influential theoretical models of cultural transmission. Moreover, participants adjusted the amount invested in their decision based on the perceived reliability of personally gathered information combined with the available social information. However, despite this strategic flexibility, participants also exhibited consistent individual differences in their propensities to use and value social information. Moreover, individuals who favoured social information self-reported as more collectivist than others. We discuss the implications of our results for social information use and cultural transmission.

  8. The Product Consistency Test (PCT): How and Why It Was Developed

    Jantzen, C; Bibler, N


    The Product Consistency Test (PCT), American Society for Testing and Materials (ASTM) Standard C1285, is currently used worldwide for testing glass and glass-ceramic waste forms for high level waste (HLW), low level waste (LLW), and hazardous wastes. Development of the PCT was initiated in 1986 because HLW glass waste forms required extensive characterization before actual production began and required continued characterization during production (≥25 years). Non-radioactive startup was in 1994 and radioactive startup was in 1996. The PCT underwent extensive development from 1986-1994 and became an ASTM consensus standard in 1994. During the extensive laboratory testing and inter- and intra-laboratory round robins using non-radioactive and radioactive glasses, the PCT was shown to be very reproducible, to yield reliable results rapidly, to distinguish between glasses of different durability and homogeneity, and to be easily performed in shielded cell facilities with radioactive samples. In 1997, the scope was broadened to include hazardous and mixed (radioactive and hazardous) waste glasses. In 2002, the scope was broadened to include glass-ceramic waste forms, which are currently being recommended for second-generation nuclear wastes yet to be generated in the nuclear renaissance. Since the PCT has proven useful for glass-ceramics with up to 75% ceramic component and has been used to evaluate Pu ceramic waste forms, the use of this test for other ceramic/mineral waste forms such as geopolymers, hydroceramics, and fluidized bed steam reformer mineralized product is under investigation.

  9. System Reliability for LED-Based Products

    Davis, J Lynn; Mills, Karmann; Lamvik, Michael; Yaga, Robert; Shepherd, Sarah D; Bittle, James; Baldasaro, Nick; Solano, Eric; Bobashev, Georgiy; Johnson, Cortina; Evans, Amy


    Results from accelerated life tests (ALT) on mass-produced, commercially available 6" downlights are reported along with results from commercial LEDs. The luminaires capture many of the design features found in modern luminaires. In general, a systems perspective is required to understand the reliability of these devices, since LED failure is rare; components such as drivers, lenses, and reflectors are more likely to impact luminaire reliability than the LEDs.

  10. Poor consistency in evaluating South African adults with neurogenic dysphagia

    Mckinley Andrews


    Background: Speech-language therapists are specifically trained in clinically evaluating swallowing in adults with acute stroke. The incidence of dysphagia following acute stroke is high in South Africa, and the health implications can be fatal, making optimal management of this patient population crucial. However, despite training and guidelines for best practice in clinically evaluating swallowing in adults with acute stroke, there are low levels of consistency in these practice patterns. Objective: The aim was to explore the clinical practice activities of speech-language therapists in the clinical evaluation of swallowing in adults with acute stroke. Practice activities reviewed included the use and consistency of clinical components and resources utilised. Clinical components were the individual elements evaluated in the clinical evaluation of swallowing (e.g. lip seal, vocal quality, etc.). Methods: The questionnaire used in the study was replicated and adapted from a previous study, increasing content- and criterion-related validity. A narrative literature review determined what practice patterns existed in the clinical evaluation of swallowing in adults. A pilot study was conducted to increase validity and reliability. Purposive sampling was used by sending a self-administered, electronic questionnaire to members of the South African Speech-Language-Hearing Association. Thirty-eight participants took part in the study. Descriptive statistics were used to analyse the data, and the small qualitative component was subjected to textual analysis. Results: There was a high frequency of use of 41% of the clinical components in more than 90% of participants (n = 38). Less than 50% of participants frequently assessed sensory function and gag reflex or used pulse oximetry, cervical auscultation and indirect laryngoscopy. Approximately a third of participants each showed high (30.8%), moderate (35.9%) and poor (33.3%) consistency of practice. Nurses, food and liquids and...

  11. Foundations of consistent couple stress theory

    Hadjesfandiari, Ali R


    In this paper, we examine the recently developed skew-symmetric couple stress theory and demonstrate its inner consistency, natural simplicity and fundamental connection to classical mechanics. This hopefully will help the scientific community to overcome any ambiguity and skepticism about this theory, especially the validity of the skew-symmetric character of the couple-stress tensor. We demonstrate that in a consistent continuum mechanics, the response of infinitesimal elements of matter at each point decomposes naturally into a rigid body portion, plus the relative translation and rotation of these elements at adjacent points of the continuum. This relative translation and rotation captures the deformation in terms of stretches and curvatures, respectively. As a result, the continuous displacement field and its corresponding rotation field are the primary variables, which remarkably is in complete alignment with rigid body mechanics, thus providing a unifying basis. For further clarification, we also exami...

  12. Robust acceleration of self consistent field calculations for density functional theory.

    Baarman, K; Eirola, T; Havu, V


    We show that the type 2 Broyden secant method is a robust general purpose mixer for self consistent field problems in density functional theory. The Broyden method gives reliable convergence for a large class of problems and parameter choices. We directly mix the approximation of the electronic density to provide a basis independent mixing scheme. In particular, we show that a single set of parameters can be chosen that give good results for a large range of problems. We also introduce a spin transformation to simplify treatment of spin polarized problems. The spin transformation allows us to treat these systems with the same formalism as regular fixed point iterations.
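
    Broyden's second (type 2) method maintains a rank-one-updated approximation of the inverse Jacobian of the residual f(x) = g(x) − x. A minimal sketch on a toy fixed-point problem (the paper mixes the electronic density, not a scalar; this is only the update formula):

```python
import numpy as np

def broyden2(g, x0, tol=1e-10, maxit=100):
    """Type-2 (inverse-Jacobian) Broyden iteration for the fixed point x = g(x)."""
    x = np.asarray(x0, float)
    f = g(x) - x                 # residual
    Jinv = -np.eye(x.size)       # start from plain mixing: x_new = x + f
    for _ in range(maxit):
        dx = -Jinv @ f
        x_new = x + dx
        f_new = g(x_new) - x_new
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - f
        # Type-2 ("bad") Broyden update of the inverse Jacobian:
        # Jinv += (dx - Jinv df) df^T / (df^T df)
        Jinv += np.outer(dx - Jinv @ df, df) / (df @ df)
        x, f = x_new, f_new
    return x

# Toy fixed point x = cos(x), solved to machine-level residual.
root = broyden2(np.cos, np.array([0.5]))
print(root)  # ~0.739085
```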

  13. CR reliability testing

    Honeyman-Buck, Janice C.; Rill, Lynn; Frost, Meryll M.; Staab, Edward V.


    The purpose of this work was to develop a method for systematically testing the reliability of a CR system under realistic daily loads in a non-clinical environment prior to its clinical adoption. Once digital imaging replaces film, it will be very difficult to revert back should the digital system become unreliable. Prior to the beginning of the test, a formal evaluation was performed to set the benchmarks for performance and functionality. A formal protocol was established that included all the 62 imaging plates in the inventory for each 24-hour period in the study. Imaging plates were exposed using different combinations of collimation, orientation, and SID. Anthropomorphic phantoms were used to acquire images of different sizes. Each combination was chosen randomly to simulate the differences that could occur in clinical practice. The tests were performed over a wide range of times with batches of plates processed to simulate the temporal constraints required by the nature of portable radiographs taken in the Intensive Care Unit (ICU). Current patient demographics were used for the test studies so automatic routing algorithms could be tested. During the test, only three minor reliability problems occurred, two of which were not directly related to the CR unit. One plate was discovered to cause a segmentation error that essentially reduced the image to only black and white with no gray levels. This plate was removed from the inventory to be replaced. Another problem was a PACS routing problem that occurred when the DICOM server with which the CR was communicating had a problem with disk space. The final problem was a network printing failure to the laser cameras. Although the units passed the reliability test, problems with interfacing to workstations were discovered. The two issues that were identified were the interpretation of what constitutes a study for CR and the construction of the look-up table for a proper gray scale display.

  14. Ultimately Reliable Pyrotechnic Systems

    Scott, John H.; Hinkel, Todd


    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world.
Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing

  15. Scaled CMOS Technology Reliability Users Guide

    White, Mark


    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is...
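
    Translating accelerated-stress failures into a use-condition failure rate usually combines a thermal acceleration factor (here the standard Arrhenius form) with the FIT unit, failures per 10^9 device-hours. A sketch with assumed activation energy, temperatures, and failure counts (illustrative only, not the book's or report's data):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between stress and use temperatures."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1 / t_use - 1 / t_stress))

def fit_rate(failures, devices, test_hours, accel_factor):
    """Failure rate in FIT (failures per 1e9 device-hours) at use conditions."""
    return failures / (devices * test_hours * accel_factor) * 1e9

af = arrhenius_af(ea_ev=0.7, t_use_c=55.0, t_stress_c=125.0)
print(af)                          # roughly 60-80x for these assumed values
print(fit_rate(3, 500, 1000, af))  # FIT estimate for 3 fails in the test
```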

  16. Ferrite logic reliability study

    Baer, J. A.; Clark, C. B.


    Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

  17. Blade Reliability Collaborative

    Ashwill, Thomas D.; Ogilvie, Alistair B.; Paquette, Joshua A.


    The Blade Reliability Collaborative (BRC) was started by the Wind Energy Technologies Department of Sandia National Laboratories and DOE in 2010 with the goal of gaining insight into planned and unplanned O&M issues associated with wind turbine blades. A significant part of BRC is the Blade Defect, Damage and Repair Survey task, which will gather data from blade manufacturers, service companies, operators and prior studies to determine details about the largest sources of blade unreliability. This report summarizes the initial findings from this work.

  18. Operational reliability of standby safety systems

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]


    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.

  19. Reliability in automotive and mechanical engineering determination of component and system reliability

    Bertsche, Bernd


    In the contemporary climate of global competition in every branch of engineering and manufacture, extensive customer surveys have shown that, above every other attribute, reliability stands as the most desired feature in a finished product. In this relentless fight for survival, any organisation that neglects the pursuit of excellence in reliability will do so at serious cost. Reliability in Automotive and Mechanical Engineering draws together a wide spectrum of relevant applications and analyses in reliability engineering, distilled into an attractive and well-documented volume. Practising engineers are challenged with the formidable task of simultaneously improving reliability and reducing the costs and downtime due to maintenance. The volume brings together eleven chapters to highlight the importance of the interrelated reliability and maintenance disciplines. They represent the development trends and progress resulting in making this book ess...

  20. Estimation of the reliability of all-ceramic crowns using finite element models and the stress-strength interference theory.

    Li, Yan; Chen, Jianjun; Liu, Jipeng; Zhang, Lei; Wang, Weiguo; Zhang, Shaofeng


    The reliability of all-ceramic crowns is of concern to both patients and doctors. This study introduces a new methodology for quantifying the reliability of all-ceramic crowns based on the stress-strength interference theory and finite element models. The variables selected for the reliability analysis include the magnitude of the occlusal contact area, the occlusal load and the residual thermal stress. The calculated reliabilities of crowns under different loading conditions showed that overly small occlusal contact areas, or too great a difference in thermal coefficients between the veneer and core layers, led to high failure probabilities. These results were consistent with many previous reports. The methodology is therefore shown to be a valuable method for analyzing the reliability of restorations in the complicated oral environment.
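
    As a rough illustration of the stress-strength interference idea (a generic sketch, not the paper's finite-element workflow), reliability can be estimated as P(strength > stress) by Monte Carlo sampling. The normal distributions and all parameter values below are assumptions for the sketch.

```python
import random

def interference_reliability(mu_str, sd_str, mu_load, sd_load, n=100_000, seed=1):
    """Estimate P(strength > stress) by Monte Carlo under normal assumptions."""
    rng = random.Random(seed)
    survive = sum(
        rng.gauss(mu_str, sd_str) > rng.gauss(mu_load, sd_load) for _ in range(n)
    )
    return survive / n

# Assumed values: strength 400 +/- 30 MPa, occlusal stress 250 +/- 40 MPa;
# analytically this is Phi((400-250)/sqrt(30**2+40**2)) = Phi(3), about 0.9987
print(interference_reliability(400, 30, 250, 40))
```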

  1. Determining the reliability function of the thermal power system in power plant "Nikola Tesla, Block B1"

    Kalaba Dragan V.


    The representation of a probabilistic technique for the evaluation of thermal power system reliability is the main subject of this paper. The thermal power plant system under study consists of three subsystems, and the reliability assessment is based on a sixteen-year failure database. By applying the mathematical theory of reliability to exploitation research data and using the two-parameter Weibull distribution, the theoretical reliability functions of the specified system have been determined. The obtained probabilistic laws of failure occurrence confirm the hypothesis that the distribution of the observed random variable fully describes the behaviour of such a system in terms of reliability. The results shown make it possible to acquire better knowledge of the current state of the system, as well as a more accurate estimation of its behaviour during future exploitation. The final benefit is the opportunity to improve complex-system maintenance policies aimed at reducing unexpected failure occurrences.
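
    A minimal sketch of the kind of two-parameter Weibull analysis described above, using median-rank regression on invented failure times; the plant's actual data and fitting procedure are not reproduced here.

```python
import math

def weibull_fit(times):
    """Median-rank regression estimate of Weibull shape (beta) and scale (eta)."""
    times = sorted(times)
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(times, start=1):
        f = (i - 0.3) / (n + 0.4)                 # Bernard's median rank
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of the Weibull probability plot is the shape parameter beta
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    eta = math.exp(mx - my / beta)                # from intercept -beta*ln(eta)
    return beta, eta

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

b, e = weibull_fit([1200, 2100, 2800, 3500, 4100, 5200])  # invented hours-to-failure
print(f"beta={b:.2f}, eta={e:.0f}, R(1000 h)={weibull_reliability(1000, b, e):.3f}")
```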

  2. Reliability Testing the Die-Attach of CPV Cell Assemblies

    Bosco, N.; Sweet, C.; Kurtz, S.


    Results and progress are reported for a course of work to establish an efficient reliability test for the die-attach of CPV cell assemblies. The test vehicle design consists of a ~1 cm2 multijunction cell attached to a substrate via several processes. A thermal cycling sequence is developed in a test-to-failure protocol. Methods of detecting a failed or failing joint are a prerequisite for this work; therefore, both in-situ and non-destructive methods, including infrared imaging techniques, are being explored as ways to quickly detect non-ideal or failing bonds.

  3. Improving consistency in findings from pharmacoepidemiological studies: The IMI-protect (Pharmacoepidemiological research on outcomes of therapeutics by a European consortium) project

    De Groot, Mark C.H.; Schlienger, Raymond; Reynolds, Robert; Gardarsdottir, Helga; Juhaeri, Juhaeri; Hesse, Ulrik; Gasse, Christiane; Rottenkolber, Marietta; Schuerch, Markus; Kurz, Xavier; Klungel, Olaf H.


    Background: Pharmacoepidemiological (PE) research should provide consistent, reliable and reproducible results to contribute to the benefit-risk assessment of medicines. IMI-PROTECT aims to identify sources of methodological variations in PE studies using a common protocol and analysis plan across d...

  4. PARENT Program for DMW (Dissimilar Metal Weld) Reliability Assessment

    Kang, Sung Sik; Kim, Kyung Jo; Jung, Hae Dong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)


    Some cracks have been found in dissimilar metal welds (DMW) connected with major components of nuclear power plants. Usually, dissimilar metal welds consist of Alloy 600, carbon steel and stainless steel. Since the 2000s, most cracks have been found in welds, especially dissimilar metal welds such as the pressurizer safety relief nozzle, reactor head penetration, reactor bottom-mounted instrumentation (BMI), and reactor nozzles. Since the cracks have been identified as primary water stress corrosion cracking (PWSCC), the reliability of non-destructive evaluation (NDE) techniques has become more important. To address NDE reliability, the PINC (Program for Inspection of Nickel Alloy Components) international cooperation was organized. The aims of the project were (1) to fabricate representative NDE mock-ups with flaws to simulate PWSCCs, (2) to identify and quantitatively assess NDE methods for accurately detecting, sizing and characterizing PWSCCs, (3) to document the range of locations and morphologies of PWSCCs, and (4) to incorporate results with other results of ongoing PWSCC research programs, as appropriate. Since the last KNS autumn meeting, the PINC program has been finalized, and the follow-on program, PARENT (Program to Assess Reliability for Emerging NDE Technique), started in June of this year. In this study, as part of the PINC project, international round robin test (RRT) results for DMW are introduced, along with the status of the new PARENT program.

  5. A Reliability-Oriented Design Method for Power Electronic Converters

    Wang, Huai; Zhou, Dao; Blaabjerg, Frede


    ...handbook) to the physics-of-failure approach and design-for-reliability process. A systematic design procedure consisting of various design tools is presented in this paper to design reliability into power electronic converters from the early concept phase. The corresponding design procedures...

  6. Discrepancy Score Reliabilities in the WISC-IV Standardization Sample

    Glass, Laura A.; Ryan, Joseph J.; Charter, Richard A.; Bartels, Jared M.


    This investigation provides internal consistency reliabilities for Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV) subtest and index discrepancy scores using the standardization sample as the data source. Reliabilities range from 0.50 to 0.82 for subtest discrepancy scores and from 0.78 to 0.88 for index discrepancy scores.…
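
    The classical psychometric formula behind discrepancy-score reliability (for two scores with equal variances) can be computed directly; the reliabilities and intercorrelation below are made-up illustrative values, not WISC-IV figures.

```python
def difference_score_reliability(r_xx, r_yy, r_xy):
    """Reliability of the difference X - Y for two equally variable scores
    with reliabilities r_xx, r_yy and intercorrelation r_xy:
        r_diff = (r_xx + r_yy - 2*r_xy) / (2 - 2*r_xy)
    """
    return (r_xx + r_yy - 2 * r_xy) / (2 - 2 * r_xy)

# Made-up example: two subtests with reliabilities .85 and .80 correlating .60
print(round(difference_score_reliability(0.85, 0.80, 0.60), 4))  # 0.5625
```

    Note how the difference score is markedly less reliable than either component; the higher the intercorrelation, the lower the discrepancy-score reliability.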

  7. Discrepancy Score Reliabilities in the WAIS-IV Standardization Sample

    Glass, Laura A.; Ryan, Joseph J.; Charter, Richard A.


    In the present investigation, the authors provide internal consistency reliabilities for Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtest and Index discrepancy scores using the standardization sample as the data source. Reliabilities ranged from 0.55 to 0.88 for subtest discrepancy scores and 0.80 to 0.91 for Index discrepancy…

  9. Distribution Reliability of the Energy Supply Chain

    Hidayat Rachmad


    Reliability of an oil fuel distribution system can be assessed by considering multiple attributes. Load-point indicators consisted of the frequency of failures, the average duration of an outage and the average annual outage time; indicators of system performance were SAIFI and SAIDI, and the method of the study was Learning Vector Quantization (LVQ). Decisions were made by considering the multi-attribute oil fuel distribution system to determine high reliability, which was simulated with agents. The use of agents in the simulation of oil fuel distribution system reliability helped visualize the distribution process in meeting the quality of fuels received by consumers and made it easier to study the problems faced in oil fuel distribution.
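
    The two system indices named above have standard definitions (as in IEEE Std 1366: SAIFI is total customer interruptions divided by customers served, SAIDI is total customer interruption duration divided by customers served); a minimal sketch with invented outage data:

```python
def saifi(events, customers_served):
    """System Average Interruption Frequency Index."""
    return sum(n for n, _ in events) / customers_served

def saidi(events, customers_served):
    """System Average Interruption Duration Index (hours)."""
    return sum(n * hours for n, hours in events) / customers_served

# (customers interrupted, outage duration in hours) per outage event -- invented
events = [(500, 2.0), (150, 0.5), (1200, 4.0)]
print(saifi(events, 10_000))  # 0.185 interruptions per customer served
print(saidi(events, 10_000))  # 0.5875 hours per customer served
```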

  10. Internal Consistency and Convergent Validity of the Klontz Money Behavior Inventory (KMBI)

    Colby D. Taylor


    The Klontz Money Behavior Inventory (KMBI) is a standalone, multi-scale measure that can screen for the presence of eight distinct money disorders. Given the well-established relationship between mental health and financial behaviors, results from the KMBI can be used to inform both mental health care professionals and financial planners. The present study examined the internal consistency and convergent validity of the KMBI through comparison with similar measures among a sample of college students (n = 232). Results indicate that the KMBI demonstrates acceptable internal consistency reliability and some convergence for most subscales when compared to other analogous measures. These findings highlight a need for literature and assessments to identify and describe disordered money behaviors.

  11. Escala de Autoestima de Rosenberg (EAR) [Rosenberg Self-Esteem Scale (RSS)]: factorial validity and internal consistency

    Juliana Burges Sbicigo


    The aim of this study was to investigate the psychometric properties of the Rosenberg Self-Esteem Scale (RSS) for adolescents. The sample was composed of 4,757 adolescents aged between 14 and 18 years (M = 15.77; SD = 1.22) from nine Brazilian cities. Participants responded to a version of the RSS adapted for Brazil. Exploratory factor analysis showed a bidimensional structure with 51.4% of the variance explained, a result supported by confirmatory factor analysis. Internal consistency analyses using Cronbach's alpha coefficient, composite reliability and extracted variance indicated good reliability. Differences in self-esteem scores by gender and age were not found. These findings show that the RSS has satisfactory psychometric qualities and is a reliable instrument for measuring self-esteem in Brazilian adolescents.

  12. Turkish validity and reliability of Organ Donation Attitude Scale.

    Yazici Sayin, Yazile


    This study reports the translation and adaptation process from English to Turkish and psychometric estimates of the validity and reliability of the Organ Donation Attitude Scale (Turkish version). Its aims are (1) to provide data about and (2) to assess Turkish people's attitudes and volunteerism towards organ donation. Lack of donors is a significant problem for organ transplantation worldwide, and attitudes about organ donation and volunteerism are important factors in the lack of donors. To collect survey data from Turkish participants, a cross-sectional design was used with the classical measurement method. The Organ Donation Attitude Scale was translated from English to Turkish and back-translated into English. The analysis included a total of 892 Turkish participants. The validity of the scale was confirmed by exploratory factor analysis and criterion-related validity testing. A test-retest procedure was implemented to assess the reliability of the scale over time. The Organ Donation Attitude Scale consists of three relatively independent components: humanity and moral conviction, fears of medical neglect, and fears of bodily mutilation. The internal consistency of these three components resulted in acceptable Cronbach's α levels. A positive correlation occurred between the volunteerism score and positive attitudes about organ donation, while the correlation between the volunteerism score and negative attitudes was negative. Fears of bodily mutilation were most significantly related to unwillingness to commit to organ donation. The test-retest correlation coefficients prove that the Organ Donation Attitude Scale is reliable over time. The Organ Donation Attitude Scale Turkish version is both a reliable and valid instrument that can be useful in measuring positive and negative attitudes of Turkish people about organ donation. With the Organ Donation Attitude Scale, researchers in Turkey will be able to ascertain important data on volunteerism and attitudes towards organ donation.

  13. The utility of theory of planned behavior in predicting consistent ...


    outcomes of the behavior and the evaluations of these outcomes (behavioral beliefs) ... belief towards consistent condom use and motivation for compliance with .... consistency of the items used before constructing a scale. Results. All of the ...

  14. [Training of sensorial panels consisting of children].

    Wittig de Penna, E; Bunger Timermann, A; Serrano Valdés, L


    In the development of food products for children, it is advisable to establish the characteristics of the product with groups of children that represent the target population. To ensure the success of the products, quality and hedonic satisfaction expectations must be considered. In order to accomplish these premises, a group of children under the Program of Complementary Feeding of the Health Ministry was selected and trained. The project was developed with a group of 33 children, ages 9 to 12 years, from the Republica of Colombia School of Santiago, whose parents agreed to and supported the participation of their children in this project. The first step was teaching the techniques and methodology of sensory evaluation and increasing the children's sensitivity. After the 8 programmed sessions, those children who met the minimal requirements for a training group were chosen. The second step was performed during 12 sessions, working with 14 children. The training was aimed at developing the vocabulary to describe quality and defects, ranking tests, discriminative tests and the use of different scales. Tests to verify the reliability, veracity and reproducibility of judgements (p < 0.05) were carried out. The trained group was able to assess different meals of the Program. The results obtained allowed proposing improvements to some quality criteria of the Program meals.

  15. Validity and Reliability of the Clinical Competency Evaluation Instrument for Use among Physiotherapy Students; Pilot study

    Zailani Muhamad


    Objectives: The aim of this study was to determine the content validity, internal consistency, test-retest reliability and inter-rater reliability of the Clinical Competency Evaluation Instrument (CCEVI) in assessing the clinical performance of physiotherapy students. Methods: This study was carried out between June and September 2013 at Universiti Kebangsaan Malaysia (UKM), Kuala Lumpur, Malaysia. A panel of 10 experts was identified to establish content validity by evaluating and rating each of the items used in the CCEVI with regard to their relevance in measuring students' clinical competency. A total of 50 UKM undergraduate physiotherapy students were assessed throughout their clinical placement to determine the construct validity of these items. The instrument's reliability was determined through a cross-sectional study involving a clinical performance assessment of 14 final-year undergraduate physiotherapy students. Results: The content validity index of the entire CCEVI was 0.91, while the proportion of agreement on the content validity indices ranged from 0.83-1.00. The CCEVI construct validity was established with factor loadings of ≥0.6, while overall internal consistency (Cronbach's alpha) was 0.97. Test-retest reliability of the CCEVI was confirmed with a Pearson's correlation range of 0.91-0.97 and an intraclass correlation coefficient range of 0.95-0.98. Inter-rater reliability of the CCEVI domains ranged from 0.59 to 0.97 on initial and subsequent assessments. Conclusion: This pilot study confirmed the content validity of the CCEVI. It showed high internal consistency, providing evidence that the CCEVI has moderate to excellent inter-rater reliability. However, additional refinement in the wording of the CCEVI items, particularly in the domains of safety and documentation, is recommended to further improve the validity and reliability of the instrument.

  16. Structural Reliability of Plain Bearings for Wave Energy Converter Applications

    Simon Ambühl


    The levelized cost of energy (LCOE) from wave energy converters (WECs) needs to decrease for WECs to become competitive with other renewable electricity sources. Probabilistic reliability methods can be used to optimize the structure of WECs. Optimization is often performed for critical structural components, like welded details, bolts or bearings. This paper considers reliability studies with a focus on plain bearings available from stock for the Wavestar device, a point absorber WEC that exists at the prototype level. The plan is to mount a new power take-off (PTO) system consisting of a discrete displacement cylinder (DDC), which will allow different hydraulic cycles to operate at constant pressure levels. This setup increases the conversion efficiency and decouples the electricity production from the pressure variations within the hydraulic cycle when waves are passing. The new PTO system leads to different load characteristics at the floater itself compared to the actual setup, where the turbine/generator is directly coupled to the fluctuating hydraulic pressure within the PTO system. This paper calculates the structural reliability of the different available plain bearings planned to be mounted in the new PTO system based on a probabilistic approach, and it gives suggestions for fulfilling the minimal target reliability levels. The failure mode considered in this paper is brittle fatigue failure of the plain bearings. The sensitivity analysis performed shows that the parameters defining the initial crack size have a big impact on the resulting reliability of the plain bearing.

  17. OSS reliability measurement and assessment

    Yamada, Shigeru


    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
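
    One of the NHPP models mentioned above, the Goel-Okumoto form, has the mean value function m(t) = a(1 - e^(-bt)), the expected cumulative number of faults detected by test time t. A small sketch with assumed parameter values (not taken from the book):

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative faults detected by time t: m(t) = a*(1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

# Assumed parameters: a = eventual fault content, b = per-fault detection rate
a, b = 120.0, 0.05
found = goel_okumoto_mean(40.0, a, b)
print(round(found, 1), round(a - found, 1))  # detected so far vs. expected remaining
```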

  18. Reliability and validity in research.

    Roberts, Paula; Priest, Helena

    This article examines reliability and validity as ways to demonstrate the rigour and trustworthiness of quantitative and qualitative research. The authors discuss the basic principles of reliability and validity for readers who are new to research.

  19. Consistency Checking of Web Service Contracts

    Cambronero, M. Emilia; Okika, Joseph C.; Ravn, Anders Peter


    Behavioural properties are analyzed for web service contracts formulated in the Business Process Execution Language (BPEL) and the Choreography Description Language (CDL). The key result reported is an automated technique to check consistency between the protocol aspects of the contracts. The contracts are abstracted to (timed) automata, and from there a simulation is set up, which is checked using automated tools for analyzing networks of finite state processes; here we use the Concurrency Workbench. The proposed techniques are illustrated with a case study that includes otherwise difficult to analyze fault...

  20. Reliability-Based Optimization and Optimal Reliability Level of Offshore Wind Turbines

    Tarp-Johansen, N.J.; Sørensen, John Dalsgaard


    Different formulations relevant for the reliability-based optimization of offshore wind turbines are presented, including different reconstruction policies in case of failure. Illustrative examples are presented and, as a part of the results, optimal reliability levels for the different failure m...

  1. Reliability-Based Optimization and Optimal Reliability Level of Offshore Wind Turbines

    Sørensen, John Dalsgaard; Tarp-Johansen, N.J.


    Different formulations relevant for the reliability-based optimization of offshore wind turbines are presented, including different reconstruction policies in case of failure. Illustrative examples are presented and, as a part of the results, optimal reliability levels for the different failure...



    The mechanical reliability and optimization theory behind the reliability-optimization design method for a new roller orientation clutch is provided. The result of the reliability-optimization design is compared with that of the conventional design method.

  3. Study on segmented distribution for reliability evaluation

    Huaiyuan Li


    In practice, the failure rate of most equipment exhibits different tendencies at different stages, and its failure rate curve may even follow a multimodal trace over the life cycle. As a result, traditionally evaluating the reliability of equipment with a single model may lead to severe errors. However, if the lifetime is divided into several intervals according to the characteristics of the failure rate, piecewise fitting can more accurately approximate the failure rate of equipment. Therefore, in this paper, the failure rate is regarded as a piecewise function, and two kinds of segmented distribution are put forward to evaluate reliability. In order to estimate the parameters of the segmented reliability function, Bayesian estimation and maximum likelihood estimation (MLE) of the segmented distribution are discussed. Since traditional information criteria are not suitable for the segmented distribution, an improved information criterion is proposed to test and evaluate the segmented reliability model. After extensive testing and verification, the segmented reliability model and its estimation methods are shown to be more efficient and accurate than the traditional non-segmented single model, especially when the change of the failure rate is time-phased or multimodal. The performance of the segmented reliability model in evaluating the reliability of proximity sensors of the leading-edge flap in civil aircraft indicates that the segmented distribution and its estimation method could be useful and accurate.
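
    The piecewise idea can be sketched with a piecewise-constant failure rate and the implied reliability R(t) = exp(-cumulative hazard). The segment boundaries and rates below are invented for illustration (a bathtub-like shape), not the paper's estimates.

```python
import math

# (start, end, constant hazard rate per hour) -- invented bathtub-like segments
SEGMENTS = [(0.0, 1000.0, 5e-4), (1000.0, 5000.0, 1e-4), (5000.0, float("inf"), 8e-4)]

def hazard(t):
    """Piecewise-constant failure rate h(t)."""
    for lo, hi, rate in SEGMENTS:
        if lo <= t < hi:
            return rate
    raise ValueError("t must be non-negative")

def reliability(t):
    """R(t) = exp(-integral of h from 0 to t), summed segment by segment."""
    acc = 0.0
    for lo, hi, rate in SEGMENTS:
        if t <= lo:
            break
        acc += rate * (min(t, hi) - lo)
    return math.exp(-acc)

print(round(reliability(3000.0), 4))  # exp(-(0.5 + 0.2)) = 0.4966
```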

  4. Reliability and Its Quantitative Measures

    Alexandru ISAIC-MANIU


    This article opens up software reliability issues through wide-ranging statistical indicators designed on the basis of information collected from operation or testing (samples). The reliability issues are also developed for the main reliability laws (exponential, normal, Weibull), which, once validated for a particular system, allow the calculation of reliability indicators with a higher degree of accuracy and trustworthiness.

  5. Production Facility System Reliability Analysis Report

    Dale, Crystal Buchanan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

  6. Gearbox Reliability Collaborative (GRC) Description and Loading

    Oyague, F.


    This document describes simulated turbine load cases in accordance with the IEC 61400-1 Ed. 3 standard, which is representative of the typical wind turbine design process. The information presented herein is intended to provide a broad understanding of the Gearbox Reliability Collaborative 750 kW drivetrain and turbine configuration. In addition, fatigue and ultimate strength drivetrain loads resulting from simulations are presented. This information provides the basis for the analytical work of the Gearbox Reliability Collaborative effort.

  7. 2017 NREL Photovoltaic Reliability Workshop

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)


    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  8. Testing for PV Reliability (Presentation)

    Kurtz, S.; Bansal, S.


    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  9. Reliable Quantum Computers

    Preskill, J


    The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^{-6}, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)
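
    The figures quoted above follow from simple arithmetic: with an independent error probability p per gate, an N-gate computation runs error-free with probability (1 - p)^N, which decays like e^(-pN). A quick check at the stated threshold-scale numbers:

```python
# p: assumed independent error probability per gate; gates: circuit size
p, gates = 1e-6, 10**6
p_success = (1 - p) ** gates
print(round(p_success, 3))  # ~ 0.368, essentially exp(-1)
```

    This is why, without error correction, useful computations of this size would fail most of the time once p*N approaches or exceeds a few units.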

  10. UCLA Loneliness Scale (Version 3): reliability, validity, and factor structure.

    Russell, D W


    In this article I evaluated the psychometric properties of the UCLA Loneliness Scale (Version 3). Using data from prior studies of college students, nurses, teachers, and the elderly, analyses of the reliability, validity, and factor structure of this new version of the UCLA Loneliness Scale were conducted. Results indicated that the measure was highly reliable, both in terms of internal consistency (coefficient alpha ranging from .89 to .94) and test-retest reliability over a 1-year period (r = .73). Convergent validity for the scale was indicated by significant correlations with other measures of loneliness. Construct validity was supported by significant relations with measures of the adequacy of the individual's interpersonal relationships, and by correlations between loneliness and measures of health and well-being. Confirmatory factor analyses indicated that a model incorporating a global bipolar loneliness factor along with two method factors reflecting the direction of item wording provided a very good fit to the data across samples. Implications of these results for future measurement research on loneliness are discussed.

  11. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.


    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach). This methodology considers a period to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each station, a BReach analysis is performed and subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).
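
The left/right "reach" idea can be sketched minimally as follows. This is an illustrative simplification, not the published BReach algorithm: a plain residual tolerance stands in for the observational-uncertainty model, and the residuals are invented:

```python
def right_reach(residuals, tol, i):
    """Furthest index j >= i such that all |residuals[i..j]| <= tol."""
    j = i
    while j < len(residuals) and abs(residuals[j]) <= tol:
        j += 1
    return j - 1

def left_reach(residuals, tol, i):
    """Furthest index j <= i such that all |residuals[j..i]| <= tol."""
    j = i
    while j >= 0 and abs(residuals[j]) <= tol:
        j -= 1
    return j + 1

# Hypothetical rating-curve residuals; the 0.9 marks a possible shift
res = [0.1, 0.2, 0.1, 0.9, 0.1]
print(right_reach(res, 0.5, 0))   # reach stops before the large deviation
```

A reach that suddenly shortens partway through the series is the signal of a possible consistency break.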

  12. Reliability of Hydrox explosive blasting

    Chikunov, V.I.; Chulkov, O.G.; Domanov, V.P.


    The safest method of blasting in coal mines with methane and coal dust hazards is with the flameless Hydrox charges. The results of operational tests on Hydrox BV-A2U charges with a I-43 initiator in underground coal mines are discussed. The efficiency and reliability of blasting using Hydrox BV-A2U charges compared to BV-48 Hydrox charges are evaluated. Results of blasting and the percentage of charge failures are given in tables. It is suggested that BV-A2U Hydrox charges are superior to BV-48, as no charge failures occur, the operational time of BV-A2U is up to 5 seconds, and the maximum operational time spread is 1.8 sec (weight of initiator 0.05 kg). The blasting properties of BV-A2U are stable and do not change as a result of long storage. (In Russian)

  13. Reliability of Power Units in Poland and the World

    Józef Paska


    One of a power system’s subsystems is the generation subsystem consisting of power units, the reliability of which to a large extent determines the reliability of the power system and of electricity supply to consumers. This paper presents definitions of the basic indices of power unit reliability used in Poland and in the world. They are compared and analysed on the basis of data published by the Energy Market Agency (Poland), NERC (North American Electric Reliability Corporation, USA), and WEC (World Energy Council). Deficiencies and the lack of a unified national system for collecting and processing electric power equipment unavailability data are also indicated.

  14. Reliability of physical examination tests for the diagnosis of knee disorders: Evidence from a systematic review.

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François


    Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests.
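
The k values reported above are Cohen's kappa statistics for inter-rater agreement, which correct raw agreement for agreement expected by chance. A minimal unweighted-kappa computation on made-up binary ratings (e.g. test positive/negative from two examiners):

```python
def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' nominal ratings."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n              # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

rater1 = [1, 1, 0, 1, 0, 0, 1, 0]
rater2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(rater1, rater2), 2))
```

On the usual verbal scale used in reviews like this one, kappa near 0.5 is "moderate" and above roughly 0.8 "excellent" agreement.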

  15. Reliability and minimal detectable change of the weight-bearing lunge test: A systematic review.

    Powden, Cameron J; Hoch, Johanna M; Hoch, Matthew C


    Ankle dorsiflexion range of motion (DROM) is often a point of emphasis during the rehabilitation of lower extremity pathologies. With the growing popularity of weight-bearing DROM assessments, several versions of the weight-bearing lunge (WBLT) test have been developed and numerous reliability studies have been conducted. The purpose of this systematic review was to critically appraise and synthesize the studies which examined the reliability and responsiveness of the WBLT to assess DROM. A systematic search of PubMed and EBSCO Host databases from inception to September 2014 was conducted to identify studies whose primary aim was assessing the reliability of the WBLT. The Quality Appraisal of Reliability Studies assessment tool was utilized to determine the quality of included studies. Relative reliability was examined through intraclass correlation coefficients (ICC) and responsiveness was evaluated through minimal detectable change (MDC). A total of 12 studies met the eligibility criteria and were included. Nine included studies assessed inter-clinician reliability and 12 included studies assessed intra-clinician reliability. There was strong evidence that inter-clinician reliability (ICC = 0.80-0.99) as well as intra-clinician reliability (ICC = 0.65-0.99) of the WBLT is good. Additionally, average MDC scores of 4.6° or 1.6 cm for inter-clinician and 4.7° or 1.9 cm for intra-clinician were found, indicating the minimal change in DROM needed to be outside the error of the WBLT. This systematic review determined that the WBLT, regardless of method, can be used clinically to assess DROM as it provides consistent results between one or more clinicians and demonstrates reasonable responsiveness. Copyright © 2015 Elsevier Ltd. All rights reserved.
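
The MDC values above follow from the usual relation between the ICC, the standard error of measurement (SEM = SD * sqrt(1 - ICC)), and MDC95 = 1.96 * sqrt(2) * SEM. A sketch with hypothetical numbers (the SD and ICC below are invented, chosen only to land near the review's reported range):

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence from a reliability ICC."""
    sem = sd * math.sqrt(1 - icc)       # standard error of measurement
    return 1.96 * math.sqrt(2) * sem    # sqrt(2): two measurements involved

# Hypothetical: between-subject SD of 5.0 degrees of DROM, ICC = 0.90
print(round(mdc95(5.0, 0.90), 2))
```

A measured change smaller than the MDC cannot be distinguished from measurement error of the test.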

  16. Damage-consistent hazard assessment - the revival of intensities

    Klügel, Jens-Uwe


    Proposed keynote speech (introduction of session). Current civil engineering standards for residential buildings in many countries are based on (frequently probabilistic) seismic hazard assessments using ground motion parameters such as peak ground acceleration or pseudo displacement as hazard parameters. This approach has its roots in the still widespread force-based design of structures, which uses simplified methods such as linear response spectra combined with equivalent static force procedures. In engineering practice this has led to problems, because it is not economical to design structures against the maximum forces of earthquakes. Furthermore, a completely linear-elastic response of structures is seldom required. Different types of reduction factors (performance-dependent response factors), accounting for example for overstrength, structural redundancy and structural ductility, have been developed in different countries to compensate for the use of simplified and conservative design methods. The practical consequence is that both the methods used in engineering and the output of hazard assessment studies are poorly related to the physics of damage. Reliable predictions of the response of structures under earthquake loading are not feasible with such simplified design methods. Depending on the type of structure, damage may be controlled by hazard parameters other than ground motion accelerations. Furthermore, a realistic risk assessment has to be based on reliable predictions of damage, which is crucial for effective decision-making. This opens the way for a return to the use of intensities as the key output parameter of seismic hazard assessment. Site intensities (e.g. EMS-98) are very well correlated with the damage of structures. They can easily be converted into the required set of engineering parameters or even directly into earthquake time-histories suitable for structural analysis.

  17. A Reliability Generalization Study of the Geriatric Depression Scale.

    Kieffer, Kevin M.; Reese, Robert J.


    Conducted a reliability generalization study of the Geriatric Depression Scale (T. Brink and others, 1982). Results from this investigation of 338 studies show that the average score reliability across studies was 0.8482, and the most important predictors of score reliability are identified. (SLD)

  18. Electronics reliability calculation and design

    Dummer, Geoffrey W A; Hiller, N


    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  19. Compact difference approximation with consistent boundary condition

    FU Dexun; MA Yanwen; LI Xinliang; LIU Mingyu


    For simulating multi-scale complex flow fields it should be noted that all the physical quantities of interest must be simulated well. Given limited computer resources, it is preferable to use high order accurate difference schemes. Because of their high accuracy and small grid stencils, computational fluid dynamics (CFD) researchers have recently paid increasing attention to compact schemes. For simulating complex flow fields, the treatment of boundary conditions at and near the far field boundary points is very important. Based on the authors' experience and published results, some aspects of boundary condition treatment for the far field boundary are presented, with emphasis on the treatment of boundary conditions for upwind compact schemes. The consistent treatment of boundary conditions at the near boundary points is also discussed. Some numerical examples are given at the end of the paper; the results computed with the presented method are satisfactory.
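
As a concrete example of an interior compact scheme (the boundary treatment that this paper addresses is side-stepped here by using a periodic grid), the classical fourth-order Padé compact first derivative couples neighboring derivative values through a tridiagonal system. The dense solve below stands in for the usual tridiagonal algorithm; this is a generic textbook scheme, not the authors' upwind variant:

```python
import numpy as np

# Fourth-order Pade compact first derivative, periodic grid:
#   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})
n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(x)

A = np.eye(n)
for i in range(n):
    A[i, (i - 1) % n] = 0.25     # periodic wrap-around of the stencil
    A[i, (i + 1) % n] = 0.25
rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
dfdx = np.linalg.solve(A, rhs)

err = np.max(np.abs(dfdx - np.cos(x)))   # exact derivative of sin is cos
print(err < 1e-5)
```

With only a three-point stencil the scheme achieves fourth-order accuracy, which is why compact schemes are attractive for multi-scale flows.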

  20. Test - retest reliability of two instruments for measuring public attitudes towards persons with mental illness

    Leufstadius Christel


    Background Research has identified stigmatization as a major threat to successful treatment of individuals with mental illness. As a consequence several anti-stigma campaigns have been carried out. The results have been discouraging and the field suffers from a lack of evidence about interventions that work. There are few reports of psychometric data for the instruments used to assess stigma, which complicates research efforts. The aim of the present study was to investigate the test-retest reliability of the Swedish versions of the questionnaires FABI and "Changing Minds" and to examine the internal consistency of the two instruments. Method Two instruments, fear and behavioural intentions (FABI) and "Changing Minds", used in earlier studies on public attitudes towards persons with mental illness, were translated into Swedish and completed by 51 nursing students on two occasions, with an interval of three weeks. Test-retest reliability was calculated using the weighted kappa coefficient and internal consistency using Cronbach's alpha coefficient. Results Both instruments attain at best moderate test-retest reliability. For the Changing Minds questionnaire almost one fifth (17.9%) of the items present poor test-retest reliability, and the alpha coefficients for the subscales range between 0.19 and 0.46. All of the items in the FABI reach fair or moderate agreement between test and retest, and the questionnaire displays a high internal consistency (alpha 0.80). Conclusions There is a need for the development of psychometrically tested instruments within this field of research.

  1. A Novel Two-Terminal Reliability Analysis for MANET

    Xibin Zhao; Zhiyang You; Hai Wan


    Mobile ad hoc network (MANET) is a dynamic wireless communication network. Because of the dynamic and infrastructureless characteristics, MANET is vulnerable in reliability. This paper presents a novel reliability analysis for MANET. The node mobility effect and the node reliability based on a real MANET platform are modeled and analyzed. An effective Monte Carlo method for reliability analysis is proposed. A detailed evaluation is performed in terms of the experiment results.
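
A generic two-terminal Monte Carlo estimate can be sketched as follows. This is not the paper's platform-specific model: the toy graph, the per-node availability, and the trial count are all invented, with unreliable relay nodes between a source s and terminal t:

```python
import random
from collections import deque

def two_terminal_reliability(nodes, edges, node_p, s, t, trials=20000, seed=1):
    """Monte Carlo estimate of P(s and t connected) when each relay node
    is up independently with probability node_p (s and t always up)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = {v: (v in (s, t)) or rng.random() < node_p for v in nodes}
        seen, queue = {s}, deque([s])          # BFS over up nodes only
        while queue:
            v = queue.popleft()
            for u, w in edges:
                for a, b in ((u, w), (w, u)):
                    if a == v and up[b] and b not in seen:
                        seen.add(b)
                        queue.append(b)
        hits += t in seen
    return hits / trials

# Two disjoint relay paths, each up with probability 0.9:
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
est = two_terminal_reliability(nodes, edges, 0.9, "s", "t")
```

For this toy topology the exact answer is 1 - (1 - 0.9)^2 = 0.99, so the estimate gives a quick sanity check of the sampler.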

  3. Infants prefer to imitate a reliable person.

    Poulin-Dubois, Diane; Brooker, Ivy; Polonia, Alexandra


    Research has shown that preschoolers prefer to learn from individuals who are a reliable source of information. The current study examined whether the past reliability of a person's emotional signals influences infants' willingness to imitate that person. An emotional referencing task was first administered to infants in order to demonstrate the experimenter's credibility or lack thereof. Next, infants in both conditions watched as the same experimenter turned on a touch light using her forehead. Infants were then given the opportunity to reproduce this novel action. As expected, infants in the unreliable condition developed the expectation that the person's emotional cues were misleading. Thus, these infants were subsequently more likely to use their hands than their foreheads when attempting to turn on the light. In contrast, infants in the reliable group were more likely to imitate the experimenter's action using their foreheads. These results suggest that the reliability of the model influences infants' imitation.

  4. Flight control electronics reliability/maintenance study

    Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.


    Collection and analysis of data concerning the reliability and maintenance experience of flight control system electronics currently in use on passenger carrying jet aircraft are reported. Two airlines' B-747 fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.

  5. Mathematical reliability an expository perspective

    Mazzuchi, Thomas; Singpurwalla, Nozer


    In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...

  6. Evaluating Temporal Consistency in Marine Biodiversity Hotspots.

    Piacenza, Susan E; Thurman, Lindsey L; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lindsley, Amy J; Nelson, Jake; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Heppell, Selina S


    With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon's diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other
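
The per-year hotspot designation described above can be sketched with Shannon's index and a mean threshold. The cell values below are invented for illustration, and the real method's bias-avoidance steps are omitted:

```python
import math

def shannon_diversity(counts):
    """Shannon's H' from species counts in one grid cell."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

# Hypothetical per-cell diversity by year; a cell is a "hotspot" in a
# year when it exceeds the mean threshold across cells for that year.
years = {
    2010: {"A": 2.1, "B": 1.0, "C": 1.2},
    2011: {"A": 1.1, "B": 2.0, "C": 1.3},
}
hot_years = {c: 0 for c in ["A", "B", "C"]}
for vals in years.values():
    thresh = sum(vals.values()) / len(vals)
    for cell, h in vals.items():
        hot_years[cell] += h > thresh

# Temporal consistency = fraction of years each cell was a hotspot
consistency = {c: n / len(years) for c, n in hot_years.items()}
```

In this toy example no cell is a hotspot in both years, mirroring the paper's finding that no grid cell stayed a hotspot for more than half the time series.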

  7. Evaluating temporal consistency of long-term global NDVI datasets for trend analysis

    Tian, Feng; Fensholt, Rasmus; Verbesselt, Jan


    As a way to understand vegetation changes, trend analyses of NDVI (normalized difference vegetation index) time series data have been widely performed at regional to global scales. However, most long-term NDVI datasets are based upon multiple sensor systems, and unsuccessful corrections related to sensor shifts potentially introduce substantial uncertainties and artifacts in the analysis of trends. The temporal consistency of NDVI datasets should therefore be evaluated before performing trend analysis to obtain reliable results. In this study we analyze the temporal consistency of multi-sensor NDVI time series by analyzing the co-occurrence between breaks in the NDVI time series and sensor shifts from GIMMS3g (Global Inventory Modeling and Mapping Studies 3rd generation), VIP3 (Vegetation Index and Phenology version 3), LTDR4 (Long Term Data Record version 4) and SPOT-VGT (Système Pour l...

  8. Measuring consistency of autobiographical memory recall in depression.

    Semkovska, Maria


    Autobiographical amnesia assessments in depression need to account for normal changes in consistency over time, contribution of mood and type of memories measured. We report herein validation studies of the Columbia Autobiographical Memory Interview - Short Form (CAMI-SF), exclusively used in depressed patients receiving electroconvulsive therapy (ECT) but without previous published report of normative data. The CAMI-SF was administered twice with a 6-month interval to 44 healthy volunteers to obtain normative data for retrieval consistency of its Semantic, Episodic-Extended and Episodic-Specific components and assess their reliability and validity. Healthy volunteers showed significant large decreases in retrieval consistency on all components. The Semantic and Episodic-Specific components demonstrated substantial construct validity. We then assessed CAMI-SF retrieval consistencies over a 2-month interval in 30 severely depressed patients never treated with ECT compared with healthy controls (n=19). On initial assessment, depressed patients produced less episodic-specific memories than controls. Both groups showed equivalent amounts of consistency loss over a 2-month interval on all components. At reassessment, only patients with persisting depressive symptoms were distinguishable from controls on episodic-specific memories retrieved. Research quantifying retrograde amnesia following ECT for depression needs to control for normal loss in consistency over time and contribution of persisting depressive symptoms.

  9. Consistency analysis of accelerated degradation mechanism based on gray theory

    Yunxia Chen; Hongxia Chen; Zhou Yang; Rui Kang; Yi Yang


    A fundamental premise of accelerated testing is that the failure mechanism under elevated and normal stress levels should remain the same. Thus, verification of the consistency of failure mechanisms is essential during accelerated testing. A new consistency analysis method based on the gray theory is proposed for complex products. First of all, existing consistency analysis methods are reviewed, with a focus on the comparison of the differences among them. Then, the proposed consistency analysis method is introduced. Two effective gray prediction models, the gray dynamic model and the new information and equal dimensional (NIED) model, are adapted in the proposed method. The process to determine the dimension of the NIED model is also discussed, and a decision rule is expanded. Based on that, the procedure for applying the new consistency analysis method is developed. Finally, a case study of the consistency analysis of a reliability enhancement testing is conducted to demonstrate and validate the proposed method.
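
A gray dynamic model of the kind adapted here is typically the one-variable GM(1,1). The following is a textbook-form sketch, not the authors' implementation, applied to an invented near-exponential degradation series:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Gray GM(1,1) one-variable forecast (textbook form)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat)[-steps:]           # forecast of the raw series

# A roughly 10%-growth series; GM(1,1) should extrapolate it closely
series = [10.0, 11.0, 12.1, 13.3, 14.6]
print(gm11_forecast(series, steps=1))
```

Comparing such forecasts across stress levels is one simple way to check whether degradation trends remain of the same form, which is the intuition behind gray-theory consistency analysis.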

  10. NERF - A Computer Program for the Numerical Evaluation of Reliability Functions - Reliability Modelling, Numerical Methods and Program Documentation,


    ...designed to evaluate the reliability functions that result from the application of reliability analysis to the fatigue of aircraft structures, in particular...


    Serkan Volkan SARI


    results, it is seen that the alpha coefficient of the first group is 0.831 and that of the second group is 0.834. In summary, the following conclusions are reached about the scale. In the reliability analysis, internal consistency coefficients were obtained with more than one method and the results support each other. Expert opinion established the content validity of the scale. Item analysis showed that the item coefficients are sufficient. The construct validity of the scale was also analyzed; the factor loadings are adequate and the internal consistency is generally at a good level. Finally, it is shown that the scale has high reliability and validity, and it is suggested that the scale can be used by teachers or researchers working in the field of peer pressure. It is recommended that the scale be translated into other languages for further research, and that researchers who want to develop a scale can follow the steps of this study.

  12. Reliability of steam generator tubing

    Kadokami, E. [Mitsubishi Heavy Industries Ltd., Hyogo-ku (Japan)


    The author presents results of studies on the reliability of steam generator (SG) tubing. The basis for this work is that in Japan the issue of defects in SG tubing is addressed by the approach that any detected defect should be repaired, either by plugging the tube or by sleeving it. However, this leaves open the issue that there is a detection limit in practice, and the question of the effect of nondetectable cracks on the performance of tubing. These studies were commissioned to look at the safety issues involved in degraded SG tubing. The program has looked at a number of different issues. First was an assessment of the penetration and opening behavior of tube flaws due to internal pressure in the tubing. They have studied: penetration behavior of tube flaws; primary water leakage from through-wall flaws; and opening behavior of through-wall flaws. In addition, they have looked at the reliability of tubing with flaws during normal plant operation. Studies have also been done on the consequences of tube rupture accidents for the integrity of neighboring tubes.

  13. Consistent Design of Dependable Control Systems

    Blanke, M.


    Design of fault handling in control systems is discussed, and a method for consistent design is presented.

  14. Probability-consistent spectrum and code spectrum

    沈建文; 石树中


    In the seismic safety evaluation (SSE) for key projects, the probability-consistent spectrum (PCS), usually obtained from probabilistic seismic hazard analysis (PSHA), is not consistent with the design response spectrum given by the Code for Seismic Design of Buildings (GB50011-2001); sometimes there may be a remarkable difference between them. If the PCS is lower than the corresponding code design response spectrum (CDS), the seismic fortification criterion for key projects would be lower than that for general industrial and civil buildings. In this paper, the relation between PCS and CDS is discussed using an idealized simple potential seismic source. The results show that in most areas influenced mainly by potential sources of epicentral and regional earthquakes, the PCS is generally lower than the CDS at long periods. We point out that the long-period response spectra of the code should be further studied and combined with the probability method of seismic zoning as much as possible. Because of the uncertainties in SSE, it should be prudent to use the long-period response spectra given by SSE for key projects when they are lower than the CDS.

  15. Consistent mutational paths predict eukaryotic thermostability

    van Noort Vera


    Background Proteomes of thermophilic prokaryotes have been instrumental in structural biology and successfully exploited in biotechnology; however, many proteins required for eukaryotic cell function are absent from bacteria or archaea. With Chaetomium thermophilum, Thielavia terrestris and Thielavia heterothallica, three genome sequences of thermophilic eukaryotes have been published. Results Studying the genomes and proteomes of these thermophilic fungi, we found common strategies of thermal adaptation across the different kingdoms of Life, including amino acid biases and a reduced genome size. A phylogenetics-guided comparison of thermophilic proteomes with those of other, mesophilic Sordariomycetes revealed consistent amino acid substitutions associated with thermophily that were also present in an independent lineage of thermophilic fungi. The most consistent pattern is the substitution of lysine by arginine, which we could find in almost all lineages but which has not been extensively used in protein stability engineering. By exploiting mutational paths towards the thermophiles, we could predict particular amino acid residues in individual proteins that contribute to thermostability and validated some of them experimentally. By determining the three-dimensional structure of an exemplar protein from C. thermophilum (Arx1), we could also characterise the molecular consequences of some of these mutations. Conclusions The comparative analysis of these three genomes not only enhances our understanding of the evolution of thermophily, but also provides new ways to engineer protein stability.
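
Counting the lysine-to-arginine substitutions described above is straightforward once sequences are aligned. A toy sketch on invented, pre-aligned sequences (real analyses work on whole-proteome alignments):

```python
def count_substitutions(meso, thermo, frm="K", to="R"):
    """Count positions where the mesophile has `frm` and the thermophile `to`."""
    assert len(meso) == len(thermo)   # sequences assumed pre-aligned
    return sum(a == frm and b == to for a, b in zip(meso, thermo))

# Hypothetical aligned fragments; K -> R is the K/R bias described above
mesophile   = "MKKLAVEKGDK"
thermophile = "MRKLAVERGDR"
print(count_substitutions(mesophile, thermophile))
```

Tallying such counts per lineage, and comparing K→R against the reverse R→K direction, is the kind of evidence behind the "most consistent pattern" claim.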

  16. Reliability of the Discounting Inventory: An extension into substance-use population

    Malesza Marta


    Recent research introduced the Discounting Inventory, which allows the measurement of individual differences in delay, probabilistic, effort, and social discounting rates. The goal of this investigation was to determine several aspects of the reliability of the Discounting Inventory using the responses of 385 participants (200 non-smokers and 185 current smokers). Two types of reliability are of interest: internal consistency and test-retest stability. A secondary aim was to extend such reliability measures beyond non-clinical participants; the current study aimed to measure the reliability of the DI in nicotine-dependent and non-nicotine-dependent individuals. It is concluded that the internal consistency of the DI is excellent, and the test-retest reliability results suggest that items intended to measure three types of discounting were likely testing trait, rather than state, factors, regardless of whether "non-smokers" were included in, or excluded from, the analyses (probabilistic discounting scale scores being the exception). With these cautions in mind, however, the psychometric properties of the DI appear to be very good.

  17. Surface consistent finite frequency phase corrections

    Kimman, W. P.


    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore clear potential for improvement if the finite-frequency nature of wave propagation can be properly accounted for. Such a method is presented here, based on the Born approximation, the assumption of surface consistency, and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal, so its computation does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium-field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral-element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large ...
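The instantaneous phase central to the misfit above is conventionally obtained from the analytic signal. As a hedged illustration (not the authors' implementation), the sketch below builds the analytic signal of a linear sweep with a frequency-domain Hilbert transform and recovers its instantaneous frequency; all sweep parameters are invented.

```python
import numpy as np

def analytic_signal(x: np.ndarray) -> np.ndarray:
    """Analytic signal via the frequency domain (equivalent to a Hilbert transform)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

# Hypothetical linear sweep: 10 Hz to 60 Hz over 4 s at 1 kHz sampling.
fs = 1000.0
t = np.arange(0, 4.0, 1.0 / fs)
f0, f1 = 10.0, 60.0
k = (f1 - f0) / t[-1]                                # sweep rate, Hz/s
sweep = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

phase = np.unwrap(np.angle(analytic_signal(sweep)))  # instantaneous phase (rad)
inst_freq = np.gradient(phase, t) / (2 * np.pi)      # instantaneous frequency (Hz)
```

Away from the record edges, the recovered instantaneous frequency tracks the linear ramp f0 + k·t, which is what makes phase (rather than amplitude) a convenient misfit for sweep-like signals.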

  18. A meta-analysis of brain mechanisms of placebo analgesia: consistent findings and unanswered questions.

    Atlas, Lauren Y; Wager, Tor D


    Placebo treatments reliably reduce pain in the clinic and in the lab. Because pain is a subjective experience, it has been difficult to determine whether placebo analgesia is clinically relevant. Neuroimaging studies of placebo analgesia provide objective evidence of placebo-induced changes in brain processing and allow researchers to isolate the mechanisms underlying placebo-based pain reduction. We conducted formal meta-analyses of 25 neuroimaging studies of placebo analgesia and expectancy-based pain modulation. Results revealed that placebo effects and expectations for reduced pain elicit reliable reductions in activation during noxious stimulation in regions often associated with pain processing, including the dorsal anterior cingulate, thalamus, and insula. In addition, we observed consistent reductions during painful stimulation in the amygdala and striatum, regions implicated widely in studies of affect and valuation. This suggests that placebo effects are strongest on brain regions traditionally associated with not only pain, but also emotion and value more generally. Other brain regions showed reliable increases in activation with expectations for reduced pain. These included the prefrontal cortex (including dorsolateral, ventromedial, and orbitofrontal cortices), the midbrain surrounding the periaqueductal gray, and the rostral anterior cingulate. We discuss implications of these findings as well as how future studies can expand our understanding of the precise functional contributions of the brain systems identified here.

  19. Increased Reliability of Gas Turbine Components by Robust Coatings Manufacturing

    Sharma, A.; Dudykevych, T.; Sansom, D.; Subramanian, R.


    The expanding operational windows of advanced gas turbine components demand increasing performance capability from protective coating systems. This demand has led to the development of novel multi-functional, multi-material coating system architectures in recent years. In addition, the increasing dependency of components exposed to extreme environments on protective coatings results in more severe penalties in the case of a coating-system failure. This emphasizes that reliability and consistency of protective coating systems are as important as their superior performance. By means of examples, this paper describes the effects of scatter in material properties resulting from manufacturing variations on coating life predictions. A strong foundation in process-property-performance correlations, as well as regular monitoring and control of the coating process, is essential for a robust and well-controlled coating process. Proprietary and/or commercially available diagnostic tools can help in achieving these goals, but their usage in industrial settings is still limited. Various key contributors to process variability are briefly discussed, along with the limitations of existing process and product control methods. Other aspects that are important for product reliability and consistency in serial manufacturing, as well as advanced testing methodologies to simplify and enhance product inspection and improve objectivity, are briefly described.

  20. Validating and Investigating Reliability of Comprehensive Feeding Practices Questionnaire

    Saeid Doaei


    Background: The present research aims to validate and assess the reliability of the Comprehensive Feeding Practices Questionnaire (CFPQ). Materials and Methods: In this cross-sectional study, 150 mothers with 3-6 year old children in the city of Rasht were selected through cluster random sampling from public and private kindergartens in 2010. After confirming translation validity, the validity (content and structure) and reliability (test-retest reliability and internal consistency) of the questionnaire were examined. Results: The content validity of the questionnaire, except for questions 2, 16 and 46, was at a high level, and these three questions were omitted. Construct validity was studied via the consistency of factor scores with total questionnaire scores, which was satisfactory and varied between 0.30-0.72. Reliability was examined through test-retest and Cronbach's alpha methods: the Intraclass Correlation Coefficient (ICC) was between 0.80-0.91 and Cronbach's alpha was between 0.80-0.90. Conclusion: In general, the Comprehensive Feeding Practices Questionnaire (CFPQ) proved to be valid, and with respect to the results obtained in the present research, it can be used in research on child diet.

  1. Reliabilities of genomic estimated breeding values in Danish Jersey

    Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng;


    In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods...... of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for Single-nucleotide polymorphism (SNP) markers using the Illumina 54 K chip. A Bayesian method was used...... index pre-selection only. Averaged across traits, the estimates of reliability of DGVs ranged from 0.20 for validation on the most recent 3 years of bulls and up to 0.42 for expected reliabilities. Reliabilities from the cross-validation were on average 0.24. For the individual traits, the reliability...

  2. Force Concept Inventory-based multiple-choice test for investigating students’ representational consistency

    Pasi Nieminen


    This study investigates students’ ability to interpret multiple representations consistently (i.e., representational consistency) in the context of the force concept. For this purpose we developed the Representational Variant of the Force Concept Inventory (R-FCI), which makes use of nine items from the 1995 version of the Force Concept Inventory (FCI). These original FCI items were redesigned using various representations (such as motion map, vectorial, and graphical), yielding 27 multiple-choice items concerning four central concepts underpinning the force concept: Newton’s first, second, and third laws, and gravitation. We provide some evidence for the validity and reliability of the R-FCI; this analysis is limited to the student population of one Finnish high school. The students took the R-FCI at the beginning and at the end of their first high school physics course. We found that students’ (n=168) representational consistency (whether scientifically correct or not) varied considerably depending on the concept. On average, representational consistency and scientifically correct understanding increased during the instruction, although in the post-test only a few students performed consistently both in terms of representations and scientifically correct understanding. We also compared students’ (n=87) results on the R-FCI and the FCI, and found that they correlated quite well.

  3. Validity and Reliability of Farsi Version of Youth Sport Environment Questionnaire

    Mohammad Ali Eshghi


    The Youth Sport Environment Questionnaire (YSEQ) was developed from the Group Environment Questionnaire, a well-known measure of team cohesion. The aim of this study was to adapt and examine the reliability and validity of the Farsi version of the YSEQ. This version was completed by 455 athletes aged 13–17 years. Results of confirmatory factor analysis indicated that a two-factor solution showed a good fit to the data. The results also revealed that the Farsi YSEQ showed high internal consistency, test-retest reliability, and good concurrent validity. This study indicates that the Farsi version of the YSEQ is a valid and reliable measure for assessing team cohesion in sport settings.

  4. Analysis on testing and operational reliability of software

    ZHAO Jing; LIU Hong-wei; CUI Gang; WANG Hui-qiang


    Software reliability was estimated based on NHPP software reliability growth models. Testing reliability and operational reliability may be essentially different. On the basis of analyzing similarities and differences between the testing phase and the operational phase, and using the concepts of operational reliability and testing reliability, different forms of the comparison between the operational failure ratio and the predicted testing failure ratio were conducted, and the mathematical discussion and analysis were performed in detail. Finally, optimal software release was studied using software failure data. The results show that two kinds of conclusions can be derived by applying this method: one is to continue testing to meet the required reliability level of users, and the other is that testing stops when the required operational reliability is met, so that the testing cost can be reduced.
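As a rough illustration of the NHPP reliability growth models referenced above, the sketch below uses the Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) to estimate mission reliability before and after a period of testing. The parameter values are hypothetical, and this model is only one common NHPP instance, not necessarily the one used in the paper.

```python
import numpy as np

def go_mean(t: float, a: float, b: float) -> float:
    """Goel-Okumoto expected cumulative number of failures by test time t."""
    return a * (1.0 - np.exp(-b * t))

def go_reliability(x: float, t: float, a: float, b: float) -> float:
    """P(no failure in (t, t+x]) for the NHPP with mean value function go_mean."""
    return float(np.exp(-(go_mean(t + x, a, b) - go_mean(t, a, b))))

# Hypothetical parameters: 100 expected total faults, detection rate 0.05 per hour.
a, b = 100.0, 0.05
r_early = go_reliability(10.0, 0.0, a, b)    # 10-hour mission reliability before testing
r_late = go_reliability(10.0, 40.0, a, b)    # same mission after 40 hours of testing
```

Because the expected number of remaining faults shrinks as testing proceeds, `r_late` exceeds `r_early`; an optimal-release analysis trades this reliability gain against the cost of further testing.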

  5. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed

    Kottner, Jan; Audigé, Laurent; Brorson, Stig;


    Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need...... for rigorously conducted interrater and intrarater reliability and agreement studies. Information about sample selection, study design, and statistical analysis is often incomplete. Because of inadequate reporting, interpretation and synthesis of study results are often difficult. Widely accepted criteria......, standards, or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies....

  6. A Bayesian Framework for Reliability Analysis of Spacecraft Deployments

    Evans, John W.; Gallo, Luis; Kaminsky, Mark


    Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction for the reliability of the complex Sunshield deployment, with credibility limits, within this two stage Bayesian framework.
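The first stage described, a non-informative prior updated with binomial success/failure heritage data, can be sketched in closed form because a Beta prior is conjugate to the binomial likelihood (so simple sampling stands in for MCMC here). The deployment counts below are invented for illustration, not JWST data.

```python
import numpy as np

# Hypothetical heritage record: 118 successful deployments out of 120 attempts.
n_attempts, n_success = 120, 118

# Jeffreys (non-informative) Beta(0.5, 0.5) prior; with a binomial likelihood the
# posterior is Beta(0.5 + successes, 0.5 + failures) in closed form, so we can
# draw directly from it instead of running an MCMC chain.
rng = np.random.default_rng(1)
posterior = rng.beta(0.5 + n_success, 0.5 + (n_attempts - n_success), size=100_000)

post_mean = posterior.mean()
lo, hi = np.percentile(posterior, [2.5, 97.5])   # 95% credibility limits
```

The posterior mean sits near the raw success fraction, and the credibility interval quantifies how much the limited heritage sample leaves uncertain, which is the quantity the second (test-data) stage would then refine.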

  7. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B


    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.

  8. Credibility and consistency earn users' trust.

    Stella, Nita


    When searching for health information on the Web, users are primarily concerned with a site's credibility. But the focus group work I and my colleagues have done at Consumer Health Interactive indicates that encouraging health information seekers to return to your site requires more than reliable information. The site must also give the appearance of being authoritative and trustworthy.

  9. Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series

    Gao, Jiti; Kanaya, Shin; Li, Degui


    This paper establishes uniform consistency results for nonparametric kernel density and regression estimators when time series regressors concerned are nonstationary null recurrent Markov chains. Under suitable regularity conditions, we derive uniform convergence rates of the estimators. Our...... results can be viewed as a nonstationary extension of some well-known uniform consistency results for stationary time series....

  10. The reliability of two visual motor integration tests used with healthy adults.

    Brown, Ted; Chinner, Alexandra; Stagnitti, Karen


    Occupational therapists often assess the visual motor integration (VMI) skills of children, adults, and the elderly, which are part of the Body Functions and Structures of the International Classification of Functioning, Disability and Health. Objective. As it is imperative that therapists use tests and measures with strong psychometric properties, this study aims to examine the reliability of two VMI tests used with adults. Method. Sixty-one healthy adults, 18 males and 43 females, with an average age of 31.82 years, completed the Developmental Test of Visual Motor Integration (DTVMI) and the Full Range Test of Visual Motor Integration (FRTVMI). The Cronbach's alpha coefficient was used to examine the tests' internal consistency, while Spearman's rho correlation was used to evaluate the test-retest reliability, intrarater reliability, and interrater reliability of the two VMI tests. Results. The Cronbach's alpha coefficient for the DTVMI and FRTVMI was 0.66 and 0.80, respectively. The test-retest reliability coefficient was 0.77 (p ... VMI tests appear to exhibit reasonable levels of reliability and are recommended for use with adults and the elderly.

  11. Interrater and Intrarater Reliability of the Tuck Jump Assessment by Health Professionals of Varied Educational Backgrounds

    Lisa A. Dudley


    Objective. The Tuck Jump Assessment (TJA), a clinical plyometric assessment, identifies 10 jumping and landing technique flaws. The study objective was to investigate TJA interrater and intrarater reliability with raters of different educational and clinical backgrounds. Methods. 40 participants were video recorded performing the TJA using the published protocol and instructions. Five raters of varied educational and clinical backgrounds scored the TJA. The scores of the 10 technique flaws were summed for the total TJA score. Approximately one month later, 3 raters scored the videos again. Intraclass correlation coefficients determined interrater (5 and 3 raters for the first and second sessions, respectively) and intrarater (3 raters) reliability. Results. Interrater reliability with 5 raters was poor (ICC = 0.47; 95% confidence interval (CI) 0.33–0.62). Interrater reliability between the 3 raters who completed 2 scoring sessions improved from 0.52 (95% CI 0.35–0.68) for session one to 0.69 (95% CI 0.55–0.81) for session two. Intrarater reliability was poor to moderate, ranging from 0.44 (95% CI 0.22–0.68) to 0.72 (95% CI 0.55–0.84). Conclusion. The published protocol and training of raters were insufficient to allow consistent TJA scoring. There may be a learning effect with the TJA, since interrater reliability improved with repetition. TJA instructions and training should be modified and enhanced before clinical implementation.
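Intraclass correlation coefficients like those reported above can be computed from a two-way ANOVA decomposition. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) on synthetic ratings; the subject and rater counts and the error level are assumptions, and the specific ICC variant used in the study is not stated here.

```python
import numpy as np

def icc_2_1(Y: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y has shape (n_subjects, k_raters)."""
    n, k = Y.shape
    grand = Y.mean()
    subj = Y.mean(axis=1)
    rater = Y.mean(axis=0)
    msr = k * ((subj - grand) ** 2).sum() / (n - 1)    # between-subjects mean square
    msc = n * ((rater - grand) ** 2).sum() / (k - 1)   # between-raters mean square
    mse = ((Y - subj[:, None] - rater[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic data: 40 subjects' true scores, 5 raters with modest random error.
rng = np.random.default_rng(2)
truth = rng.normal(0.0, 2.0, size=(40, 1))
ratings = truth + rng.normal(0.0, 0.5, size=(40, 5))
icc = icc_2_1(ratings)
```

Because rater error here is small relative to between-subject spread, the ICC comes out high; the poor values in the study above correspond to rater disagreement that is large relative to that spread.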

  12. On the consistent use of Constructed Observables

    Trott, Michael


    We define "constructed observables" as relating experimental measurements to terms in a Lagrangian while simultaneously making assumptions about possible deviations from the Standard Model (SM) in other Lagrangian terms. Ensuring that the SM effective field theory (EFT) is constrained correctly when using constructed observables requires that their defining conditions are imposed on the EFT in a manner that is consistent with the equations of motion. Failing to do so can result in a "functionally redundant" operator basis and the wrong expectation as to how experimental quantities are related in the EFT. We illustrate the issues involved by considering the $\rm S$ parameter and the off-shell triple gauge coupling (TGC) vertices. We show that the relationships between $h \rightarrow V \bar{f} \, f$ decay and the off-shell TGC vertices are subject to these subtleties, and how the connections between these observables vanish in the limit of strong bounds due to LEP. The challenge of using constructed observables...

  13. Consistently weighted measures for complex network topologies

    Heitzig, Jobst; Zou, Yong; Marwan, Norbert; Kurths, Jürgen


    When network and graph theory are used in the study of complex systems, a typically finite set of nodes of the network under consideration is frequently either explicitly or implicitly considered representative of a much larger finite or infinite set of objects of interest. The selection procedure, e.g., formation of a subset or some kind of discretization or aggregation, typically results in individual nodes of the studied network representing quite differently sized parts of the domain of interest. This heterogeneity may induce substantial bias and artifacts in derived network statistics. To avoid this bias, we propose an axiomatic scheme based on the idea of {\\em node splitting invariance} to derive consistently weighted variants of various commonly used statistical network measures. The practical relevance and applicability of our approach is demonstrated for a number of example networks from different fields of research, and is shown to be of fundamental importance in particular in the study of climate n...

  14. Reliability of automotive and mechanical engineering. Determination of component and system reliability

    Bertsche, Bernd [Stuttgart Univ. (Germany). Inst. fuer Maschinenelemente


    In the contemporary climate of global competition in every branch of engineering and manufacture, extensive customer surveys have shown that, above every other attribute, reliability stands as the most desired feature in a finished product. In this relentless fight for survival, any organisation that neglects the pursuit of excellence in reliability does so at serious cost. Reliability in Automotive and Mechanical Engineering draws together a wide spectrum of diverse and relevant applications and analyses in reliability engineering, distilled into an attractive and well-documented volume. Practising engineers are challenged with the formidable task of simultaneously improving reliability and reducing the costs and down-time due to maintenance. The volume brings together eleven chapters to highlight the importance of the interrelated reliability and maintenance disciplines. They represent the development trends and progress that make this book essential basic material for all research academics, planners, and maintenance executives who have the responsibility to implement the findings and maintenance audits into a cohesive reliability policy. Although the book is centred on automotive engineering, the examples and overall treatise can be applied to a wide range of professional practices. The book will be a valuable source of information for those concerned with improved manufacturing performance and the formidable task of optimising reliability. (orig.)

  15. Thermodynamically consistent model calibration in chemical kinetics

    Goutsias John


    Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function, as well as to estimate thermodynamically feasible values for the parameters of new ...

  16. Reliability Analysis of High Rockfill Dam Stability

    Ping Yi


    A program, 3DSTAB, combining slope stability analysis and reliability analysis is developed and validated. In this program, the limit equilibrium method is utilized to calculate safety factors of critical slip surfaces. The first-order reliability method is used to compute reliability indexes corresponding to critical probabilistic surfaces. When derivatives of the performance function are calculated by the finite difference method, the previous iteration’s critical slip surface is saved and reused. This sequential approximation strategy notably improves efficiency. Using this program, stability reliability analyses of concrete-faced rockfill dams and earth-core rockfill dams with different heights and different slope ratios are performed. The results show that both safety factors and reliability indexes decrease as the dam’s slope increases at a constant height and as the dam’s height increases at a constant slope. They decrease dramatically as the dam height increases from 100 m to 200 m, while they decrease slowly once the dam height exceeds 250 m, which deserves attention. Additionally, both safety factors and reliability indexes of the upstream slope of earth-core rockfill dams are higher than those of the downstream slope. Thus, downstream slope stability is the key failure mode for earth-core rockfill dams.
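The connection between the safety factor and the reliability index described above can be sketched with a first-order second-moment (FOSM) approximation on the performance function g = Fs − 1, cross-checked by Monte Carlo. The safety-factor statistics and the normality assumption below are hypothetical, not taken from the dam analyses.

```python
import math
import numpy as np

# Hypothetical safety-factor statistics for one slip surface (normal approximation).
mu_fs, sigma_fs = 1.8, 0.25

# First-order reliability index on the performance function g = Fs - 1:
# failure occurs when the safety factor drops below 1.
beta = (mu_fs - 1.0) / sigma_fs
p_fail = 0.5 * math.erfc(beta / math.sqrt(2.0))   # P(g < 0) = Phi(-beta)

# Monte Carlo cross-check of the same failure probability.
rng = np.random.default_rng(3)
samples = rng.normal(mu_fs, sigma_fs, size=1_000_000)
p_fail_mc = float((samples < 1.0).mean())
```

This shows why the paper can report safety factors and reliability indexes moving together: with fixed scatter, a lower mean safety factor directly lowers beta and raises the failure probability.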

  17. Effect of Maintenance on Computer Network Reliability

    Rima Oudjedi Damerdji


    In the era of new information technologies, computer networks are indispensable in any large organization, where they are organized so as to form powerful internal means of communication. In the context of dependability, the reliability parameter proves fundamental for evaluating the performance of such systems. In this paper, we study the reliability evaluation of a real computer network through three reliability models. The computer network considered (a set of interconnected PCs and a server) is located in a company in western Algeria dedicated to the production of ammonia and fertilizers. The results permit a comparison of the three models to determine the most appropriate reliability model for the studied network and thus contribute to improving the quality of the network. In order to anticipate system failures as well as improve the reliability and availability of the network, we must put in place an adequate and effective maintenance policy based on a recent competing-risks maintenance model, the Alert-Delay model. Finally, dependability measures such as MTBF and reliability are calculated to assess the effectiveness of the maintenance strategies and thus validate the Alert-Delay model.

  18. Analysis of the consistency of the detection results of glycosylated hemoglobin (HbA1c) by SMART POCT and Hitachi 7180 biochemical analyzers

    田建红; 汪屹; 贾江花; 章玉胜


    Objective: To analyze the consistency of glycosylated hemoglobin (HbA1c) results obtained with the SMART POCT and Hitachi 7180 biochemical analyzers. Methods: The precision of HbA1c measurement on the SMART POCT and Hitachi 7180 analyzers was first evaluated; then 20 randomly selected samples were tested on both instruments, and a correlation analysis of the paired results was conducted. Results: Both instruments showed high precision, and their results correlated well, with a correlation coefficient (r²) of 0.9811. Conclusion: HbA1c results from the SMART POCT and Hitachi 7180 biochemical analyzers are highly consistent.
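The consistency statistic reported, a squared correlation between paired measurements from the two analyzers, takes only a few lines to reproduce. The paired HbA1c values below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Simulate 20 hypothetical paired HbA1c measurements (%): each analyzer reads
# the same true value plus its own small random measurement error.
rng = np.random.default_rng(4)
true_vals = rng.uniform(5.0, 11.0, size=20)
poct = true_vals + rng.normal(0.0, 0.15, size=20)    # stand-in for SMART POCT readings
lab = true_vals + rng.normal(0.0, 0.15, size=20)     # stand-in for Hitachi 7180 readings

r = np.corrcoef(poct, lab)[0, 1]
r_squared = r ** 2
```

With measurement error small relative to the biological range, r² approaches 1, mirroring the 0.9811 reported; a high r² alone does not rule out a constant bias between instruments, which is why precision was evaluated separately.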

  19. [Factorial structure and reliability of Fisher, King & Tague's self-directed learning readiness scale in Chilean medical students].

    Fasce H, Eduardo; Pérez V, Cristhian; Ortiz M, Liliana; Parra P, Paula; Matus B, Olga


    Continuous education is crucial for physicians; therefore, medical schools must teach self-directed learning skills to their students. The aim was to evaluate the factorial structure and reliability of the Spanish version of the Self-Directed Learning Readiness Scale of Fisher, King & Tague, applied to medical students. The survey was answered by 330 students aged between 17 and 26 years (58% men, with 10 missing cases). The factorial structure, internal reliability, and temporal stability of the scale were evaluated. Exploratory factor analysis, conducted using a principal factor method, identified five factors in the structure of the survey. Internal consistency was adequate, with Cronbach's alpha between 0.66 and 0.88. Test-retest reliability, comparing the results of the survey applied six months after the first administration, showed Pearson correlation coefficients that fluctuated between 0.399 and 0.68. These results show a well-defined factorial structure with adequate reliability of the survey.

  20. Using Economic Experiments to Test the Effect of Reliability Pricing and Self-Sizing on the Private Provision of a Public Good Results: The Case of Constructing Water Conveyance Infrastructure to Mitigate Water Quantity and Quality Concerns in the Sacramento-San Joaquin Delta

    Kaplan, J.; Howitt, R. E.; Kroll, S.


    Public financing of public projects is becoming more difficult with growing political and financial pressure to reduce the size and scope of government action. Private provision is possible but is often doomed by under-provision. If, however, market-like mechanisms could be incorporated into the solicitation of funds to finance the provision of the good, because, for example, the good is supplied stochastically and is divisible, then we would expect fewer incentives to free-ride and greater efficiency in providing the public good. In a controlled computer-based economic experiment, we evaluate two market-like conditions (reliability-pricing allocation and self-sizing of the good) that are designed to reduce under-provision. The results suggest that financing an infrastructure project whose delivery is allocated by reliability pricing rather than historical allocation yields significantly greater price-formation efficiency and less free riding, whether the project is of a fixed size determined by external policy makers or is determined endogenously by the sum of private contributions. When reliability pricing and the self-sizing (endogenous) mechanism are used in combination, free riding is reduced the most among the tested treatments. Furthermore, and as expected, self-sizing combined with historical allocations results in the worst level of free riding. The setting for this treatment creates an incentive to undervalue willingness to pay, since very low contributions still return positive earnings as long as enough contributions are raised for a single unit. If everyone perceives that everyone else is undervaluing their contributions, the incentive grows stronger, and we see the greatest degree of free riding among the treatments.
Lastly, the results from the analysis suggested that the rebate rule may have encouraged those with willingness to pay values less than the cost of the project to feel confident when contributing more than their willingness to pay and