WorldWideScience

Sample records for providing reliable estimates

  1. High-density surface electromyography provides reliable estimates of motor unit behavior.

    Science.gov (United States)

    Martinez-Valdes, E; Laine, C M; Falla, D; Mayer, F; Farina, D

    2016-06-01

    To assess the intra- and inter-session reliability of estimates of motor unit behavior and muscle fiber properties derived from high-density surface electromyography (HDEMG), ten healthy subjects performed submaximal isometric knee extensions during three recording sessions (separate days) at 10%, 30%, 50% and 70% of their maximum voluntary effort. The discharge timings of motor units of the vastus lateralis and medialis muscles were automatically identified from HDEMG by a decomposition algorithm. We characterized the number of detected motor units, their discharge rates, the coefficient of variation of their inter-spike intervals (CoVisi), the action potential conduction velocity, and the peak-to-peak amplitude. Reliability was assessed for each motor unit characteristic by the intra-class correlation coefficient (ICC). Additionally, a pulse-to-noise ratio (PNR) was calculated to verify the accuracy of the decomposition. Good to excellent reliability within and between sessions was found for all motor unit characteristics at all force levels (ICC > 0.8), with the exception of CoVisi, which presented poor reliability; decomposition accuracy, verified by the PNR, exceeded 95%. Motor unit features can be assessed non-invasively and reliably within and across sessions over a wide range of force levels. These results suggest that it is possible to characterize motor units in longitudinal intervention studies. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
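
    Reliability indices of the kind reported above can be computed from a subjects × sessions matrix of measurements. Below is a minimal sketch of the consistency form ICC(3,1); the discharge-rate values are invented for illustration and are not data from the study.

```python
# Intra-class correlation ICC(3,1) from a subjects x sessions matrix.
# Data are hypothetical mean discharge rates (pulses/s) for 5 subjects
# measured in 2 sessions; not values from the study.

def icc_consistency(data):
    n = len(data)          # subjects (rows)
    k = len(data[0])       # sessions (columns)
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse)

sessions = [[8.1, 8.3], [9.4, 9.1], [10.2, 10.5], [11.0, 11.2], [12.3, 12.0]]
icc = icc_consistency(sessions)  # high: subjects keep their rank order
```

    With consistent session-to-session values as above, the ICC falls in the "excellent" range (> 0.9), matching the qualitative thresholds used in the abstract.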

  2. Can high resolution 3D topographic surveys provide reliable grain size estimates in gravel bed rivers?

    Science.gov (United States)

    Pearson, E.; Smith, M. W.; Klaar, M. J.; Brown, L. E.

    2017-09-01

    High resolution topographic surveys such as those provided by Structure-from-Motion (SfM) contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-metre scale topographic variability (or 'surface roughness') to sediment grain size by deriving empirical relationships between the two. In fluvial applications, such relationships permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing improved data to drive three-dimensional hydraulic models, allowing rapid geomorphic monitoring of sub-reach river restoration projects, and enabling more robust characterisation of riverbed habitats. However, comparison of previously published roughness-grain-size relationships shows substantial variability between field sites. Using a combination of over 300 laboratory and field-based SfM surveys, we demonstrate the influence of inherent survey error, irregularity of natural gravels, particle shape, grain packing structure, sorting, and form roughness on roughness-grain-size relationships. Roughness analysis from SfM datasets can accurately predict the diameter of smooth hemispheres, though natural, irregular gravels result in a higher roughness value for a given diameter and different grain shapes yield different relationships. A suite of empirical relationships is presented as a decision tree which improves predictions of grain size. By accounting for differences in patch facies, large improvements in D50 prediction are possible. SfM is capable of providing accurate grain size estimates, although further refinement is needed for poorly sorted gravel patches, for which c-axis percentiles are better predicted than b-axis percentiles.
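
    Roughness-grain-size relationships of the kind compared above are typically power laws fitted in log-log space. The sketch below uses synthetic values generated from a known power law (the coefficients and data are illustrative, not those derived in the paper):

```python
import math

# Synthetic (roughness, D50) pairs generated from D50 = 2.5 * r**0.9,
# a known power law; real SfM patch data would be noisy.
roughness = [0.01, 0.02, 0.04, 0.08]
d50 = [2.5 * r ** 0.9 for r in roughness]

# Least-squares fit of log(D50) = log(a) + b * log(r)
xs = [math.log(r) for r in roughness]
ys = [math.log(d) for d in d50]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
# With noise-free data the fit recovers the generating law exactly.
```

    In practice one such fit would be made per patch facies, which is what the decision-tree approach in the paper formalizes.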

  3. Line Transect and Triangulation Surveys Provide Reliable Estimates of the Density of Kloss' Gibbons (Hylobates klossii) on Siberut Island, Indonesia.

    Science.gov (United States)

    Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias

    2013-02-01

    Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km(2). Triangulation conducted from 12 listening posts during the same period revealed a similar density estimate of 5.0 clusters/km(2). Coefficients of variation of cluster density estimates were slightly higher for triangulation (0.24) than for line transects (0.17), resulting in lower precision for detecting changes in cluster densities with triangulation; nevertheless, the triangulation method also may be appropriate.
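
    The line-transect density figure quoted above follows the standard distance-sampling estimator D = n / (2 · L · ESW). A sketch using the survey effort and encounter count from the abstract, with a hypothetical effective strip half-width (the actual ESW is not reported here):

```python
# Conventional distance-sampling density estimate: D = n / (2 * L * esw)
n_encounters = 101      # auditory and visual encounters (from the abstract)
effort_km = 155.5       # total transect length walked (from the abstract)
esw_km = 0.06           # hypothetical effective strip half-width, km

density = n_encounters / (2.0 * effort_km * esw_km)  # clusters per km^2
```

    With this assumed ESW the estimate lands near the 5.5 clusters/km² reported; in a real analysis the ESW is itself estimated by fitting a detection function to the perpendicular distances.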

  4. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper...
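
    Crude Monte Carlo estimation of such a reliability profile can be sketched in a few lines: sample a resistance and a load, apply a deterioration law, and count non-failures at each age. The distributions and the linear deterioration law below are invented for illustration, not taken from the paper.

```python
import random

random.seed(42)

def reliability_at(t, n_samples=100_000):
    """Crude Monte Carlo estimate of P(R(t) > S) under assumed deterioration."""
    survived = 0
    for _ in range(n_samples):
        r0 = random.gauss(10.0, 1.0)      # initial resistance (illustrative)
        s = random.gauss(6.0, 1.0)        # load effect (illustrative)
        r = r0 * (1.0 - 0.01 * t)         # assumed linear deterioration law
        if r > s:
            survived += 1
    return survived / n_samples

# Reliability evaluated at three ages gives points on the
# "reliability distribution" over the service life.
rel = [reliability_at(t) for t in (0, 20, 40)]
```

    As the paper notes, the point is the methodology: any deterioration model can be dropped into `reliability_at` without changing the estimator.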

  5. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where...... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena...

  6. Reliable Function Approximation and Estimation

    Science.gov (United States)

    2016-08-16

    scope of applications. During the tenure of this award, as anticipated, the PI developed a range of reliable and structure-aware sampling theorems based on ... geometric mean inequality for products of three matrices. A. Israel, F. Krahmer, and R. Ward. Linear Algebra and its Applications 488, 2016, 1-12.

  7. Estimating Reliability with Discrete Growth Models.

    Science.gov (United States)

    1988-03-01

    associated with this estimate remained high at a relatively constant value. So, while the mean estimate of system reliability was very... Verifying high reliability inherently requires significantly more testing than does verifying relatively moderate reliability, since items are tested... limitations. This limitation typically manifested itself at relatively high system reliability. If a failure cause with a very low probability of

  8. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  9. Estimating Examination Failure Rates and Reliability Prior to Administration.

    Science.gov (United States)

    McIntosh, Vergil M.

    Using estimates of item ease and item discrimination, procedures are provided for computing estimates of the reliability and percentage of failing scores for tests assembled from these items. Two assumptions are made: that the average item coefficient will be approximately equal to the average of the estimated coefficients and that the score…

  10. Reliability estimation for single dichotomous items

    NARCIS (Netherlands)

    Meijer, R.R.; Sijtsma, Klaas; Molenaar, Ivo W.

    1994-01-01

    Three methods for the estimation of the reliability of single dichotomous items are discussed. All methods are based on the assumptions of nondecreasing and nonintersecting item response functions and the Mokken model of double monotonicity. Based on analytical and Monte Carlo studies, it is…

  11. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...... determined from central composite designs. In a two phase algorithm the second order surface is adjusted to the domain of the most likely failure point and both FORM and SORM estimates are obtained. The algorithm is implemented as a safeguard algorithm so non-converged solutions are avoided. Furthermore...

  12. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    Science.gov (United States)

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement…
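
    In its simplest form, the reliability adjustment described above is empirical-Bayes shrinkage of each physician's observed rate toward the group average, weighted by a reliability that grows with case volume. A sketch with invented variance components and rates:

```python
def reliability_adjust(observed_rate, n_cases, group_mean,
                       var_between, var_within):
    """Shrink an observed rate toward the group mean.

    reliability = signal / (signal + noise / n); all inputs illustrative.
    """
    reliability = var_between / (var_between + var_within / n_cases)
    return reliability * observed_rate + (1.0 - reliability) * group_mean

GROUP_MEAN = 0.10       # group average complication rate (invented)
VAR_BETWEEN = 0.002     # between-physician variance (invented)
VAR_WITHIN = 0.09       # per-case variance, roughly p*(1-p) (invented)

low_volume = reliability_adjust(0.30, 20, GROUP_MEAN, VAR_BETWEEN, VAR_WITHIN)
high_volume = reliability_adjust(0.30, 500, GROUP_MEAN, VAR_BETWEEN, VAR_WITHIN)
# The low-volume physician's rate is shrunk much closer to the group mean,
# reflecting the greater influence of random error at small sample sizes.
```

    This is only the simplest member of the family of Bayesian methods the case study alludes to, but it captures the core idea: identical observed rates earn different adjusted rates depending on how reliably they were measured.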

  13. Reliability Estimation for Double Containment Piping

    Energy Technology Data Exchange (ETDEWEB)

    L. Cadwallader; T. Pinna

    2012-08-01

    Double walled or double containment piping is considered for use in the ITER international project and other next-generation fusion device designs to provide an extra barrier for tritium gas and other radioactive materials. The extra barrier improves confinement of these materials and enhances safety of the facility. This paper describes some of the design challenges in designing double containment piping systems. There is also a brief review of a few operating experiences of double walled piping used with hazardous chemicals in different industries. This paper recommends approaches for the reliability analyst to use to quantify leakage from a double containment piping system in conceptual and more advanced designs. The paper also cites quantitative data that can be used to support such reliability analyses.

  14. Data Service Provider Cost Estimation Tool

    Science.gov (United States)

    Fontaine, Kathy; Hunolt, Greg; Booth, Arthur L.; Banks, Mel

    2011-01-01

    The Data Service Provider Cost Estimation Tool (CET) and Comparables Database (CDB) package provides to NASA's Earth Science Enterprise (ESE) the ability to produce the full range of year-by-year lifecycle cost estimates for the implementation and operation of data service providers required by ESE to support its science and applications programs. The CET can make estimates dealing with staffing costs, supplies, facility costs, network services, hardware and maintenance, commercial off-the-shelf (COTS) software licenses, software development and sustaining engineering, and the changes in costs that result from changes in workload. Data service providers may be stand-alone or embedded in flight projects, field campaigns, research or applications projects, or other activities. The CET and CDB package employs a cost-estimation-by-analogy approach. It is based on a new, general data service provider reference model that provides a framework for construction of a database by describing existing data service providers that are analogs (or comparables) to planned, new ESE data service providers. The CET implements the staff effort and cost estimation algorithms that access the CDB and generates the lifecycle cost estimate for a new data service provider. This creates a common basis for an ESE proposal evaluator to consider projected data service provider costs.

  15. Hardware and software reliability estimation using simulations

    Science.gov (United States)

    Swern, Frederic L.

    1994-01-01

    The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.

  16. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

    The article presents Bring Your Own Device (BYOD) as a network model that provides the user with reliable access to network resources. BYOD is a dynamically developing model that can be applied in many areas. A research network was launched in order to carry out tests, in which the Work Folders service was used as the BYOD service. This service allows the user to synchronize files between the device and the server. Access to the network is provided over a wireless connection using the 802.11n standard. The obtained results are presented and analyzed in this article.

  17. Estimation of the Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Pirzada, G. B. : Ph.D.

    In this thesis, work related to fundamental conditions has been extended to non-fundamental or the general case of probabilistic analysis. Finally, using the β-unzipping technique a door has been opened to system reliability analysis of plastic slabs. An attempt has been made in this thesis...... to give a probabilistic treatment of plastic slabs which is parallel to the deterministic and systematic treatment of plastic slabs by Nielsen (3). The fundamental reason is that in Nielsen (3) the treatment is based on a deterministic modelling of the basic material properties for the reinforced...

  18. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Science.gov (United States)

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  19. Stability of Three-Wave Simplex Estimates of Reliability.

    Science.gov (United States)

    Coenders, Germa; Saris, Willem E.; Batista-Foguet, Joan M.; Andreenkova, Anna

    1999-01-01

    Illustrates that sampling variance can be very large when a three-wave quasi simplex model is used to obtain reliability estimates. Also shows that, for the reliability parameter to be identified, the model assumes a Markov process. These problems are evaluated with both real and Monte Carlo data. (SLD)

  20. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
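
    Two of the single-administration coefficients named above can be computed directly from the inter-item covariance matrix; Guttman's λ2 is always at least as large as α. A sketch on a small invented persons × items score matrix:

```python
# Cronbach's alpha and Guttman's lambda-2 from a persons x items matrix.
# The scores below are invented; k is the number of items.

def covariance_matrix(scores):
    n, k = len(scores), len(scores[0])
    means = [sum(row[j] for row in scores) / n for j in range(k)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in scores)
             / (n - 1) for j in range(k)] for i in range(k)]

scores = [[1, 1, 2], [2, 2, 2], [3, 3, 3], [4, 4, 3], [5, 4, 5], [6, 6, 6]]
cov = covariance_matrix(scores)
k = len(cov)
var_total = sum(sum(row) for row in cov)            # variance of the sum score
off_sum = sum(cov[i][j] for i in range(k) for j in range(k) if i != j)
off_sq = sum(cov[i][j] ** 2 for i in range(k) for j in range(k) if i != j)

alpha = (k / (k - 1)) * off_sum / var_total
lambda2 = (off_sum + (k / (k - 1) * off_sq) ** 0.5) / var_total
```

    The latent class approach of the paper replaces these covariance-based formulas with a fitted unrestricted latent class model, but the coefficients above are the baselines it is compared against.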

  1. IRT-Estimated Reliability for Tests Containing Mixed Item Formats

    Science.gov (United States)

    Shu, Lianghua; Schwarz, Richard D.

    2014-01-01

    As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…

  2. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    Science.gov (United States)

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
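
    Two of the estimators compared in that study can be reproduced in a few lines; with tie-free scores, the Spearman coefficient is simply the Pearson correlation of the ranks. The rater scores below are invented:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ranks(x):
    # Rank 1 = smallest; assumes no tied scores for simplicity.
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

rater1 = [3, 1, 4, 2, 5]   # invented holistic scores from two raters
rater2 = [2, 1, 4, 3, 5]

r_pearson = pearson(rater1, rater2)
r_spearman = pearson(ranks(rater1), ranks(rater2))
```

    A generalizability coefficient would additionally partition rater and task variance; the point of the Monte Carlo comparison is how these estimators diverge as scale length and true reliability vary.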

  3. Reliabilities of genomic estimated breeding values in Danish Jersey

    DEFF Research Database (Denmark)

    Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng

    2012-01-01

    In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods...... of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for Single-nucleotide polymorphism (SNP) markers using the Illumina 54 K chip. A Bayesian method was used...

  4. PROVIDING RELIABILITY OF HUMAN RESOURCES IN PRODUCTION MANAGEMENT PROCESS

    Directory of Open Access Journals (Sweden)

    Anna MAZUR

    2014-07-01

    People are the most valuable asset of an organization, and a company's results depend mostly on them. The human factor can also be a weak link in the company and a cause of high risk for many of its processes. The reliability of the human factor in the manufacturing process depends on many factors. The authors include aspects of human error, safety culture, knowledge, communication skills, teamwork and the role of leadership in the developed model of human resource reliability in the management of the production process. Based on a case study and the results of research and observation, the authors present risk areas identified in a specific manufacturing process and the results of an evaluation of the reliability of human resources in that process.

  5. Evaluating the reliability of point estimates of wetland reference evaporation

    Directory of Open Access Journals (Sweden)

    H. Gavin

    2003-01-01

    The Penman-Monteith formulation of evaporation has been criticised for its reliance upon point estimates, so that areal estimates of wetland evaporation based upon single weather stations may be misleading. Typically, wetlands comprise a complex mosaic of land cover types, from each of which evaporative rates may differ. The need to account for wetland patches when monitoring hydrological fluxes has been noted. This paper presents work carried out over a wet grassland in Southern England. The significance of fetch on actual evaporation was examined using the approach adopted by Gash (1986), based upon surface roughness, to estimate the fraction of evaporation sensed from a specified distance upwind of the monitoring station. This theoretical analysis (assuming near-neutral conditions) reveals that the fraction of evaporation contributed by the surrounding area increases steadily to a value of 77% at a distance of 224 m and thereafter declines rapidly. Thus, point climate observations may not reflect surface conditions at greater distances. This result was tested through the deployment of four weather stations on the wetland. The resultant data suggested that homogeneous conditions prevailed, so that the central weather station provided reliable areal estimates of reference evaporation during the observation period March-April 1999. This may be a result of not accounting for the high wind speeds and roughness found in wetlands, which lead to widespread atmospheric mixing. It should be noted that this analysis was based upon data collected during a period when wind direction was constant (westerly) and the land surface was moist. There could be more variation at other times of the year, which would lead to greater heterogeneity in actual evaporation. Keywords: evaporation, Penman-Monteith, automatic weather station, fetch, wetland

  6. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    Science.gov (United States)

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  7. Fire Extinguisher Control System Provides Reliable Cold Weather Operation

    Science.gov (United States)

    Branum, J. C.

    1967-01-01

    Fast acting, pneumatically and centrally controlled, fire extinguisher /firex/ system is effective in freezing climates. The easy-to-operate system provides a fail-dry function which is activated by an electrical power failure.

  8. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general unfeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the 57Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s-1 and 0.04-0.05 mm s-1, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s-1. Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r2, or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical Mössbauer spectroscopy.
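
    The bootstrap procedure discussed above can be sketched for a generic linear property model: resample the reference pairs with replacement, refit, and read the spread of refitted predictions at a new point. The data and sizes below are invented, not the Mössbauer reference set:

```python
import random

random.seed(1)

# Invented reference data for a linear property model y ~ w0 + w1 * x.
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
noise = [0.2, -0.1, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, -0.1, 0.1]
ys = [2.0 * x + 1.0 + e for x, e in zip(xs, noise)]

def ols(px, py):
    """Ordinary least squares fit; returns (intercept, slope)."""
    n = len(px)
    mx, my = sum(px) / n, sum(py) / n
    w1 = sum((a - mx) * (b - my) for a, b in zip(px, py)) / \
         sum((a - mx) ** 2 for a in px)
    return my - w1 * mx, w1

x_new = 5.0
preds = []
for _ in range(500):                      # bootstrap resamples
    idx = [random.randrange(len(xs)) for _ in xs]
    w0, w1 = ols([xs[i] for i in idx], [ys[i] for i in idx])
    preds.append(w0 + w1 * x_new)

mean_pred = sum(preds) / len(preds)
sd_pred = (sum((p - mean_pred) ** 2 for p in preds) /
           (len(preds) - 1)) ** 0.5
# sd_pred is the bootstrap estimate of the prediction uncertainty at x_new.
```

    Repeating this for each candidate model (here, each density functional) yields the locally resolved uncertainties and the sensitivity to reference-set composition that the paper emphasizes.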

  9. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    Directory of Open Access Journals (Sweden)

    Michael A. Guthrie

    2013-01-01

    A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
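
    For a linear limit state in independent normal variables, the Hasofer-Lind index has a closed form and the failure probability follows as Φ(−β). The capacity/demand values below are illustrative stand-ins for the paper's strain-energy quantities, not its actual examples:

```python
from statistics import NormalDist

# Linear limit state g = R - S with independent normal R (capacity) and
# S (demand); the numbers are illustrative, not from the paper.
mu_r, sigma_r = 3.0, 0.3     # e.g. modal strain energy at failure
mu_s, sigma_s = 2.0, 0.4     # e.g. peak modal strain energy in the shock

beta = (mu_r - mu_s) / (sigma_r ** 2 + sigma_s ** 2) ** 0.5  # Hasofer-Lind index
pf = NormalDist().cdf(-beta)                                 # failure probability
```

    For nonlinear limit states the index is instead found by minimizing the distance to the failure surface in standard normal space, but the linear case above shows why β agrees with Monte Carlo results when the limit state is nearly linear.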

  10. How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?

    Science.gov (United States)

    Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A

    2017-03-01

    To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), which provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights were sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend that at least five weekday nights of sleep diary entries be collected when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks.
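
    The "how many nights" question maps onto the Spearman-Brown prophecy formula: if a single night has reliability r1, the mean of k nights has reliability k·r1 / (1 + (k − 1)·r1), which can be inverted for the number of nights needed to reach a target. The single-night reliability below is invented for illustration:

```python
def spearman_brown(r1, k):
    """Reliability of the mean of k parallel measurements."""
    return k * r1 / (1.0 + (k - 1.0) * r1)

def nights_needed(r1, target):
    """Invert Spearman-Brown for the required number of measurements."""
    return target * (1.0 - r1) / (r1 * (1.0 - target))

r1 = 0.4                          # hypothetical single-night reliability
k = nights_needed(r1, 0.8)        # nights required to reach reliability 0.8
check = spearman_brown(r1, k)     # round-trip: should recover the target
```

    Under this hypothetical r1, six nights are needed for a reliability of 0.8, which is of the same order as the five weekday nights the study recommends.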

  11. Availability and Reliability of FSO Links Estimated from Visibility

    Directory of Open Access Journals (Sweden)

    M. Tatarko

    2012-06-01

    This paper is focused on estimating the availability and reliability of FSO systems. The abbreviation FSO stands for Free Space Optics, a system which allows optical transmission between two fixed points; it can be described as a last-mile communication system. It is an optical communication system, but the propagation medium is air. This last-mile solution does not require expensive optical fiber, and establishing a connection is very simple. However, there are some drawbacks which adversely affect the quality of service and the availability of the link. A number of phenomena in the atmosphere, such as scattering, absorption and turbulence, cause large variations in received optical power and attenuation of the laser beam. The influence of absorption and turbulence can be significantly reduced by an appropriate design of the FSO link, but visibility has the main influence on the quality of the optical transmission channel. Thus, in a typical continental area where rain, snow or fog occurs, it is important to know their values. This article describes a device for measuring weather conditions and provides information about the estimation of availability and reliability of FSO links in Slovakia.

  12. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects

    NARCIS (Netherlands)

    Vandenplas, J.; Colinet, F.G.; Glorieux, G.; Bertozzi, C.; Gengler, N.

    2015-01-01

    Based on a Bayesian view of linear mixed models, several studies showed the possibility of integrating estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into this genetic evaluation. Hereafter, the term…

  13. Safeprops: A Software for Fast and Reliable Estimation of Safety and Environmental Properties for Organic Compounds

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens

    We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python...

  14. Engineer’s estimate reliability and statistical characteristics of bids

    Directory of Open Access Journals (Sweden)

    Fariborz M. Tehrani

    2016-12-01

    Full Text Available The objective of this report is to provide a methodology for examining bids and evaluating the performance of engineer's estimates in capturing the true cost of projects. This study reviews the cost development for transportation projects in addition to two sources of uncertainty in a cost estimate: modeling errors and inherent variability. Sample projects are highway maintenance projects with a similar scope of work, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of the engineer's estimate is best evaluated against the trimmed average of bids, excluding discordant bids.
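The recommendation above, benchmarking the engineer's estimate against a trimmed average of bids that excludes discordant values, can be sketched as follows. The trimming fraction and the bid values are illustrative, not taken from the study.

```python
def trimmed_mean(values, trim_fraction=0.2):
    """Mean after dropping the lowest and highest trim_fraction of values."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

# Hypothetical bids (in $1000s) with one discordant low bid and one high outlier.
bids = [310, 890, 905, 912, 940, 1450]
print(trimmed_mean(bids))          # robust benchmark for the engineer's estimate
print(sum(bids) / len(bids))       # plain mean, pulled around by the outliers
```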

  15. Estimating the Reliability of Electronic Parts in High Radiation Fields

    Science.gov (United States)

    Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd

    2008-01-01

    Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design and reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.

  16. Estimated Value of Service Reliability for Electric Utility Customers in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, M.J.; Mercurio, Matthew; Schellenberg, Josh

    2009-06-01

    Information on the value of reliable electricity service can be used to assess the economic efficiency of investments in generation, transmission and distribution systems, to strategically target investments to customer segments that receive the most benefit from system improvements, and to numerically quantify the risk associated with different operating, planning and investment strategies. This paper summarizes research designed to provide estimates of the value of service reliability for electricity customers in the US. These estimates were obtained by analyzing the results from 28 customer value of service reliability studies conducted by 10 major US electric utilities over the 16-year period from 1989 to 2005. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-database describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the US for industrial, commercial, and residential customers. Estimated interruption costs for different types of customers and for interruptions of different durations are provided. Finally, additional research and development designed to expand the usefulness of this powerful database and analysis are suggested.

  17. Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.

    Directory of Open Access Journals (Sweden)

    Arjan C ter Horst

    Full Text Available When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.

  18. Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.

    Science.gov (United States)

    ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter

    2015-01-01

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
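The reliability-proportional weighting tested in records 17 and 18 is usually formalized as inverse-variance (maximum-likelihood) cue integration. A minimal sketch, with made-up cue estimates and noise levels rather than the study's data:

```python
def integrate_cues(x_vis, var_vis, x_vest, var_vest):
    """Combine two cues weighted by their reliability (inverse variance)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    w_vest = 1.0 - w_vis
    estimate = w_vis * x_vis + w_vest * x_vest
    # Combined variance is never worse than the more reliable single cue.
    combined_var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return estimate, combined_var

# Low visual coherence -> noisy visual cue -> more weight on the vestibular cue.
est, var = integrate_cues(x_vis=12.0, var_vis=4.0, x_vest=10.0, var_vest=1.0)
print(est, var)
```

Lowering visual coherence (raising `var_vis`) pulls the combined displacement estimate toward the vestibular cue, which is the trial-by-trial reweighting the study reports.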

  19. Reliability of fish size estimates obtained from multibeam imaging sonar

    Science.gov (United States)

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄  =  −8.34, SE  =  2.39) and white perch (x̄  = 14.48, SE  =  3.99) but not striped bass (x̄  =  3.71, SE  =  2.58) or channel catfish (x̄  = 3.97, SE  =  5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
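The error metric used above, percent measurement error = (sonar estimate − total length)/total length × 100, is straightforward to reproduce. The fish measurements below are invented for illustration, not data from the study:

```python
def percent_error(sonar_cm, true_cm):
    """Percent measurement error of a sonar length estimate."""
    return (sonar_cm - true_cm) / true_cm * 100.0

def mean_percent_error(pairs):
    """Mean percent error over (sonar estimate, known total length) pairs."""
    errors = [percent_error(s, t) for s, t in pairs]
    return sum(errors) / len(errors)

# Hypothetical pairs in cm; underestimates mimic a missed snout or tail lobe.
sturgeon = [(92.0, 100.0), (110.0, 120.0)]
print(mean_percent_error(sturgeon))
```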

  20. Estimate of the Reliability in Geological Forecasts for Tunnels: Toward a Structured Approach

    Science.gov (United States)

    Perello, Paolo

    2011-11-01

    In tunnelling, a reliable geological model often makes it possible to provide an effective design and to face the construction phase without unpleasant surprises. A geological model can be considered reliable when it is a valid support to correctly foresee the rock mass behaviour, therefore preventing unexpected events during the excavation. The higher the model reliability, the lower the probability of unforeseen rock mass behaviour. Unfortunately, owing to different reasons, geological models are affected by uncertainties, and a fully reliable knowledge of the rock mass is, in most cases, impossible. Therefore, estimating the degree to which a geological model is reliable becomes a primary requirement in order to save time and money and to adopt the appropriate construction strategy. The definition of geological model reliability is often achieved by engineering geologists through an unstructured analytical process and variable criteria. This paper focuses on geological models for projects of linear underground structures and represents an effort to analyse and include in a conceptual framework the factors influencing such models. An empirical parametric procedure is then developed with the aim of obtaining an index called "geological model rating" (GMR), which can be used to provide a more standardised definition of geological model reliability.

  1. Is biomass a reliable estimate of plant fitness?

    Science.gov (United States)

    Younginger, Brett S; Sirová, Dagmara; Cruzan, Mitchell B; Ballhorn, Daniel J

    2017-02-01

    The measurement of fitness is critical to biological research. Although the determination of fitness for some organisms may be relatively straightforward under controlled conditions, it is often a difficult or nearly impossible task in nature. Plants are no exception. The potential for long-distance pollen dispersal, likelihood of multiple reproductive events per inflorescence, varying degrees of reproductive growth in perennials, and asexual reproduction all confound accurate fitness measurements. For these reasons, biomass is frequently used as a proxy for plant fitness. However, the suitability of indirect fitness measurements such as plant size is rarely evaluated. This review outlines the important associations between plant performance, fecundity, and fitness. We make a case for the reliability of biomass as an estimate of fitness when comparing conspecifics of the same age class. We reviewed 170 studies on plant fitness and discuss the metrics commonly employed for fitness estimations. We find that biomass or growth rate are frequently used and often positively associated with fecundity, which in turn suggests greater overall fitness. Our results support the utility of biomass as an appropriate surrogate for fitness under many circumstances, and suggest that additional fitness measures should be reported along with biomass or growth rate whenever possible.

  2. Estimating uncertainty and reliability of social network data using Bayesian inference.

    Science.gov (United States)

    Farine, Damien R; Strandburg-Peshkin, Ariana

    2015-09-01

    Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
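The core of the Bayesian approach described above can be illustrated for a single network edge. The sketch below is a simplified beta-binomial treatment of one association rate (x joint sightings out of n sampling occasions), not the authors' full network model; prior parameters and counts are illustrative.

```python
def edge_posterior(x_together, n_samples, a=1.0, b=1.0):
    """Beta posterior for an association rate, from a Beta(a, b) prior.

    The posterior is Beta(a + x, b + n - x); its mean shrinks the raw
    ratio x/n toward the prior, and its variance reflects sampling effort.
    """
    post_a = a + x_together
    post_b = b + n_samples - x_together
    mean = post_a / (post_a + post_b)
    variance = (post_a * post_b) / ((post_a + post_b) ** 2 * (post_a + post_b + 1.0))
    return mean, variance

# Same observed association ratio (50%), very different certainty.
print(edge_posterior(1, 2))     # sparse sampling: wide posterior
print(edge_posterior(50, 100))  # well sampled: narrow posterior
```

This captures the paper's main point: with sparse sampling the inferred edge weight carries realistic uncertainty instead of a misleadingly precise point estimate.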

  3. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    Science.gov (United States)

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
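Conventional coefficient alpha, which the paper contrasts with ordinal alternatives, can be computed directly from item scores; ordinal alpha is obtained analogously but from a polychoric rather than Pearson correlation matrix. A minimal sketch of the conventional version, with invented Likert responses:

```python
def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Coefficient alpha; item_scores is a list of items, each a list of
    respondents' scores: alpha = k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k = len(item_scores)
    respondents = list(zip(*item_scores))
    totals = [sum(r) for r in respondents]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1.0)) * (1.0 - item_var_sum / variance(totals))

# Three 5-point Likert items answered by five hypothetical respondents.
items = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 4],
    [1, 3, 3, 5, 5],
]
print(cronbach_alpha(items))
```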

  4. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane

    DEFF Research Database (Denmark)

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B.

    2014-01-01

    Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the pr...

  5. On Robustness of the Normal-Theory Based Asymptotic Distributions of Three Reliability Coefficient Estimates.

    Science.gov (United States)

    Yuan, Ke-Hai; Bentler, Peter M.

    2002-01-01

    Examined the asymptotic distributions of three reliability coefficient estimates: (1) sample coefficient alpha; (2) reliability estimate of a composite score following factor analysis; and (3) maximal reliability of a linear combination of item scores after factor analysis. Findings show that normal theory based asymptotic distributions for these…

  6. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be obtained by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it restricts the reliability analysis to structures with independent inputs. In practice, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with arbitrary inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required in the reliability analysis, which is determined by UT, is small. Several examples illustrate the accuracy and advantages of the proposed method.
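The Unscented Transformation referred to above propagates a small, deterministic set of sigma points through the model instead of random sampling. Below is a minimal two-input sketch of standard sigma-point UT with a toy linear performance function; it is not the paper's fractional-moment formulation, and all numbers are invented.

```python
import math

def unscented_mean_2d(g, mean, cov, kappa=1.0):
    """Estimate E[g(X)] for a 2D Gaussian X using 2n+1 sigma points."""
    n = 2
    scale = n + kappa
    # Cholesky factor of the (possibly correlated) 2x2 covariance matrix.
    l11 = math.sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = math.sqrt(cov[1][1] - l21 ** 2)
    cols = [(l11, l21), (0.0, l22)]
    points = [tuple(mean)]
    weights = [kappa / scale]
    s = math.sqrt(scale)
    for cx, cy in cols:
        points.append((mean[0] + s * cx, mean[1] + s * cy))
        points.append((mean[0] - s * cx, mean[1] - s * cy))
        weights.extend([1.0 / (2.0 * scale)] * 2)
    # Weighted average of g over the sigma points approximates the moment.
    return sum(w * g(p) for w, p in zip(weights, points))

# Toy performance function of two correlated inputs.
g = lambda p: p[0] + 2.0 * p[1]
mu = (1.0, 3.0)
sigma = [[1.0, 0.5], [0.5, 2.0]]
print(unscented_mean_2d(g, mu, sigma))   # exact for linear g: 1 + 2*3 = 7
```

Only five model evaluations are needed here, which is the efficiency argument the abstract makes.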

  7. How Accurately Can Emergency Department Providers Estimate Patient Satisfaction?

    Directory of Open Access Journals (Sweden)

    Lalena M. Yarris

    2012-09-01

    Full Text Available Introduction: Patient satisfaction is an important measure of emergency department (ED) quality of care. Little is known about providers' ability to estimate patient satisfaction. We aimed to measure providers' ability to assess patient satisfaction and hypothesized that providers could accurately estimate overall patient satisfaction. Methods: We surveyed ED patients regarding satisfaction with their care. Treating providers completed analogous surveys, estimating patients' responses. Sexual assault victims and non-English-speaking or severely ill patients were excluded. Satisfaction responses were categorized as "satisfied" or "not satisfied." Patient satisfaction scores were considered the "gold standard," and providers' perceptions of the patient satisfaction were considered tests. Measures of diagnostic accuracy, such as positive predictive value (PPV) and sensitivity, were used to assess how accurately the provider could estimate his or her patient's satisfaction. Results: Here, 242/457 eligible patients (53%) completed the survey; 227 providers (94%) completed a corresponding survey. Subject-reported overall satisfaction was 96.6%, compared with a provider estimated rate of 94.4%. The sensitivity and PPV of the provider's estimate of the patient's satisfaction were 95.2 (95% confidence interval [CI] 91.4, 97.7) and 97.5 (95% CI 94.4, 99.2), respectively, for overall patient satisfaction. The PPV was similar for clarity of communication. The PPV was 78.9 for perceived length of ED stay (99% CI 70.8, 85.6) and 82.6 for quality of pain control (95% CI 68.6, 92.2). Accuracy of attending and resident estimates of patient satisfaction did not differ significantly. The agreement between patient-reported and provider-estimated patient satisfaction was not associated with age, gender, patient disposition, or ED divert status. Conclusion: Providers are able to assess overall patient satisfaction and clarity of
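Treating the provider's guess as a diagnostic "test" of true (patient-reported) satisfaction reduces to 2×2 accuracy arithmetic. A sketch with invented counts, not the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and positive predictive value for a binary 'test'.

    tp: provider and patient both say satisfied; fp: provider says satisfied,
    patient does not; fn: the reverse; tn is unused for these two metrics.
    """
    sensitivity = tp / (tp + fn)   # satisfied patients the provider identified
    ppv = tp / (tp + fp)           # provider said 'satisfied' and was right
    return sensitivity, ppv

# Hypothetical counts from paired provider/patient surveys.
sens, ppv = diagnostic_accuracy(tp=180, fp=6, fn=10, tn=4)
print(round(100 * sens, 1), round(100 * ppv, 1))
```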

  8. Estimation of reliability of an interleaving PFC boost converter

    Directory of Open Access Journals (Sweden)

    Gulam Amer Sandepudi

    2010-01-01

    Full Text Available Reliability plays an important role in power supplies. For other electronic equipment, a certain failure mode, at least for a part of the total system, can often be tolerated without serious (critical) effects. However, for a power supply no such condition can be accepted, since very high demands on its reliability must be met. At higher power levels, the continuous conduction mode (CCM) boost converter is the preferred topology for implementing a front end with PFC. As a result, significant efforts have been made to improve the performance of the boost converter. This paper is one such effort, addressing the performance of the converter from the reliability point of view. In this paper, an interleaving boost power factor correction converter with a single switch is simulated in continuous conduction mode (CCM), discontinuous conduction mode (DCM) and critical conduction mode (CRM) under different output power ratings. Results for the converter are explored from the reliability point of view.

  9. Improving Sample Estimate Reliability and Validity with Linked Ego Networks

    CERN Document Server

    Lu, Xin

    2012-01-01

    Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates using ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we have shown that the precision of estimates of the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows a clear advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...

  10. Reliability in estimating taurodontism of permanent molars from orthopantomograms.

    Science.gov (United States)

    Tulensalo, T; Ranta, R; Kataja, M

    1989-10-01

    Taurodontism is a morphologic dental trait showing continuous expressivity, and criteria of the degree of pulp chamber elongation vary in different investigations. The aim of this investigation was to test a simple method of assessing taurodontism in the developing dentition from orthopantomograms in order to determine its reliability for later use in epidemiologic investigations. The method was also compared with other methods. Forty-three children 10-16.9 yr of age with one or more taurodontic permanent first or second molars were selected for the study. A subgroup of 16 children with two longitudinally obtained radiographs was used in a follow-up study of the same tooth in two different formational stages. The follow-up time averaged 2.2 yr. The distance between the baseline connecting the mesial and distal points of the cementoenamel junction and the highest point of the floor of the pulp chamber was measured (Measure 3). A tooth was classified as taurodontic when Measure 3 reached or exceeded 3.5 mm. This distance remained unchanged during the course of tooth development. Intraexaminer reliability of two examiners in reproducing the same classification was, on average, 96.2%, and the interexaminer reliability was 93.2%. The reliability was greater for the first than for the second molars. Results obtained by this method agreed well with those obtained by other methods. Measure 3 proved to be reliable in assessing taurodontism in the developing dentition from orthopantomograms in epidemiologic investigations.

  11. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    While various numerical modelling and experimental studies are being carried out to better understand and control the selective laser melting process, there is still a lack of research into establishing the reliability of the process. In this paper, a combined modelling-experimental approach is introduced to establish the reliability of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
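The Monte Carlo step described above amounts to sampling the uncertain process inputs and counting how often the predicted output stays within specification. The sketch below replaces the finite-volume thermal model with a hypothetical linear surrogate (melt depth proportional to power/speed); every number in it is invented for illustration.

```python
import random

def melt_depth_um(power_w, speed_mm_s):
    """Hypothetical surrogate for the thermal model: depth grows with energy density."""
    return 40.0 * power_w / speed_mm_s

def process_reliability(n=10_000, seed=42):
    """Fraction of sampled parameter draws whose predicted depth is in spec."""
    rng = random.Random(seed)
    in_spec = 0
    for _ in range(n):
        power = rng.gauss(200.0, 10.0)    # laser power, W (nominal +/- uncertainty)
        speed = rng.gauss(800.0, 40.0)    # scan speed, mm/s
        if 8.5 <= melt_depth_um(power, speed) <= 11.5:
            in_spec += 1
    return in_spec / n

print(process_reliability())
```

In the paper's approach, the surrogate and the input uncertainties would come from the calibrated model and the measured correlation coefficients rather than assumed values.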

  12. Root biomass in cereals, catch crops and weeds can be reliably estimated without considering aboveground biomass

    DEFF Research Database (Denmark)

    Hu, Teng; Sørensen, Peter; Wahlström, Ellen Margrethe

    2018-01-01

    Reliable information on belowground plant biomass is essential to estimate belowground carbon inputs to soils. Estimations of belowground plant biomass are often based on a fixed allometric relationship of plant biomass between aboveground and belowground parts. However, environmental and managem... ...) and cereal species. Shoot and root biomass of catch crops were higher than for weeds (sampled in late autumn), and farming system significantly affected root biomass of catch crops and weeds. The use of fixed root biomass based on the most influential factors (farming system and species) provided the lowest... error of prediction for estimation of root biomass, compared with the use of fixed allometric relations, such as root/shoot ratio. For cereal crops, the average root dry matter in organic farming systems was 218 g m−2 (243 and 193 g m−2 for wheat and barley, respectively), but in conventional systems...

  13. Generating human reliability estimates using expert judgment. Volume 2. Appendices. [PWR; BWR

    Energy Technology Data Exchange (ETDEWEB)

    Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.

    1984-11-01

    The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results.
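Direct numerical estimates of human error probabilities are commonly aggregated in log space, since HEPs span orders of magnitude. The sketch below is a hedged illustration (geometric mean plus rough lognormal 90% bounds), not the report's exact procedure; the expert judgments are invented.

```python
import math

def aggregate_heps(expert_heps):
    """Geometric mean and rough 90% lognormal bounds from expert HEP judgments."""
    logs = [math.log10(p) for p in expert_heps]
    mean_log = sum(logs) / len(logs)
    sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / len(logs))
    center = 10 ** mean_log                       # geometric mean
    lower = 10 ** (mean_log - 1.645 * sd_log)     # ~5th percentile
    upper = 10 ** (mean_log + 1.645 * sd_log)     # ~95th percentile
    return center, lower, upper

# Four hypothetical experts judging the same task.
center, lo, hi = aggregate_heps([3e-3, 1e-2, 5e-3, 2e-3])
print(center, lo, hi)
```

Working in log10 space keeps a single optimistic or pessimistic judgment from dominating the combined estimate, which is one reason expert-judgment HEPs are usually pooled geometrically rather than arithmetically.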

  14. An indirect technique for estimating reliability of analog and mixed-signal systems during operational life

    NARCIS (Netherlands)

    Khan, M.A.; Kerkhoff, Hans G.

    2013-01-01

    Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an

  15. Problems in Estimating Composite Reliability of "Unitised" Assessments

    Science.gov (United States)

    Bramley, Tom; Dhawan, Vikas

    2013-01-01

    This paper discusses the issues involved in calculating indices of composite reliability for "modular" or "unitised" assessments of the kind used in GCSEs, AS and A level examinations in England. The increasingly widespread use of on-screen marking has meant that the item-level data required for calculating indices of…

  16. Standardized Patients Provide a Reliable Assessment of Athletic Training Students' Clinical Skills

    Science.gov (United States)

    Armstrong, Kirk J.; Jarriel, Amanda J.

    2016-01-01

    Context: Providing students reliable objective feedback regarding their clinical performance is of great value for ongoing clinical skill assessment. Since a standardized patient (SP) is trained to consistently portray the case, students can be assessed and receive immediate feedback within the same clinical encounter; however, no research, to our…

  17. The Impact of Process Capability on Service Reliability for Critical Infrastructure Providers

    Science.gov (United States)

    Houston, Clemith J., Jr.

    2013-01-01

    This study investigated the relationship between organizational processes that have been identified as promoting resiliency and their impact on service reliability within the scope of critical infrastructure providers. The importance of critical infrastructure to the nation is evident from the body of research and is supported by instances where…

  18. Bi-Factor Multidimensional Item Response Theory Modeling for Subscores Estimation, Reliability, and Classification

    Science.gov (United States)

    Md Desa, Zairul Nor Deana

    2012-01-01

    In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…

  19. Alternative Estimates of the Reliability of College Grade Point Averages. Professional File. Article 130, Spring 2013

    Science.gov (United States)

    Saupe, Joe L.; Eimers, Mardy T.

    2013-01-01

    The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…

  20. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  1. Reliability estimates for selected sensors in fusion applications

    Energy Technology Data Exchange (ETDEWEB)

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study defining several types of sensors in use and giving the qualitative reliability (failure modes) and quantitative reliability (average failure rates) for these types of process sensors. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety.
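Average failure rates of the kind tabulated in such reports are typically converted to reliability figures with the constant-failure-rate (exponential) model, R(t) = exp(−λt), with series systems summing their component rates. A small sketch with invented rates:

```python
import math

def reliability(failure_rate_per_hour, mission_hours):
    """Constant-failure-rate (exponential) reliability over a mission time."""
    return math.exp(-failure_rate_per_hour * mission_hours)

def series_reliability(rates, mission_hours):
    """A series system fails if any component fails, so the rates add."""
    return reliability(sum(rates), mission_hours)

# Hypothetical rates (per hour) for a pressure, flow and level sensor channel.
rates = [2e-6, 5e-6, 3e-6]
print(series_reliability(rates, mission_hours=8760))   # one year of operation
```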

  2. Generating human reliability estimates using expert judgment. Volume 1. Main report

    Energy Technology Data Exchange (ETDEWEB)

    Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.

    1984-11-01

    The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.

  3. Using dynamic factor analysis to provide insights into data reliability in experience sampling studies.

    Science.gov (United States)

    Fuller-Tyszkiewicz, Matthew; Hartley-Clark, Linda; Cummins, Robert A; Tomyn, Adrian J; Weinberg, Melissa K; Richardson, Ben

    2017-09-01

    The past 2 decades have seen increasing use of experience sampling methods (ESMs) to gain insights into the daily experience of affective states (e.g., its variability, as well as antecedents and consequences of temporary shifts in affect). Much less attention has been given to methodological challenges, such as how to ensure reliability of test scores obtained using ESM. The present study demonstrates the use of dynamic factor analysis (DFA) to quantify reliability of test scores in ESM contexts, evaluates the potential impact of unreliable test scores, and seeks to identify characteristics of individuals that may account for their unreliable test scores. One hundred twenty-seven participants completed baseline measures (demographics and personality traits), followed by a 7-day ESM phase in which positive and negative state affect were measured up to 6 times per day. Analyses showed that although at the sample level, scores on these affect measures exhibited adequate levels of reliability, up to one third of participants failed to meet conventional standards of reliability. While these low reliability estimates were not significantly associated with personality factors, they could, in some cases, be explained by model misspecification where a meaningful alternative structure was available. Despite these potential differences in factor structure across participants, subsequent modeling with and without these "unreliable" cases showed similar substantive results. Hence, the present findings suggest typical analyses based on ESM data may be robust to individual differences in data structure and/or quality. Ways to augment the DFA approach to better understand unreliable cases are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Estimating Value of Congestion and of Reliability from Observation of Route Choice Behavior of Car Drivers

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo; Rasmussen, Thomas Kjær; Nielsen, Otto Anker

    2014-01-01

    In recent years, a consensus has been reached about the relevance of calculating the value of congestion and the value of reliability for better understanding and therefore better prediction of travel behavior. The current study proposed a revealed preference approach that used a large amount … of GPS data from probe vehicles to provide insight into actual behavior in choosing a route. Mixed path size correction logit models were estimated from samples of 5,759 observations in the peak period and 7,964 observations in the off-peak period, while a mean-variance model was specified to consider…

  5. Light traps fail to estimate reliable malaria mosquito biting rates on Bioko Island, Equatorial Guinea

    Directory of Open Access Journals (Sweden)

    Overgaard Hans J

    2012-02-01

    Background: The human biting rate (HBR), an important parameter for assessing malaria transmission and evaluating vector control interventions, is commonly estimated by human landing collections (HLC). Although intense efforts have been made to find alternative non-exposure mosquito collection methods, HLC remains the standard for providing reliable and consistent HBRs. The aim of this study was to assess the relationship between human landing and light trap collections (LTC), in an attempt to estimate operationally feasible conversion factors between the two. The study was conducted as part of the operational research component of the Bioko Island Malaria Control Project (BIMCP), Equatorial Guinea. Methods: Malaria mosquitoes were collected indoors and outdoors by HLCs and LTCs in three villages on Bioko Island, Equatorial Guinea, during five bimonthly collections in 2009. Indoor light traps were suspended adjacent to occupied long-lasting, insecticide-treated bed nets. Outdoor light traps were placed close to the outer wall under the roof of the collection house. Collected specimens were subjected to DNA extraction and diagnostic PCR to identify species within the Anopheles gambiae complex. Data were analysed by simple regression of log-transformed values and by Bayesian regression analysis. Results: There was a poor correlation between the two collection methods. Results varied by location, venue, month, and house, but also by the statistical method used. The more robust Bayesian analyses indicated non-linear relationships and relative sampling efficiencies being density dependent for the indoor collections, implying that straightforward and simple conversion factors could not be calculated for any of the locations. Outdoor LTC:HLC relationships were weak, but could be estimated at 0.10 and 0.07 for each of two locations. Conclusions: Light trap collections in combination with bed nets are not recommended as a reliable method to assess human
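
    Where such an outdoor conversion factor is judged usable, applying it is simple arithmetic. The sketch below is a hypothetical illustration: only the ratios 0.10 and 0.07 come from the abstract, while the function name and example count are invented.

```python
# Hypothetical sketch: converting an outdoor light-trap count (LTC) to an
# HLC-equivalent biting estimate with a fixed sampling-efficiency ratio.
# Only the ratios 0.10 and 0.07 come from the abstract; the function name
# and the example count are invented.

def hlc_from_ltc(ltc_count, ratio):
    """Estimate the HLC-equivalent count from a light-trap count.

    ratio is the LTC:HLC sampling-efficiency ratio, so the HLC-equivalent
    estimate is the trap count divided by that ratio.
    """
    if ratio <= 0:
        raise ValueError("conversion ratio must be positive")
    return ltc_count / ratio

# 5 mosquitoes in an outdoor trap at a site with LTC:HLC = 0.10
print(hlc_from_ltc(5, 0.10))  # -> 50.0
```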

  6. Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Michael [Nexant Inc., Burlington, MA (United States); Schellenberg, Josh [Nexant Inc., Burlington, MA (United States); Blundell, Marshall [Nexant Inc., Burlington, MA (United States)

    2015-01-01

    This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
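
    The two-part structure described above can be sketched as follows. All coefficients and covariates here are invented placeholders, not values from the meta-analysis, and the actual model conditions on many more variables (season, time of day, day of week, region, customer class).

```python
import math

# Illustrative two-part customer damage function. Part 1 is a logit for
# the probability that a customer reports a nonzero interruption cost;
# part 2 is a log-linear model of cost magnitude given a nonzero report.
# The expected cost per event is the product of the two parts. All
# coefficients and covariates are invented placeholders.

def expected_interruption_cost(duration_hr, summer, b_logit, b_cost):
    x = [1.0, duration_hr, 1.0 if summer else 0.0]  # intercept, duration, season
    z = sum(b * xi for b, xi in zip(b_logit, x))
    p_nonzero = 1.0 / (1.0 + math.exp(-z))           # part 1: logit
    log_cost = sum(b * xi for b, xi in zip(b_cost, x))
    cost_if_nonzero = math.exp(log_cost)             # part 2: log-linear
    return p_nonzero * cost_if_nonzero

b_logit = [0.2, 0.5, -0.1]   # hypothetical part-1 coefficients
b_cost = [3.0, 0.3, 0.1]     # hypothetical part-2 coefficients
cost = expected_interruption_cost(4.0, True, b_logit, b_cost)
```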

  7. Reliable estimation of orbit errors in spaceborne SAR interferometry

    NARCIS (Netherlands)

    Bähr, H.; Hanssen, R.F.

    2012-01-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of

  8. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    The review of those methods showed that the nature of research for estimating age at death has shifted from gross morphological analysis to histological analysis, and it is moving further towards the use of digital image processing tools to achieve high accuracy. Histological methods based on the analysis of bone …

  9. Interrater reliability of Violence Risk Appraisal Guide scores provided in Canadian criminal proceedings.

    Science.gov (United States)

    Edens, John F; Penson, Brittany N; Ruchensky, Jared R; Cox, Jennifer; Smith, Shannon Toney

    2016-12-01

    Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study specifically examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database, LexisNexis. Over 250 reported cases were located that made mention of the VRAG, with 42 of these cases containing 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation, ICC(A,1), was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of the cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence for adversarial allegiance in our sample. Rater disagreement in the scoring of 1 VRAG item (Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Empirical Study of Travel Time Estimation and Reliability

    OpenAIRE

    Li, Ruimin; Chai, Huajun; Tang, Jin

    2013-01-01

    This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...

  11. Reliable estimation of virtual source position for SAFT imaging.

    Science.gov (United States)

    Chang, Chih-Hsiung; Chang, Young-Fo; Ma, Yushieh; Shung, K K

    2013-02-01

    The synthetic aperture focusing technique (SAFT), employing a scanned focused transducer as a virtual source, is commonly used to image flaws in immersion testing. The position of a virtual source is estimated from rays emitted from the rim of a focused transducer. However, it is often found that the virtual source position cannot be uniquely determined because of severe focal spot aberration at the focal zone. Based on an analysis of the energy radiated from the focused transducer and of how the refracted energy varies with the incident angle of the ultrasound, we propose that paraxial rays emitted from the focused transducer are the best for estimating the position of a virtual source for incorporation into SAFT. The results also show that using this simple virtual-source position estimation in SAFT greatly improves the axial resolution and SNR of the reconstructed image. This new approach minimizes the effect of such factors as refraction at high-velocity-contrast interfaces, distance of the transducer to the couplant-specimen interface, and the focal length of a focused transducer, which may cause focal spot aberration resulting in decreased sensitivity in SAFT imaging.

  12. Reliability estimation for single dichotomous items based on Mokken's IRT model

    NARCIS (Netherlands)

    Meijer, R R; Sijtsma, K; Molenaar, Ivo W

    1995-01-01

    Item reliability is of special interest for Mokken's nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of

  14. Approach to the complex estimation of factors, Influencing on communication networks’ reliability

    OpenAIRE

    Nenov, Alexey

    2010-01-01

    This paper classifies the factors that influence the reliability of communication networks and proposes an approach to their combined estimation based on petal (radar) diagrams.

  15. The essence of reliability estimation during operational life for achieving high system dependability

    NARCIS (Netherlands)

    Khan, M.A.; Kerkhoff, Hans G.

    2013-01-01

    System dependability has become important for critical applications in recent years as technology is moving towards smaller dimensions. Achieving high dependability can be supported by reliability estimations during the operational life. In addition this requires a workflow for regularly monitoring

  16. Validity and reliability of Optojump photoelectric cells for estimating vertical jump height.

    Science.gov (United States)

    Glatthorn, Julia F; Gouge, Sylvain; Nussbaumer, Silvio; Stauffacher, Simone; Impellizzeri, Franco M; Maffiuletti, Nicola A

    2011-02-01

    Vertical jump is one of the most prevalent acts performed in several sport activities. It is therefore important to ensure that the measurements of vertical jump height made as a part of research or athlete support work have adequate validity and reliability. The aim of this study was to evaluate concurrent validity and reliability of the Optojump photocell system (Microgate, Bolzano, Italy) with force plate measurements for estimating vertical jump height. Twenty subjects were asked to perform maximal squat jumps and countermovement jumps, and flight time-derived jump heights obtained by the force plate were compared with those provided by Optojump, to examine its concurrent (criterion-related) validity (study 1). Twenty other subjects completed the same jump series on 2 different occasions (separated by 1 week), and jump heights of session 1 were compared with session 2, to investigate test-retest reliability of the Optojump system (study 2). Intraclass correlation coefficients (ICCs) for validity were very high (0.997-0.998), even if a systematic difference was consistently observed between force plate and Optojump (-1.06 cm; p < 0.001) in jump height. We propose the following equation that allows force plate and Optojump results to be used interchangeably: force plate jump height (cm) = 1.02 × Optojump jump height + 0.29. In conclusion, the use of Optojump photoelectric cells is legitimate for field-based assessments of vertical jump height.
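
    The reported interchangeability equation can be applied directly; a minimal sketch (the function name is ours, the coefficients are from the abstract):

```python
# Direct application of the interchangeability equation reported in the
# abstract: force plate jump height (cm) = 1.02 * Optojump height + 0.29.
# The function name is ours.

def force_plate_height_cm(optojump_height_cm):
    return 1.02 * optojump_height_cm + 0.29

# a 30.0 cm Optojump reading corresponds to roughly 30.89 cm on the force plate
print(round(force_plate_height_cm(30.0), 2))  # -> 30.89
```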

  17. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane.

    Science.gov (United States)

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B; Aanæs, Henrik; Alkjær, Tine; Simonsen, Erik B

    2014-09-01

    Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose of the present study was to develop a new approach based on highly detailed 3D reconstructions in combination with a translational and rotational unconstrained articulated model. The highly detailed 3D reconstructions were synthesized from an eight camera setup using a stereo vision approach. The subject specific articulated model was generated with three rotational and three translational degrees of freedom for each limb segment and without any constraints to the range of motion. This approach was tested on 3D gait analysis and compared to a marker based method. The experiment included ten healthy subjects in whom the hip, knee and ankle joints were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. Improving reliability of state estimation programming and computing suite based on analyzing a fault tree

    Directory of Open Access Journals (Sweden)

    Kolosok Irina

    2017-01-01

    Reliable information on the current state parameters, obtained by processing measurements from the SCADA and WAMS data acquisition systems with state estimation (SE) methods, is a precondition for successful management of an electric power system (EPS). SCADA and WAMS systems themselves, as any technical systems, are subject to failures and faults that lead to distortion and loss of information. The SE procedure enables erroneous measurements to be found and is therefore a barrier that keeps distorted information from penetrating into control problems. At the same time, the programming and computing suite (PCS) implementing the SE functions may itself produce wrong decisions due to imperfect algorithms and software errors. In this study, we propose to use a fault tree to analyse the consequences of failures and faults in SCADA, WAMS, and the SE procedure itself. Based on the analysis of the obtained measurement information and on the SE results, we determine the fault tolerance level of the state estimation PCS, which characterizes its reliability.
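
    As a toy illustration of the fault-tree idea (the gate structure and probabilities below are invented for this sketch, not taken from the paper), the probability of a wrong SE decision can be propagated through independent OR/AND gates:

```python
# Toy fault-tree fragment. The gate structure and probabilities are
# invented for illustration, not taken from the paper: a wrong SE result
# requires distorted input (a SCADA OR WAMS fault) that the SE procedure
# fails to reject, OR a fault in the PCS software itself. Events are
# assumed independent.

def or_gate(*p):   # P(A or B) = 1 - prod(1 - p_i) for independent events
    q = 1.0
    for pi in p:
        q *= 1.0 - pi
    return 1.0 - q

def and_gate(*p):  # P(A and B) = prod(p_i) for independent events
    q = 1.0
    for pi in p:
        q *= pi
    return q

p_bad_input = or_gate(0.02, 0.01)           # SCADA or WAMS fault
p_undetected = and_gate(p_bad_input, 0.10)  # SE fails to reject the bad data
p_wrong_se = or_gate(p_undetected, 0.001)   # or the PCS software fails
print(round(p_wrong_se, 5))  # -> 0.00398
```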

  19. Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation

    DEFF Research Database (Denmark)

    Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok

    2012-01-01

    Wind power is ideally a renewable energy source with no fuel cost, but the uncertainty of its output risks reducing the reliability of the whole system. If the reserve of the system is increased, the reliability of the system may be improved. However, the cost would be increased. Therefore … the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve when wind power is connected to the power system. As a case study, when wind power is connected to the power system of Korea, the effects…

  20. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  1. Reliance on and Reliability of the Engineer’s Estimate in Heavy Civil Projects

    Directory of Open Access Journals (Sweden)

    George Okere

    2017-06-01

    To the contractor, the engineer’s estimate is the target number to aim for, and the basis for evaluating the accuracy of their own estimate. To the owner, the engineer’s estimate is the basis for funding, evaluation of bids, and prediction of project costs. As such, the engineer’s estimate is the benchmark. This research investigated the reliance on, and the reliability of, the engineer’s estimate in heavy civil cost estimating. The objective was to characterize the engineer’s estimate and allow owners and contractors to re-evaluate or affirm their reliance on it. A literature review was conducted to understand the reliance on the engineer’s estimate, and secondary data from the Washington State Department of Transportation were used to investigate its reliability. The findings show the need for practitioners to re-evaluate their reliance on the engineer’s estimate: within various contexts, the empirical data showed that the engineer’s estimate fell outside the expected accuracy range of the low bids or the cost to complete projects. The study recommends that project owners track costs directly while projects are under construction, use a second estimate to improve accuracy, and adopt the cost estimating practices found in highly reputable construction companies.

  2. Pronation and supination after forearm fractures in children: Reliability of visual estimation and conventional goniometry measurement.

    Science.gov (United States)

    Colaris, Joost; van der Linden, Marleen; Selles, Ruud; Coene, Napoleon; Allema, Jan Hein; Verhaar, Jan

    2010-06-01

    Forearm fractures are common amongst children and often result in limited rotational function. In daily practice, pronation and supination of the arm are often visually estimated or measured using a conventional goniometer. The aim of this study was to compare the reliability of these two methods in paediatric patients who had previously sustained a forearm fracture. Intra- and interrater reliability of visual estimation and conventional goniometry were determined in 47 children who had previously sustained a forearm fracture. Intra- and interrater reliability of visual estimation and conventional goniometry was fair to excellent, with intraclass correlation coefficients (ICCs) ranging between 0.75 and 0.94. In addition, the overall goniometer data consistently showed lower smallest detectable differences (SDDs) compared to the visual estimation data, also indicating better reliability for the goniometer method. A conventional goniometer is an easy, fast and reliable method to determine the pronation and supination in a child who had sustained a forearm fracture. If an uncooperative child hinders the measurement, visual estimation is a good second choice. Measurements are more reliable when repeated by the same professional. Copyright 2010 Elsevier Ltd. All rights reserved.
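
    The abstract compares the two methods by ICC and by smallest detectable difference (SDD). One standard way to derive an SDD from an ICC, shown here for illustration only (the paper does not state its exact formula), is SEM = SD·sqrt(1 − ICC) and SDD = 1.96·sqrt(2)·SEM:

```python
import math

# One standard way to turn a reliability coefficient into a smallest
# detectable difference (SDD); the abstract does not state its exact
# formula, so this is shown for illustration only:
#   SEM = SD * sqrt(1 - ICC)       (standard error of measurement)
#   SDD = 1.96 * sqrt(2) * SEM     (95% level, two repeated measures)

def sdd(sd, icc):
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# e.g. a 10-degree between-subject SD at the abstract's best ICC of 0.94
print(round(sdd(10.0, 0.94), 1))  # -> 6.8
```

Note how the SDD shrinks as the ICC rises, which is why the goniometer's consistently lower SDDs and higher ICCs point in the same direction.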

  4. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    Science.gov (United States)

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via an MST model) and non-adaptive designs were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  5. Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis

    Science.gov (United States)

    Zhou, Z.-G.; Tang, P.; Zhou, M.

    2016-06-01

    Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover — caused by events such as forest fire, flood, deforestation, and plant diseases — occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks For Additive Season and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
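
    Step (3) can be sketched as follows, assuming a normally distributed forecast error; the function name and example values are illustrative, not from the paper:

```python
import math

# Minimal sketch of step (3): score how confidently a new observation can
# be called a disturbance, assuming a normally distributed forecast error.
# The reliability is the two-sided confidence level at which the
# observation falls outside the forecast confidence interval. Function
# name and example values are illustrative, not from the paper.

def disturbance_reliability(observed, forecast, stderr):
    z = abs(observed - forecast) / stderr
    # probability that a regular observation lies closer to the forecast
    return math.erf(z / math.sqrt(2.0))

# NDVI-like example: forecast 0.62 with stderr 0.05, observed 0.40 after a flood
print(round(disturbance_reliability(0.40, 0.62, 0.05), 3))
```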

  6. Algorithm for post-clustering curation of DNA amplicon data yields reliable biodiversity estimates

    DEFF Research Database (Denmark)

    Froslev, Tobias Guldberg; Kjoller, Rasmus; Bruun, Hans Henrik

    2017-01-01

    DNA metabarcoding is promising for cost-effective biodiversity monitoring, but reliable diversity estimates are difficult to achieve and validate. Here we present and validate a method, called LULU, for removing erroneous molecular operational taxonomic units (OTUs) from community data derived … soil from 130 sites in Denmark spanning major environmental gradients. OTU tables are produced with several different OTU definition algorithms and subsequently curated with LULU, and validated against field survey data. LULU curation consistently improves α-diversity estimates and other biodiversity … metrics, and does not require a sequence reference database; thus, it represents a promising method for reliable biodiversity estimation…

  7. A Data-Driven Reliability Estimation Approach for Phased-Mission Systems

    Directory of Open Access Journals (Sweden)

    Hua-Feng He

    2014-01-01

    We attempt to address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach that uses the condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from existing methods, which consider only the static scenario without using real-time information and aim to estimate the reliability of a population rather than of an individual. In the presented approach, to establish a linkage between the historical data and the real-time information of an individual PMS, we adopt a stochastic filtering model to model the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modelled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.
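
    The final combination step (mission reliability as the probability that the lifetime exceeds the mission time) can be sketched by Monte Carlo. The distributions below are stand-in choices, not the paper's stochastic filtering or Brownian motion models, and all parameters are illustrative:

```python
import random

# Monte Carlo sketch of the final combination step: mission reliability
# as the probability that the degradation-based lifetime exceeds the
# mission time. The two distributions below are stand-in choices (the
# paper derives them from a stochastic filtering model and an adaptive
# Brownian motion); all parameters are illustrative.

def mission_reliability(lifetime_samples, mission_time_samples):
    pairs = list(zip(lifetime_samples, mission_time_samples))
    return sum(1 for life, mission in pairs if life > mission) / len(pairs)

random.seed(0)
lifetimes = [random.lognormvariate(3.0, 0.3) for _ in range(10000)]  # ~20 h median
missions = [random.lognormvariate(2.0, 0.2) for _ in range(10000)]   # ~7.4 h median
rel = mission_reliability(lifetimes, missions)
print(round(rel, 3))
```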

  8. Parameter estimation of ion current formulations requires hybrid optimization approach to be both accurate and reliable

    Directory of Open Access Journals (Sweden)

    Axel eLoewe

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way towards tailored therapies in recent years. To fully leverage in-silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing to assess
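
    A toy version of the per-iteration coupling: a particle-swarm step explores globally, then a plain finite-difference gradient step (standing in for the trust-region reflective solver) polishes the swarm's best point. A quadratic toy objective replaces ion-current fitting; every name and parameter here is illustrative, not from the paper.

```python
import random

# Toy sketch of the per-iteration hybrid: a particle-swarm step explores
# globally, then a finite-difference gradient step (standing in for the
# trust-region reflective solver) polishes the swarm's best point. The
# quadratic objective replaces ion-current fitting; every name and
# parameter here is illustrative.

def fd_gradient(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def hybrid_minimize(f, lo, hi, n_particles=20, iters=60, seed=1):
    rng = random.Random(seed)
    dim = len(lo)
    xs = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = list(min(xs, key=f))
    for _ in range(iters):
        for i in range(n_particles):            # global PSO step
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo[d]), hi[d])
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = list(min(pbest + [gbest], key=f))
        grad = fd_gradient(f, gbest)            # local gradient polish
        candidate = [gbest[d] - 0.1 * grad[d] for d in range(dim)]
        if f(candidate) < f(gbest):
            gbest = candidate
    return gbest

truth = [0.8, 2.5]  # "true" parameters of the toy objective
obj = lambda p: (p[0] - truth[0]) ** 2 + (p[1] - truth[1]) ** 2
est = hybrid_minimize(obj, lo=[0.0, 0.0], hi=[5.0, 5.0])
```

The gradient polish guarantees steady local convergence once the swarm has found the right basin, which is the division of labour the abstract describes.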

  9. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    Science.gov (United States)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstrating these parameters poses a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs subject to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with that under normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show good agreement between the degradation model and the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology enables the reliability of SPTRs designed for more than 10 years of operation to be estimated in less than one year.
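The Arrhenius extrapolation step used here can be illustrated as follows (a minimal sketch; the activation energy and temperatures are hypothetical, not the paper's values):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between an elevated stress temperature
    and the normal use temperature (temperatures in deg C)."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical: Ea = 0.5 eV, normal use at 20 C, ADT at 60 C.
af = acceleration_factor(0.5, 20.0, 60.0)
# 0.5 years of degradation observed under stress maps to af * 0.5 years at use,
# which is how a sub-one-year test can speak to a >10-year design life.
life_at_use = 0.5 * af
```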

  10. An Allocation Scheme for Estimating the Reliability of a Parallel-Series System

    Directory of Open Access Journals (Sweden)

    Zohra Benkamra

    2012-01-01

    We give a hybrid two-stage design which can be useful to estimate the reliability of a parallel-series system and/or, by duality, a series-parallel system. When the components' reliabilities are unknown, one can estimate them by sample means of Bernoulli observations. Let T be the total number of observations allowed for the system. When T is fixed, we show that the variance of the system reliability estimate can be lowered by allocating the sample size T at the component level. This leads to a discrete optimization problem which can be solved sequentially, assuming T is large enough. First-order asymptotic optimality is proved systematically and validated by Monte Carlo simulation.
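A small Monte Carlo sketch of the idea (the component reliabilities, total budget, and the two allocations are arbitrary illustrations, not the paper's optimal design):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parallel-series system: block 1 has two parallel units, block 2 one.
p = [[0.7, 0.8], [0.9]]

def system_reliability(probs):
    # Series product over blocks; each block fails only if all its units fail.
    r = 1.0
    for block in probs:
        r *= 1.0 - np.prod([1.0 - pi for pi in block])
    return r

def plug_in_estimate(alloc):
    """Estimate R by replacing each component reliability with a Bernoulli
    sample mean based on alloc[j][i] observations of component (j, i)."""
    est = [[rng.binomial(n, pi) / n for pi, n in zip(block, ns)]
           for block, ns in zip(p, alloc)]
    return system_reliability(est)

def mc_variance(alloc, reps=3000):
    return np.var([plug_in_estimate(alloc) for _ in range(reps)])

# Same total budget T = 120, allocated two different ways.
v_equal = mc_variance([[40, 40], [40]])
v_skewed = mc_variance([[20, 20], [80]])  # more trials on the critical series unit
```

For this system the single series component dominates the variance of the plug-in estimate, so shifting budget toward it lowers the variance at fixed T.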

  11. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    Science.gov (United States)

    Basalamah, Anas; Sato, Takuro

    For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but it cannot assure full reliability unless coupled with Automatic Repeat reQuest (ARQ), forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE 802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.

  12. How to reliably estimate the tortuosity of an animal's path: straightness, sinuosity, or fractal dimension?

    Science.gov (United States)

    Benhamou, Simon

    2004-07-21

    The tortuosity of an animal's path is a key parameter in orientation and searching behaviours. The tortuosity of an oriented path is inversely related to the efficiency of the orientation mechanism involved, the best mechanism being assumed to allow the animal to reach its goal along a straight line movement. The tortuosity of a random search path controls the local searching intensity, allowing the animal to adjust its search effort to the local profitability of the environment. This paper shows that (1) the efficiency of an oriented path can be reliably estimated by a straightness index computed as the ratio between the distance from the starting point to the goal and the path length travelled to reach the goal, but such a simple index, ranging between 0 and 1, cannot be applied to random search paths; (2) the tortuosity of a random search path, ranging between straight line movement and Brownian motion, can be reliably estimated by a sinuosity index which combines the mean cosine of changes of direction with the mean step length; and (3) in the current state of the art, the fractal analysis of animals' paths, which may appear as an alternative and promising way to measure the tortuosity of a random search path as a fractal dimension ranging between 1 (straight line movement) and 2 (Brownian motion), is only liable to generate artifactual results. This paper also provides some help for distinguishing between oriented and random search paths, and depicts a general, comprehensive framework for analysing individual animals' paths in a two-dimensional space.
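The straightness index in (1), and the turning-angle statistic that enters the sinuosity index in (2), can be computed directly from a sampled path (a minimal sketch; Benhamou's exact sinuosity formula combines the mean cosine below with the mean step length and is not reproduced here):

```python
import numpy as np

def straightness(path):
    """Straightness index for an oriented path: beeline distance from start
    to goal divided by the travelled path length (1 = perfectly straight)."""
    path = np.asarray(path, dtype=float)
    steps = np.diff(path, axis=0)
    travelled = np.sum(np.linalg.norm(steps, axis=1))
    beeline = np.linalg.norm(path[-1] - path[0])
    return beeline / travelled

def mean_turn_cosine(path):
    """Mean cosine of the changes of direction between successive steps,
    one ingredient of the sinuosity index for random search paths."""
    steps = np.diff(np.asarray(path, dtype=float), axis=0)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    return float(np.mean(np.cos(np.diff(headings))))
```

A straight path gives straightness 1 and mean turn cosine 1; a right-angle zigzag gives a lower straightness and a mean turn cosine near 0.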

  13. Citizen science provides a reliable and scalable tool to track disease-carrying mosquitoes.

    Science.gov (United States)

    Palmer, John R B; Oltra, Aitana; Collantes, Francisco; Delgado, Juan Antonio; Lucientes, Javier; Delacour, Sarah; Bengoa, Mikel; Eritja, Roger; Bartumeus, Frederic

    2017-10-24

    Recent outbreaks of Zika, chikungunya and dengue highlight the importance of better understanding the spread of disease-carrying mosquitoes across multiple spatio-temporal scales. Traditional surveillance tools are limited by jurisdictional boundaries and cost constraints. Here we show how a scalable citizen science system can solve this problem by combining citizen scientists' observations with expert validation and correcting for sampling effort. Our system provides accurate early warning information about the Asian tiger mosquito (Aedes albopictus) invasion in Spain, well beyond that available from traditional methods, and vital for public health services. It also provides estimates of tiger mosquito risk comparable to those from traditional methods but more directly related to the human-mosquito encounters that are relevant for epidemiological modelling and scalable enough to cover the entire country. These results illustrate how powerful public participation in science can be and suggest citizen science is positioned to revolutionize mosquito-borne disease surveillance worldwide.

  14. Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data

    Science.gov (United States)

    2014-01-01

    The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is the fuzzy confidence interval whose obtained coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
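The α-cut of a triangular fuzzy number, the building block used here for the fuzzy failure and repair rates, reduces to a closed-form interval (a generic sketch with hypothetical rate values):

```python
def alpha_cut(a, b, c, alpha):
    """alpha-cut interval of the triangular fuzzy number (a, b, c):
    membership rises linearly from a to the peak b, then falls to c.
    At alpha = 0 the cut is the full support [a, c]; at alpha = 1 it is {b}."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# Hypothetical fuzzy failure rate (0.01, 0.02, 0.03) per hour at alpha = 0.5.
interval = alpha_cut(0.01, 0.02, 0.03, 0.5)
```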

  15. Reliability of Semiautomated Computational Methods for Estimating Tibiofemoral Contact Stress in the Multicenter Osteoarthritis Study

    Directory of Open Access Journals (Sweden)

    Donald D. Anderson

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semiautomated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good interrater reliability (ICC 0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.

  16. Intra- and inter-rater reliability of endonasal activity estimation in granulomatosis with polyangiitis (Wegener's).

    Science.gov (United States)

    Garske, Ulrike; Haack, Andrea; Beltrán, Olga; Flores-Suárez, Louis F; Bremer, Jan P; Lamprecht, Peter; Hedderich, Jürgen; Quetz, Joachim; Gross, Wolfgang L; Ambrosch, Petra; Laudien, Martin

    2012-01-01

    Granulomatosis with polyangiitis (GPA) frequently starts with involvement of the nasal and paranasal mucosa. Localised GPA of the nasal mucosa or persistent disease activity ('grumbling disease') is often encountered even under immunosuppressive therapy. The need for reconstructive surgery is common, and careful scheduling to prevent failure and minimise revision rates is crucial. Therefore, reliable estimation of GPA activity in the upper airways using a score is mandatory for diagnosis, follow-up and scheduling of reconstructive surgery. Fifty endoscopic, endonasal images of 45 patients with GPA were used. Twelve (4 German, 8 Mexican) experienced (n=7) and inexperienced (n=5) physicians assessed GPA activity at two time points (T1/T2), both dichotomously and in a grading approach (none, mild, moderate and high activity), using the novel ENT Activity Score (ENTAS). All documents were written in English. Estimation of activity in dichotomy (none vs. mild/moderate/high): Cohen's kappa (κ) for intra-rater reliability T1/T2 was κ=0.58 (agreement 85%) in inexperienced and κ=0.72 (agreement 91%) in experienced physicians. The inter-rater reliability (Fleiss's κ) at T1/T2 for inexperienced and experienced physicians was κ=0.62/κ=0.59 and κ=0.50/κ=0.58, respectively. Estimation of activity in the grading approach (none, mild, moderate, high): for inexperienced physicians the intra-rater reliability T1/T2 was κ=0.67 (agreement 56%) and the inter-rater reliability at T1/T2 was κ=0.29 (intraclass correlation coefficient, ICC=0.69) and κ=0.27 (ICC=0.59). For experienced physicians the intra-rater reliability T1/T2 was κ=0.80 (agreement 67%) and the inter-rater reliability at T1 and T2 was κ=0.41 (ICC=0.77) and κ=0.39 (ICC=0.75), respectively. Intra-rater reliability is high for dichotomous decisions and even for activity grading. There is no difference between experienced and inexperienced physicians. Inter-rater reliability is high in dichotomy, but low for activity grading. Thus, the ENTAS

  17. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  18. Intrarater and interrater reliability of first metatarsophalangeal joint dorsiflexion: goniometry versus visual estimation.

    Science.gov (United States)

    Jones, Angela M; Curran, Sarah A

    2012-01-01

    Visual estimation (VE) and goniometric measurement (GM) are commonly used to assess first metatarsophalangeal joint dorsiflexion. The purposes of this study were to determine the intrarater and interrater reliability of VE and GM and to establish whether reliability was influenced by the experience of the examiner. Ten experienced and ten inexperienced examiners evaluated three real-size photographs of a first metatarsophalangeal joint positioned in various degrees of dorsiflexion on two separate occasions. Experienced examiners demonstrated excellent intrarater and interrater reliability for GM (intraclass correlation coefficient [ICC], >0.953; standard error of measurement [SEM], 1.8°-2.5°), whereas inexperienced examiners showed fair-to-good intrarater and interrater reliability (ICC, 0.322-0.597; SEM, 2.0°-3.0°). For VE, inexperienced examiners demonstrated fair-to-good interrater and excellent intrarater reliability (ICC, 0.666-0.808), higher than that of experienced examiners (ICC, 0.167-0.672), and their SEM (2.8°-4.4°) varied less than that of experienced examiners (SEM, 3.8°-6.4°); neither group's SEMs were clinically acceptable. Although minimal differences between the intrarater and interrater reliability of GM and VE are noted, this study suggests that GM is more reliable than VE when used by experienced examiners. These findings support the continued use of GM for first metatarsophalangeal joint dorsiflexion assessment.
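The SEM values quoted above relate to the ICC through the usual conversion SEM = SD·sqrt(1 − ICC) (a sketch with illustrative numbers, not the study's raw data):

```python
import math

def standard_error_of_measurement(sd, icc):
    """Absolute-error index derived from a reliability coefficient:
    SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

# Illustrative: a between-subject SD of 8 degrees with ICC = 0.95
# yields an SEM of about 1.8 degrees, in the range reported for experienced GM.
sem = standard_error_of_measurement(8.0, 0.95)
```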

  19. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  20. The Use of Generalizability Theory to Estimate Data Reliability in Single-Subject Observational Research

    Science.gov (United States)

    Lei, Pui-Wa; Smith, Maria; Suen, Hoi K.

    2007-01-01

    Direct observation of behaviors is a data collection method customarily used in clinical and educational settings. Repeated measures and small samples are inherent characteristics of observational studies that pose challenges to the numerical estimation of reliability for observational data. In this article, we review some debates about the use of…

  1. Estimating reliability coefficients with heterogeneous item weightings using Stata: A factor based approach

    NARCIS (Netherlands)

    Boermans, M.A.; Kattenberg, M.A.C.

    2011-01-01

    We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates the extent to which items measure the same underlying content when the items are combined into a scale or used for a latent variable. Stata allows for testing
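Cronbach's alpha itself is a one-liner over an item-score matrix (a generic sketch in Python rather than Stata, using the standard raw-alpha formula):

```python
import numpy as np

def cronbach_alpha(items):
    """Raw Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Identical items yield alpha = 1; uncorrelated items drive alpha toward 0.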

  2. Reliability of nationwide prevalence estimates of dementia : A critical appraisal based on Brazilian surveys

    NARCIS (Netherlands)

    Chaimowicz, F. (Flávio); A. Burdorf (Alex)

    2015-01-01

    textabstractBackground: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia

  3. Using Final Kepler Catalog Completeness and Reliability Products in Exoplanet Occurrence Rate Estimates

    Science.gov (United States)

    Bryson, Steve; Burke, Christopher; Batalha, Natalie Marie; Thompson, Susan E.; Coughlin, Jeffrey; Christiansen, Jessie; Mullally, Fergal; Shabram, Megan; Kepler Team

    2018-01-01

    Burke et al. (2015) presented an exoplanet occurrence rate estimate based on the Q1-Q16 Kepler Planet Candidate catalog. That catalog featured uniform planet candidate vetting and analytic approximations to the detection completeness (the fraction of true planets that would be detected) for each target star. We present an extension of that occurrence rate work using the final DR25 Kepler Planet Candidate catalog products, which use higher-accuracy detection completeness data for each target star and add estimates of vetting completeness (the fraction of detected true planets correctly identified as planet candidates) and vetting reliability (the fraction of planet candidates that are true planets). These completeness and reliability products are based on synthetic manipulations of Kepler data, including transit injection, data scrambling, and inversion. We describe how each component is incorporated into the occurrence rate estimate, and how they impact the estimate both individually and in combination. We discuss the strengths and weaknesses of the completeness and reliability products and how they affect our confidence in the occurrence rate values and their uncertainties. This work is an example of how the community can use the DR25 completeness and reliability products, which are publicly available at the NASA Exoplanet Archive (http://exoplanetarchive.ipac.caltech.edu) and the Mikulski Archive for Space Telescopes (http://archive.stsci.edu/kepler).
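Schematically, the three corrections enter a simple point estimate as follows (an illustrative debiasing identity with made-up numbers; the actual analysis folds these products into a full likelihood over the catalog):

```python
def corrected_occurrence(n_candidates, n_stars, det_comp, vet_comp, vet_rel):
    """Debias a raw planet-candidate count: vetting reliability removes false
    alarms, while detection and vetting completeness scale up missed planets."""
    n_true = n_candidates * vet_rel / (det_comp * vet_comp)
    return n_true / n_stars

# Hypothetical: 100 candidates around 100,000 stars, 50% detection completeness,
# 90% vetting completeness, 95% vetting reliability.
rate = corrected_occurrence(100, 100_000, 0.5, 0.9, 0.95)
```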

  4. Procedures for reliable estimation of viral fitness from time-series data

    NARCIS (Netherlands)

    Bonhoeffer, S.; Barbour, A.D.; Boer, R.J. de

    2002-01-01

    In order to develop a better understanding of the evolutionary dynamics of HIV drug resistance, it is necessary to quantify accurately the in vivo fitness costs of resistance mutations. However, the reliable estimation of such fitness costs is riddled with both theoretical and experimental

  5. Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?

    Science.gov (United States)

    R. L. Czaplewski

    2003-01-01

    Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed to avoid gross errors in estimations of deforestation rates because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...

  6. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    NARCIS (Netherlands)

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

    This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior

  7. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    NARCIS (Netherlands)

    Yang, J.; Poot, D.H.J.; Caan, Matthan WA; Su, Tanja; Majoie, Charles BLM; van Vliet, L.J.; Vos, F.M.

    2016-01-01

    Purpose
    This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior.

    Methods
    Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior

  8. Reliable dual tensor model estimation in single and crossing fibers based on jeffreys prior

    NARCIS (Netherlands)

    J. Yang (Jianfei); D.H.J. Poot; M.W.A. Caan (Matthan); Su, T. (Tanja); C.B. Majoie (Charles); L.J. van Vliet (Lucas); F. Vos (Frans)

    2016-01-01

    textabstractPurpose This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD).

  9. Reliability of environmental and occupational exposure data provided by surrogate respondents in a case-control study of Parkinson's disease.

    Science.gov (United States)

    Wang, F L; Semchuk, K M; Love, E J

    1994-07-01

    This study used data provided by 40 non-demented Parkinson's disease patients and 101 community controls, and by their 110 spouses and 31 adult children, to assess the reliability of surrogate-provided rural environmental and occupational exposure information on the index subjects. The level of overall raw agreement between the index subjects and the spouse or adult child surrogates varied from 50.0 to 100.0% for the case-surrogate group and from 80.6 to 96.0% for the control-surrogate group. We did not detect significant differences in overall raw agreement between the case-surrogate and control-surrogate groups or between the spouse-surrogate and adult child-surrogate groups, for any of the variables studied. Considering all index subjects and their surrogates, the level of overall raw agreement was 80.3% for well water consumption, 82.3% for farm living, 85.8% for agricultural work, 87.1% for use of pesticides, 87.9% for field crop farming and 91.9% for use of fertilizers. However, the kappa estimates were lower, varying from 0.48 (SE = 0.20) for fertilizer use to 0.66 (SE = 0.11) for crop farming. The level of specific agreement was 52.2% for fertilizer use, 64.0% for pesticide use, 71.4% for agricultural work, 73.9% for crop farming, 80.9% for farm living, and 83.6% for well water consumption. The overall findings of this study support the use, if necessary, of spouses and adult children of index subjects as surrogate respondents in case-control studies of rural environmental and occupational exposures and Parkinson's disease and, possibly, other neurologic diseases. Specific agreement seems to be a better index of reliability than overall agreement in studies where exposure is rare.
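The gap the authors note between overall and specific agreement is easy to reproduce for a rare exposure (a sketch with a hypothetical 2×2 table, not the study's data):

```python
def agreement_indices(a, b, c, d):
    """Agreement between index subjects and surrogates from a 2x2 table:
    a = both report exposure, b = index only, c = surrogate only, d = neither."""
    n = a + b + c + d
    overall = (a + d) / n                      # overall raw agreement
    specific = 2 * a / (2 * a + b + c)         # specific (positive) agreement
    p1, p2 = (a + b) / n, (a + c) / n          # marginal 'exposed' rates
    chance = p1 * p2 + (1 - p1) * (1 - p2)     # agreement expected by chance
    kappa = (overall - chance) / (1 - chance)  # Cohen's kappa
    return overall, specific, kappa

# Hypothetical rare exposure: overall agreement looks high, yet agreement
# on the exposure itself (and kappa) is much lower.
overall, specific, kappa = agreement_indices(5, 5, 5, 85)
```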

  10. Gender differences in pension wealth: estimates using provider data.

    Science.gov (United States)

    Johnson, R W; Sambamoorthi, U; Crystal, S

    1999-06-01

    Information from pension providers was examined to investigate gender differences in pension wealth at midlife. For full-time wage and salary workers approaching retirement age who had pension coverage, median pension wealth on the current job was 76% greater for men than women. Differences in wages, years of job tenure, and industry between men and women accounted for most of the gender gap in pension wealth on the current job. Less than one third of the wealth difference could not be explained by gender differences in education, demographics, or job characteristics. The less-advantaged employment situation of working women currently in midlife carries over into worse retirement income prospects. However, the gender gap in pensions is likely to narrow in the future as married women's employment experiences increasingly resemble those of men.

  11. How to combine multiple techniques to provide reliable in-situ time series for climate applications

    Science.gov (United States)

    Valladeau, G.; Soudarin, L.; Gravelle, M.

    2012-12-01

    Tide gauge measurements, as observations dedicated to climate applications, require a rigorous quality control since measurements are highly sensitive to biases or drifts in datasets. One major part of the error related to the assessment of Sea Surface Height at tide gauge locations originates in vertical movements. Indeed, many studies have demonstrated the need for tide gauges to be corrected for land motion when compared with altimeter data. The combination of multiple techniques (altimeter, in-situ and geodetic data) is a way of providing relevant tide gauge time series for end-users and climate applications such as the contribution of ice-sheet mass balance to global sea level. In this way, DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) and GNSS (Global Navigation Satellite System) are considered complementary techniques. They determine crustal motion at cm-level (or better) and mm/yr accuracy for positions and velocities, respectively. As the DORIS network was deployed by a geodetic institution, great care was taken when selecting the geographical location of the tracking stations to co-locate them with other space geodetic techniques (VLBI, SLR and GNSS), but also with tide gauges. Hence, as of May 2012, 22 DORIS stations are within 10 km of a tide gauge, including 8 within 500 m. Ties between the DORIS antennas and the nearby tide gauge are also available where measurement is possible. This study focuses on the example of the Thule tide gauge, for which measurements are compared to the different techniques previously described. First, the comparison to both DORIS and GNSS data provides relevant information about the strong crustal movement in the north of Greenland. Then the use of altimeter data confirms results deduced from geodetic stations and gives a broader view of the behavior of land motion around the Thule tide gauge.
Therefore, the combination of multiple techniques is used to provide reliable tide

  12. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    Science.gov (United States)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
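A minimal Monte Carlo version of the comparison (gamma prior with hypothetical hyperparameters, in the favorable case where the prior mean equals the true intensity):

```python
import numpy as np

rng = np.random.default_rng(2)
lam_true, n = 2.0, 5   # true Poisson intensity, sample size per experiment
a, b = 2.0, 1.0        # gamma(shape a, rate b) prior; prior mean a/b = 2

def bayes_estimate(x):
    # Posterior of lambda is gamma(a + sum(x), b + n); this is its mean.
    return (a + x.sum()) / (b + len(x))

bayes_se, mle_se = [], []
for _ in range(20000):
    x = rng.poisson(lam_true, n)
    bayes_se.append((bayes_estimate(x) - lam_true) ** 2)
    mle_se.append((x.mean() - lam_true) ** 2)   # MLE/MVUE = sample mean

mse_bayes, mse_mle = np.mean(bayes_se), np.mean(mle_se)
```

The sample-mean estimator has MSE lambda/n = 0.4 here, while the shrinkage of the posterior mean toward a well-chosen prior yields an appreciably smaller MSE, matching the abstract's conclusion.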

  13. Estimation and enhancement of real-time software reliability through mutation analysis

    Science.gov (United States)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.

  14. Is visual estimation of passive range of motion in the pediatric lower limb valid and reliable

    Directory of Open Access Journals (Sweden)

    Dagher Fernand

    2009-10-01

    Background: Visual estimation (VE) is an essential tool for evaluation of range of motion. Few papers have discussed its validity in pediatric orthopaedic practice. The purpose of our study was to assess the validity and reliability of VE for passive range of motion (PROM) of children's lower limbs. Methods: Fifty typically developing children (100 lower limbs) were examined. Visual estimations of PROMs of the hip (flexion, adduction, abduction, internal and external rotation), knee (flexion and popliteal angle) and ankle (dorsiflexion and plantarflexion) were made by a pediatric orthopaedic surgeon (POS) and a 5th-year resident in orthopaedics. A last-year medical student performed goniometric measurements. Three weeks later, the same measurements were repeated to assess the reliability of visual estimation for each examiner. Results: Visual estimations of the POS were highly reliable for hip flexion, hip rotations and popliteal angle (ρc ≥ 0.8). Reliability was good for hip abduction, knee flexion, ankle dorsiflexion and plantarflexion (ρc ≥ 0.7) but poor for hip adduction (ρc = 0.5). Reproducibility for all PROMs was verified. The resident's VE showed high reliability (ρc ≥ 0.8) for hip flexion and popliteal angle. Good correlation was found for hip rotations and knee flexion (ρc ≥ 0.7). Poor results were obtained for ankle PROMs (ρc …). Conclusion: Accuracy of VE of passive hip flexion and knee PROMs is high regardless of the examiner's experience. The same accuracy can be found for hip rotations and abduction whenever VE is performed by an experienced examiner. Goniometric evaluation is recommended for passive hip adduction and for ankle PROMs.

  15. RELIABILITY OF FLIGHT DATA PROVIDED BY PORTABLE GPS RECEIVERS DURING AIR ACCIDENT INVESTIGATION

    Directory of Open Access Journals (Sweden)

    V. M. Rukhlinskiy

    2014-01-01

    This article considers the problem of assessing the authenticity and reliability of flight data obtained by processing information from portable GPS receivers during air accident investigations, which are characterized by the lack of standard flight data recording systems. Appropriate recommendations for improving newly developed GPS receivers have been developed and implemented.

  16. Soil sampling protocol reliably estimates preplant NO3− in SDI tomatoes

    Directory of Open Access Journals (Sweden)

    Cristina Lazcano

    2015-10-01

    Subsurface drip irrigation (SDI), because it can precisely deliver nutrients close to plant roots, could lead to carefully determined applications of fertilizer to meet crop needs and less risk of nitrate (NO3−) leaching to groundwater. Appropriate fertilizer applications, however, depend on an accurate assessment of the spatial distribution of the main plant macronutrients (N, P and K) in the soil profile before planting. To develop nutrient sampling guidelines, we determined the spatial distributions of preplant nitrate (NO3−), bicarbonate-extractable phosphorus (Olsen-P) and exchangeable potassium (K) in the top 20 inches (50 centimeters) of subsurface drip irrigated processing tomato fields in three of the main growing regions in the Central Valley of California. Nutrient distribution varied with depth (P and K), distance from the center of the bed (NO3−) and growing region (NO3− and K). No depletion of NO3−, Olsen-P or K in the root feeding areas close to the drip tape was detected. Preplant NO3− ranged considerably, from 45 to 438 pounds N per acre (50 to 491 kilograms/hectare), with the higher levels in fields with consecutive crops of tomatoes. A sampling protocol that growers could use, developed from analysis of the distribution results, provided reliable estimates of preplant NO3− as well as P and K in all surveyed fields.

  17. Preliminary investigation on reliability of genomic estimated breeding values in the Danish and Swedish Holstein Population

    DEFF Research Database (Denmark)

    Su, G; Guldbrandtsen, B; Gregersen, V R

    2010-01-01

    Abstract This study investigated the reliability of genomic estimated breeding values (GEBV) in the Danish Holstein population. The data in the analysis included 3,330 bulls with both published conventional EBV and single nucleotide polymorphism (SNP) markers. After data editing, 38,134 SNP markers...... were available. In the analysis, all SNP were fitted simultaneously as random effects in a Bayesian variable selection model, which allows heterogeneous variances for different SNP markers. The response variables were the official EBV. Direct GEBV were calculated as the sum of individual SNP effects...... for all 18 index traits. Reliability of GEBV was assessed by squared correlation between GEBV and conventional EBV (r2GEBV, EBV), and expected reliability was obtained from prediction error variance using a 5-fold cross validation. Squared correlations between GEBV and published EBV (without any...
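The reliability measure used in this record, the squared correlation between genomic and conventional breeding values, is straightforward to compute. A minimal sketch with synthetic data (the effect size, noise level and sample size below are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic example: conventional EBVs and genomic EBVs for 200 validation bulls.
ebv = rng.normal(0.0, 10.0, size=200)
gebv = 0.8 * ebv + rng.normal(0.0, 5.0, size=200)  # imperfect genomic prediction

# Reliability proxy: squared Pearson correlation between GEBV and EBV.
r = np.corrcoef(gebv, ebv)[0, 1]
r2_gebv_ebv = r ** 2
print(f"r^2(GEBV, EBV) = {r2_gebv_ebv:.3f}")
```

In a real validation, as in the record, the correlation would be computed within a cross-validation scheme rather than on the training data.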

  18. Reliability of piping system components. Framework for estimating failure parameters from service data

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, R. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hegedus, D.; Tomic, B. [ENCONET Consulting GesmbH, Vienna (Austria); Lydell, B. [RSA Technologies, Vista, CA (United States)

    1997-12-01

This report summarizes results and insights from the final phase of an R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide. It covers the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogeneous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failures (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This multi-purpose framework, termed 'PFCA' (Pipe Failure Cause and Attribute), defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of a prior and a posterior pipe rupture probability distribution. The framework supports LOCA frequency estimation and steam line break frequency estimation, as well as the development of optimized in-service inspection strategies. 63 refs, 30 tabs, 22 figs.
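The prior-to-posterior update of a pipe failure frequency that such a framework supports can be illustrated with the standard conjugate Gamma-Poisson model; the prior parameters and service data below are hypothetical, not values from the SKI database:

```python
# Conjugate Gamma-Poisson update of a pipe failure frequency from service data.
# Prior Gamma(alpha, beta): alpha "prior failures" over beta "prior reactor-years".
alpha_prior, beta_prior = 0.5, 100.0   # hypothetical generic prior

# Hypothetical service data: 3 observed pipe failures in 450 reactor-years.
failures, exposure = 3, 450.0

# Posterior is Gamma(alpha + failures, beta + exposure).
alpha_post = alpha_prior + failures
beta_post = beta_prior + exposure

mean_prior = alpha_prior / beta_prior
mean_post = alpha_post / beta_post
print(f"prior mean frequency     = {mean_prior:.2e} per reactor-year")
print(f"posterior mean frequency = {mean_post:.2e} per reactor-year")
```

The posterior mean shifts from the generic prior toward the observed rate, which is exactly the role service data plays in plant-specific PSA parameter estimation.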

  19. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes.

    Science.gov (United States)

    Beltran, Alicia; Dadabhoy, Hafza; Ryan, Courtney; Dholakia, Ruchita; Baranowski, Janice; Li, Yuecheng; Yan, Guifang; Jia, Wenyan; Sun, Mingui; Baranowski, Tom

    2018-02-12

The eButton takes frontal images at 4s intervals throughout the day. A three-dimensional manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional images. The present paper reports a test of the inter-rater reliability and validity of the wire mesh procedure. Seventeen foods of diverse shapes and sizes served on plates, bowls and cups were selected to rigorously test the portion assessment procedure. A dietitian not involved in inter-rater reliability assessment used standard cups to independently measure the quantities of foods to generate the 'true' value for a total of seventy-five 'served' and seventy-five smaller 'left' images with diverse portion sizes. The images were displayed on a computer, on which the digital wire meshes were applied. Two dietitians and three engineers independently estimated the portion sizes of the larger ('served') and smaller ('left') images for the same foods. The engineers had higher reliability and validity than the dietitians. The dietitians had lower reliabilities and validities for the smaller, more irregular images, but the engineers did not, suggesting training could overcome this limitation. The lower reliabilities and validities for foods served in bowls, compared with plates, suggest difficulties with the curved shape of the bowls. The wire mesh procedure is an important step forward in quantifying portion size, which has been subject to substantial self-report error. Improved training procedures are needed to overcome the identified problems.

  20. Interval Estimation of Stress-Strength Reliability Based on Lower Record Values from Inverse Rayleigh Distribution

    Directory of Open Access Journals (Sweden)

    Bahman Tarvirdizade

    2014-01-01

    Full Text Available We consider the estimation of stress-strength reliability based on lower record values when X and Y are independently but not identically inverse Rayleigh distributed random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as the Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
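For this inverse Rayleigh stress-strength setup, R = P(Y < X) has the closed form λx/(λx + λy) under the CDF parameterization F(x) = exp(−λ/x²), which makes a quick Monte Carlo cross-check easy. A sketch (the scale parameters are arbitrary; the paper's record-value estimators are not reproduced here):

```python
import math
import random

random.seed(42)

def rinv_rayleigh(lam, n):
    # Inverse-transform sampling for the inverse Rayleigh CDF F(x) = exp(-lam/x^2), x > 0.
    return [math.sqrt(-lam / math.log(random.random())) for _ in range(n)]

lam_x, lam_y = 2.0, 1.0            # hypothetical scale parameters (strength, stress)
n = 200_000
strength = rinv_rayleigh(lam_x, n)
stress = rinv_rayleigh(lam_y, n)

# Monte Carlo estimate of R = P(stress < strength).
r_mc = sum(s < x for x, s in zip(strength, stress)) / n
r_exact = lam_x / (lam_x + lam_y)  # closed form under this parameterization
print(f"MC estimate R = {r_mc:.4f}, exact R = {r_exact:.4f}")
```

The same simulation scaffold is what bootstrap interval studies like the one in this record are built on, just repeated over many resamples.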

  1. Questioning the reliability of estimates of enzyme inhibitor constant: Case of competitive inhibition

    CERN Document Server

    Dhatt, Sharmistha

    2016-01-01

Reliability of kinetic parameters is crucial in understanding enzyme kinetics within cellular systems. The present study suggests a few cautions that merit attention when estimating parameters such as K(M), V(max) and K(I) using Lineweaver-Burk plots. The quality of IC(50) estimates also needs thorough reinvestigation because of their direct link with K(I) and K(M) values. Inhibition kinetics under both steady-state and non-steady-state conditions are studied, and errors in estimated parameters are compared against actual values to settle the question of their adequacy.
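The kind of error propagation the authors caution about can be seen by fitting a noisy competitive-inhibition data set on a double-reciprocal (Lineweaver-Burk) plot. A sketch with invented rate constants and 3% measurement noise (all values are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

Vmax, Km, Ki = 10.0, 2.0, 1.0        # "true" values (arbitrary units)
I = 3.0                              # inhibitor concentration
S = np.array([0.5, 1, 2, 4, 8, 16])  # substrate concentrations

Km_app = Km * (1 + I / Ki)           # competitive inhibition: only Km is rescaled
v = Vmax * S / (Km_app + S)
v_noisy = v * (1 + rng.normal(0, 0.03, S.size))   # 3% multiplicative noise

# Lineweaver-Burk: 1/v = (Km_app/Vmax)(1/S) + 1/Vmax, fitted as a straight line.
slope, intercept = np.polyfit(1 / S, 1 / v_noisy, 1)
Vmax_hat = 1 / intercept
Km_app_hat = slope * Vmax_hat
Ki_hat = I / (Km_app_hat / Km - 1)   # assumes Km known from an uninhibited run
print(Vmax_hat, Km_app_hat, Ki_hat)
```

Because the double-reciprocal transform amplifies error at low rates, even 3% noise visibly distorts the recovered parameters, which is the core of the paper's caution.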

  2. Empiric reliability of diagnostic and prognostic estimations of physical standards of children, going in for sports.

    Directory of Open Access Journals (Sweden)

    Zaporozhanov V.A.

    2012-12-01

Full Text Available In sporting-pedagogical practice, the objective assessment of the potential of trainees already at the initial stages of long-term preparation is considered a topical issue. Appropriate quantitative information allows preparation to be individualized according to the requirements of the training process. Research purpose: to demonstrate, both logically and metrologically, the expedience of a metrical method for calculating the reliability of control measurements used for diagnostics of psychophysical fitness and for predicting the professional growth of trainees in a selected sport. Material and methods. The results of control measurements on four indices of psychophysical preparedness, and expert estimates of fitness, were analysed for 24 children of a gymnastics school. The results of initial and final testing of the gymnasts on the same control tests were processed by methods of mathematical statistics. Metrical estimates of measurement reliability (stability, consistency and informativeness of the control information) were calculated for current diagnostics and prognosis of the sporting potential of those examined. Results. The expedience of using metrical calculations of a complex estimate of the psychophysical state of trainees for these purposes is metrologically substantiated. Conclusions. The research results confirm the expedience of calculating a complex estimate of the psychophysical characteristics of trainees for diagnosing fitness in a selected sport and for predicting professional growth at subsequent stages of preparation.

  3. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    Science.gov (United States)

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental researches consist in estimation of the reliability and unreliability functions and hazard rate of the helical vertical axis wind turbine designed and patented to climatic conditions for Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed. PMID:26167524

  4. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    Directory of Open Access Journals (Sweden)

    Adela-Eliza Dumitrascu

    2015-01-01

Full Text Available Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental researches consist in estimation of the reliability and unreliability functions and hazard rate of the helical vertical axis wind turbine designed and patented to climatic conditions for Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed.

  5. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.

    Science.gov (United States)

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental researches consist in estimation of the reliability and unreliability functions and hazard rate of the helical vertical axis wind turbine designed and patented to climatic conditions for Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed.
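The Monte Carlo scheme described in these records, triangular input distributions propagated to a relative-frequency histogram and cumulative (ogive) curve of energy output, can be sketched as follows; the turbine parameters and distribution bounds are invented for illustration, not the patented turbine's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical triangular inputs (min, mode, max) for daily operating conditions.
wind_speed = rng.triangular(2.0, 5.0, 12.0, n)   # m/s
hours = rng.triangular(6.0, 14.0, 24.0, n)       # productive hours per day
cp = rng.triangular(0.15, 0.25, 0.35, n)         # power coefficient

rho, area = 1.225, 3.0                           # air density (kg/m^3), swept area (m^2)
power_kw = 0.5 * rho * area * cp * wind_speed**3 / 1000.0
energy_kwh = power_kw * hours                    # simulated daily energy output

# Relative-frequency histogram and empirical cumulative curve ("ogive").
counts, edges = np.histogram(energy_kwh, bins=30)
rel_freq = counts / n
ogive = np.cumsum(rel_freq)
print(f"P(daily energy <= {edges[10]:.1f} kWh) ~= {ogive[9]:.3f}")
```

Reading probabilities off the ogive, as the abstract describes, is just evaluating this empirical cumulative curve at the energy level of interest.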

  6. Accuracy and inter-observer reliability of visual estimation compared to clinical goniometry of the elbow.

    Science.gov (United States)

    Blonna, Davide; Zarkadas, Peter C; Fitzsimmons, James S; O'Driscoll, Shawn W

    2012-07-01

To test the hypothesis that visual estimation by a trained observer is as accurate and reliable as clinical goniometry for measuring elbow range of motion. Instrument validity and inter-observer reliability of visual estimation were evaluated on a consecutive series of 50 elbow contractures. Four observers with different levels of elbow experience first estimated extension and flexion of the contracted elbows and then measured them with a blinded goniometer. Instrument validity for visually-based goniometry was extremely high. ICC scores were 0.97 for both extension and flexion estimations. Systematic error was negligible (1°) with upper limits of agreement being 9° (95% CI: 7°-11°) and 8° (95% CI: 6°-10°), respectively, for extension and flexion. For the expert surgeon, 92% of the visual estimates were within 5° of the value obtained by clinical goniometry. Between experienced observers (elbow surgeon and physician assistant), the ICCs were very high: 0.96 for extension and 0.93 for flexion. The systematic errors were low, from -1° to 1°, with the upper limit of agreement being 11° (95% CI: 8°-14°). However, agreement was poor between an inexperienced study coordinator and the others (ICCs: 0.51 and 0.38, systematic errors: 8°-18°, upper limits of agreement: 32°-40°). The accuracy of the visual estimations made by the experienced elbow surgeon was as good as the measurements taken with a goniometer by the physician assistant or the clinical fellow, and better than those taken by an inexperienced study coordinator. The trained human eye is highly capable of accurately estimating the range of motion of the elbow, compared to conventional clinical goniometry, depending on the experience of the observer. Diagnostic study, Level II.
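The agreement statistics reported here (systematic error plus limits of agreement, and the share of estimates within 5°) follow standard Bland-Altman practice. A sketch on simulated paired measurements; the bias and spread are assumed values, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical paired measurements: goniometer vs visual estimate (degrees).
gonio = rng.uniform(20, 140, size=50).round()
visual = gonio + rng.normal(1.0, 4.0, size=50).round()  # ~1 deg bias, ~4 deg spread

diff = visual - gonio
bias = diff.mean()                      # systematic error
sd = diff.std(ddof=1)
loa_upper = bias + 1.96 * sd            # 95% limits of agreement (Bland-Altman)
loa_lower = bias - 1.96 * sd
within_5 = np.mean(np.abs(diff) <= 5)   # share of estimates within 5 degrees
print(bias, loa_lower, loa_upper, within_5)
```

With real data one would also plot the differences against the pairwise means to check that agreement does not degrade at the extremes of the measurement range.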

  7. A spectral-domain optical coherence tomography device provides reliable corneal pachymetry measurements in canine eyes.

    Science.gov (United States)

    Alario, A F; Pirie, C G

    2013-06-08

    The objective of this study was to determine central corneal thickness (CCT) and the intra- and interuser reliability using a portable spectral-domain optical coherence tomography (SD-OCT) device in canine eyes. Twenty clinically normal dogs were examined. CCT measurements were obtained from both eyes of each animal three times by two operators in succession. The CCT was automatically calculated using the pachymetry software generated from eight radial scans, 6 mm in length. Mean canine CCT was 606.83±39.45 μm for all eyes examined. There was no significant difference in CCT based on the eye examined (OD vs OS), age or gender of the animal. There was no significant difference in CCT between replicates performed by the same operator; however, a small but significant difference was noted in CCT between operators. The mean difference in CCT between operators was 1.9 μm (P=0.03). The coefficient of variation for each user and between users was very low (range 0.64-1.7 per cent). The intraclass correlation coefficient comparing operators was 0.975. Based on these results, the SD-OCT device evaluated is capable of obtaining precise CCT measurements with excellent intra- and interoperator reliability in canine eyes.

  8. RELAP5/MOD3.3 Best Estimate Analyses for Human Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Andrej Prošek

    2010-01-01

Full Text Available To estimate the success criteria time windows of operator actions, a conservative approach was used in conventional probabilistic safety assessment (PSA). The current PSA standard recommends the use of best-estimate codes. The purpose of the study was to estimate the operator action success criteria time windows in scenarios in which the human actions supplement safety system actuations, as needed for an updated human reliability analysis (HRA). For the calculations, the RELAP5/MOD3.3 best-estimate thermal-hydraulic computer code and a qualified RELAP5 input model representing a two-loop pressurized water reactor of Westinghouse type were used. The results of the deterministic safety analysis were examined to determine the latest time at which the operator action can be performed while still satisfying the safety criteria. The results showed that an uncertainty analysis of the realistic calculation is in general not needed for human reliability analysis when additional time is available and/or the event is not a significant contributor to the risk.

  9. Reliability and Efficacy of Water Use Estimation Techniques and their Impact on Water Management and Policy

    Science.gov (United States)

    Singh, A.; Deeds, N.; Kelley, V.

    2012-12-01

    Estimating how much water is being used by various water users is key to effective management and optimal utilization of groundwater resources. This is especially true for aquifers like the Ogallala that are severely stressed and display depleting trends over the last many years. The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Water users within the 16 counties that comprise the HPWD draw from the Ogallala extensively. The HPWD has recently proposed flow-meters as well as various 'alternative methods' for water users to report water usage. Alternative methods include using a) site specific energy conversion factors to convert total amount of energy used (for pumping stations) to water pumped, b) reporting nozzle package (on center pivot irrigation systems) specifications and hours of usage, and c) reporting concentrated animal feeding operations (CAFOs). The focus of this project was to evaluate the reliability and effectiveness for each of these water use estimation techniques for regulatory purposes. Reliability and effectiveness of direct flow-metering devices was also addressed. Findings indicate that due to site-specific variability and hydrogeologic heterogeneity, alternative methods for estimating water use can have significant uncertainties associated with water use estimates. The impact of these uncertainties on overall water usage, conservation, and management was also evaluated. The findings were communicated to the Stakeholder Advisory Group and the Water Conservation District with guidelines and recommendations on how best to implement the various techniques.
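The first 'alternative method' mentioned, converting metered pumping energy to water volume via a site-specific conversion factor, rests on simple pump hydraulics: volume = efficiency × energy / (specific weight × head). A sketch with hypothetical numbers, which also shows how uncertainty in the assumed head feeds straight into the water-use estimate:

```python
# Sketch of the energy-based alternative method: convert metered pump energy
# to an estimated pumped volume. All numbers below are hypothetical.
RHO_G = 9810.0          # specific weight of water, N/m^3 (rho * g)

def acre_feet_pumped(energy_kwh, head_m, pump_efficiency):
    """Estimated volume from pumping energy, total dynamic head and efficiency."""
    energy_j = energy_kwh * 3.6e6            # kWh -> joules
    volume_m3 = pump_efficiency * energy_j / (RHO_G * head_m)
    return volume_m3 / 1233.48               # m^3 per acre-foot

# Hypothetical season: 150,000 kWh metered, 60 m lift, 65% wire-to-water efficiency.
v = acre_feet_pumped(150_000, 60.0, 0.65)
print(f"estimated pumping ~ {v:.0f} acre-feet")

# A +/-10% error in the assumed head maps directly into the volume estimate,
# which is the kind of site-specific uncertainty the study evaluates.
v_low = acre_feet_pumped(150_000, 66.0, 0.65)
v_high = acre_feet_pumped(150_000, 54.0, 0.65)
```

Because head and wire-to-water efficiency vary by well and season, this conversion is only as reliable as those site-specific inputs, consistent with the study's findings.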

  10. 49 CFR 375.409 - May household goods brokers provide estimates?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false May household goods brokers provide estimates? 375... REGULATIONS TRANSPORTATION OF HOUSEHOLD GOODS IN INTERSTATE COMMERCE; CONSUMER PROTECTION REGULATIONS Estimating Charges § 375.409 May household goods brokers provide estimates? A household goods broker must not...

  11. Estimating reliability of degraded system based on the probability density evolution with multi-parameter

    Directory of Open Access Journals (Sweden)

    Jiang Ge

    2017-01-01

Full Text Available System degradation is usually caused by the degradation of multiple parameters. Reliability assessment by the universal generating function has low accuracy compared with Monte Carlo simulation, and it cannot provide the probability density function of the system output performance. Therefore, a reliability assessment method based on multi-parameter probability density evolution is presented for complex degraded systems. First, the system output function is established according to the transitive relation between component parameters and the system output performance. Then, the probability density evolution equation is established based on the probability conservation principle and the system output function. Furthermore, the probability distribution characteristics of the system output performance are obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not need to discretize the performance parameters and can establish a continuous probability density function of the system output performance, with high computational efficiency and low cost. A numerical example shows that the method is applicable to evaluating the reliability of multi-parameter degraded systems.
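The Monte Carlo simulation the record uses as its accuracy benchmark can be sketched directly: sample the degrading parameters, push them through the system output function, and estimate the probability that performance stays above a threshold. The distributions and output function below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical system: output performance g(p1, p2) depends on two degrading
# parameters, each with its own degradation distribution at the time of interest.
p1 = rng.normal(10.0, 0.8, n)   # e.g. remaining stiffness
p2 = rng.normal(5.0, 0.5, n)    # e.g. remaining thickness

g = p1 * p2                      # system output function (illustrative)
threshold = 40.0                 # minimum acceptable performance

reliability = np.mean(g > threshold)
print(f"P(g > {threshold}) ~= {reliability:.4f}")
```

The probability density evolution method of the record targets the same quantity but obtains the full continuous density of g by solving a differential equation rather than by sampling.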

  12. Estimating the influence of natural hazards on pipeline risk and system reliability

    Energy Technology Data Exchange (ETDEWEB)

    Porter, M.; Logue, C.; Savigny, W.; Esford, F.; Bruce, I. [BGC Engineering Inc., Vancouver, BC (Canada)

    2004-07-01

The techniques for understanding pipeline hazard exposure were discussed along with methods to estimate pipeline risk and reliability. Although natural hazards, such as ground movement, are not a major contributor to pipeline failures compared with other causes such as third-party damage, corrosion and material defects, the risk posed by geohazards is proportionally quite significant. In cases where difficult ground conditions have not been properly accounted for in pipeline design, construction, and operation, geohazards may have an overriding influence on pipeline risk and reliability. The many natural hazards that can threaten the safe and efficient operation of pipelines can be divided into 3 broad groups, namely geotechnical, hydrotechnical and tectonic hazards. This paper addressed the question of why the relative significance of natural hazards is often underestimated by the pipeline industry, with reference to published western European and U.S. incident data. Geohazards can cause pipeline failures, with consequences including injury, death, environmental impact, property damage, and lengthy service disruption. This paper introduced a framework for estimating the influence of geohazards on pipeline risk and system reliability. 14 refs., 1 tab., 6 figs.

  13. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability

    Science.gov (United States)

    Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine

    2016-01-01

    Background: The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. Objective: We examined the intra- and inter-individual variability in glycemic response to a single food challenge and methodologic and biological factors that potentially mediate this response. Design: The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18–85 y), and body mass index [BMI (in kg/m2): 20–35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. Results: The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing sample size, replication of reference and test foods, and length of blood sampling, as well as AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in mean GI value for white bread, respectively. Conclusions: These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that it is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status

  14. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability.

    Science.gov (United States)

    Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine; Lichtenstein, Alice H

    2016-10-01

    The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. We examined the intra- and inter-individual variability in glycemic response to a single food challenge and methodologic and biological factors that potentially mediate this response. The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18-85 y), and body mass index [BMI (in kg/m2): 20-35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing sample size, replication of reference and test foods, and length of blood sampling, as well as AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in mean GI value for white bread, respectively. These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that it is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status significantly contributes to the variability in GI value
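The GI computation behind these records divides the incremental area under the test-food glucose curve by that of the glucose reference, with area below baseline ignored. A sketch using the common trapezoidal approach (the glucose curves are invented, and clipping the trapezoids at the sample points is a simplification of the exact geometric method):

```python
def iauc(times_min, glucose, baseline=None):
    """Incremental area under the curve, ignoring area below baseline
    (trapezoids clipped at the baseline at sample points; a simplification)."""
    if baseline is None:
        baseline = glucose[0]
    area = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_min, glucose),
                                  zip(times_min[1:], glucose[1:])):
        h0, h1 = max(g0 - baseline, 0.0), max(g1 - baseline, 0.0)
        area += 0.5 * (h0 + h1) * (t1 - t0)
    return area

# Hypothetical 2-h glucose curves (mmol/L) after 50 g available carbohydrate.
t = [0, 15, 30, 45, 60, 90, 120]
ref = [5.0, 7.5, 8.5, 7.8, 6.9, 5.8, 5.1]    # glucose drink (reference)
test = [5.0, 6.4, 7.2, 6.9, 6.3, 5.5, 5.0]   # test food

gi = 100.0 * iauc(t, test) / iauc(t, ref)
print(f"GI ~= {gi:.0f}")
```

Because GI is a ratio of two noisy areas, the within-subject variability in either curve propagates directly into the GI value, which is the variability these records quantify.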

  15. THE SIMULATION DIAGNOSTIC METHODS AND REGENERATION WAYS OF REINFORCED-CONCRETE CONSTRUCTIONS OF BRIDGES IN PROVIDING THEIR OPERATING RELIABILITY AND LONGEVITY

    Directory of Open Access Journals (Sweden)

    B. V. Savchinskiy

    2010-03-01

Full Text Available On the basis of an analysis of existing diagnostic methods and regeneration techniques for reinforced-concrete bridge constructions, recommendations are offered on the introduction of new, modern technologies for the renewal of reinforced-concrete bridge constructions so as to provide their operating reliability and longevity.

  16. Reliability estimation of the milling machines teeth obtained by welding deposition process

    Directory of Open Access Journals (Sweden)

    Iovanas Daniela Maria

    2017-01-01

Full Text Available Currently, governments and authorities worldwide annually allocate significant amounts of money in an attempt to keep the existing road infrastructure functional and safe. In our country, too, there is concern to rehabilitate a significant number of damaged road structures and to create new modern ones. The research presented in this paper focused on applying modern manufacturing technologies, namely loading by welding of the teeth of milling machines used to strip asphalt road structures, in order to increase their reliability. The reliability of classical teeth made from 41Cr4 steel was thus estimated and compared with that of teeth whose active surfaces were loaded by welding with special filler materials, which gives them self-protection against wear and against locking in rotation; these teeth equipped a Wirtgen 50 milling machine in operation on one of the national roads.

  17. Providing Reliability Services through Demand Response: A Prelimnary Evaluation of the Demand Response Capabilities of Alcoa Inc.

    Energy Technology Data Exchange (ETDEWEB)

    Starke, Michael R [ORNL; Kirby, Brendan J [ORNL; Kueck, John D [ORNL; Todd, Duane [Alcoa; Caulfield, Michael [Alcoa; Helms, Brian [Alcoa

    2009-02-01

Demand response is the largest underutilized reliability resource in North America. Historic demand response programs have focused on reducing overall electricity consumption (increasing efficiency) and shaving peaks but have not typically been used for immediate reliability response. Many of these programs have been successful but demand response remains a limited resource. The Federal Energy Regulatory Commission (FERC) report, 'Assessment of Demand Response and Advanced Metering' (FERC 2006), found that only five percent of customers are on some form of demand response program. Collectively they represent an estimated 37,000 MW of response potential. These programs reduce overall energy consumption, lower greenhouse gas emissions by allowing fossil fuel generators to operate at increased efficiency, and reduce stress on the power system during periods of peak loading. As the country continues to restructure energy markets with sophisticated marginal cost models that attempt to minimize total energy costs, the ability of demand response to create meaningful shifts in the supply and demand equations is critical to creating a sustainable and balanced economic response to energy issues. Restructured energy market prices are set by the cost of the next incremental unit of energy, so that as additional generation is brought into the market, the cost for the entire market increases. The benefit of demand response is that it reduces overall demand and shifts the entire market to a lower pricing level. This can be very effective in mitigating price volatility or scarcity pricing as the power system responds to changing demand schedules, loss of large generators, or loss of transmission. As a global producer of alumina, primary aluminum, and fabricated aluminum products, Alcoa Inc. has the capability to provide demand response services through its manufacturing facilities and uniquely through its aluminum smelting facilities. For a typical aluminum smelter

  18. Reliability estimate of unconfined compressive strength of black cotton soil stabilized with cement and quarry dust

    Directory of Open Access Journals (Sweden)

    Dayo Oluwatoyin AKANBI

    2017-06-01

    Full Text Available Reliability estimates of unconfined compressive strength (UCS) values were developed for quarry dust treated black cotton soil stabilized with cement, compacted at the British Standard Light (BSL) effort, for use as road sub-base material, by incorporating laboratory UCS test data into a predictive model. The data obtained were incorporated into a FORTRAN-based first-order reliability program to obtain reliability index values. Variable factors such as water content relative to optimum (WRO), hydraulic modulus (HM), quarry dust (QD), cement (C), tri-calcium silicate (C3S), di-calcium silicate (C2S), tri-calcium aluminate (C3A), and maximum dry density (MDD) produced an acceptable safety index value of 1.0, achieved at coefficient of variation (COV) ranges of 10-100%. Observed trends indicate that WRO, C3S, C2S and MDD are greatly influenced by the COV and therefore must be strictly controlled in QD/C treated black cotton soil for use as sub-base material in road pavements. Stochastically, the BSL compaction effort can be used to model the 7-day unconfined compressive strength of compacted quarry dust/cement treated black cotton soil as a sub-base material for road pavement at all coefficients of variation (COV) in the range 10-100%, because the safety indices obtained are higher than the acceptable value of 1.0.
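The safety index the abstract refers to can be illustrated with a minimal sketch. This assumes a simple linear limit state (capacity minus demand) with independent normal variables, which is not the paper's actual FORTRAN model or its soil parameters; the 1800 kPa / 1000 kPa figures below are invented for illustration:

```python
import math

def reliability_index(mu_r, cov_r, mu_s, cov_s):
    """First-order (Cornell) reliability index for the linear limit
    state g = R - S with independent normal capacity R and demand S."""
    sigma_r = mu_r * cov_r
    sigma_s = mu_s * cov_s
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

# Illustrative values only (not the study's data): a mean capacity of
# 1800 kPa against a 1000 kPa demand, with the capacity COV swept over
# the 10-100% range examined in the paper.
for cov in (0.10, 0.50, 1.00):
    beta = reliability_index(1800, cov, 1000, 0.10)
    print(f"capacity COV = {cov:.0%}  ->  safety index beta = {beta:.2f}")
```

As the sweep shows, the index falls as the COV grows, which is why variables whose COV strongly drives the index must be tightly controlled.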

  19. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  20. Methods for estimating the reliability of software in fail-safe avionics systems

    Directory of Open Access Journals (Sweden)

    О. А. Зеленков

    2003-03-01

    Full Text Available The paper deals with providing software reliability control, during the operational period, for modern on-board fault-tolerant computer-based control systems created on the basis of integrated modular avionics technology. The authors describe a method for evaluating the probability of software failures and a sequential calculation algorithm based on determining bounds on the number of software failures from operational control data.

  1. The evidence provided by a single trial is less reliable than its statistical analysis suggests.

    NARCIS (Netherlands)

    Borm, G.F.; Lemmers, F.A.M.O.; Fransen, J.; Donders, A.R.T.

    2009-01-01

    OBJECTIVE: To investigate whether a single trial can provide sufficiently robust evidence to warrant clinical implementation of its results. Trial-specific factors, such as subject selection, study design, and execution strategy, have an impact on the outcome of trials. In multiple trials, they may

  2. Reliability estimation of grouped functional imaging data using penalized maximum likelihood.

    Science.gov (United States)

    Gullapalli, Rao P; Maitra, Ranjan; Roys, Steve; Smith, Gerald; Alon, Gad; Greenspan, Joel

    2005-05-01

    We analyzed grouped fMRI data and developed a reliability analysis for such data using the method of penalized maximum likelihood (ML). Specifically, this technique was applied to a somatosensory paradigm that used a mechanical probe to provide noxious stimuli to the foot, and a paradigm consisting of four levels of graded peripheral neuromuscular electrical stimulation (NMES). In each case, reliability maps of activation were generated. Receiver operating characteristic (ROC) curves were constructed in the case of the graded NMES paradigm for each level of stimulation, which revealed an increase in the specificity of activation with increasing stimulation levels. In addition, penalized ML was used to determine whether the grouped reliability maps obtained from one stimulus level were significantly different from those obtained at other levels. The results show a significant difference between the reliability maps at different stimulation levels, demonstrating that the method can be used to compare activation between groups, as well as to perform reliability assessments. Furthermore, this method potentially can be used to assess the longitudinal effect of treatment therapies within a group. Copyright 2005 Wiley-Liss, Inc.

  3. Estimating the Optimal Capacity for Reservoir Dam based on Reliability Level for Meeting Demands

    Directory of Open Access Journals (Sweden)

    Mehrdad Taghian

    2017-02-01

    Full Text Available Introduction: One of the practical and classic problems in water resource studies is estimation of the optimal reservoir capacity to satisfy demands. However, fully supplying demands over all periods requires a very high dam to cover severe drought conditions. That means a major part of the reservoir capacity and costs is only usable for a short period of the reservoir lifetime, which would be unjustified in economic analysis. Thus, in the proposed method and model, full supply of demands is only possible for a percentage of the statistical period, according to a reliability constraint. Although this concept seems simple, general methods require adding binary variables for meeting or not meeting demands to the linear programming model structure. With many binary variables, solving the problem becomes time-consuming and difficult. Another way to solve the problem is the application of the yield model. This model relies on simplifying assumptions and makes it difficult to consider the details of the water resource system. The application of evolutionary algorithms to problems with many constraints is also very complicated. Therefore, this study pursues another solution. Materials and Methods: In this study, to develop and improve the usual methods, instead of mixed integer linear programming (MILP) and the above methods, a simulation model including flow network linear programming is used, coupled with an interface code in Matlab to compute the reliability from the output file of the simulation model. The Acres Reservoir Simulation Program (ARSP) was utilized as the simulation model. A major advantage of the ARSP is its inherent flexibility in defining the operating policies through a penalty structure specified by the user. The ARSP utilizes network flow optimization techniques to handle a subset of general linear programming (LP) problems for individual time intervals

  4. A unique method for estimating the reliability learning curve of optic nerve sheath diameter ultrasound measurement.

    Science.gov (United States)

    Zeiler, Frederick A; Ziesmann, Markus T; Goeres, Patrick; Unger, Bertram; Park, Jason; Karakitsos, Dimitrios; Blaivas, Michael; Vergis, Ashley; Gillman, Lawrence M

    2016-12-01

    Optic nerve sheath diameter (ONSD) measurement using ultrasound has been proposed as a rapid, non-invasive, point of care technique to estimate intra-cranial pressure (ICP). Ultrasonic measurement of the optic nerve sheath can be quite challenging and there is limited literature surrounding learning curves for this technique. We attempted to develop a method to estimate the reliability learning curve for ONSD measurement utilizing a unique definition of reliability: a plateau in within-subject variability with unchanged between-subject variability. As part of a previously published study, a single operator measured the ONSD in 120 healthy volunteers over a 6-month period. Utilizing the assumption that the four measurements made on each subject during this study should be equal, the relationship of within-subject variance was described using a quadratic-plateau model as assessed by segmental polynomial (knot) regression. Segmental polynomial (knot) regression revealed a plateau in within-subject variance after the 21st subject. However, there was no difference in overall mean values [3.69 vs 3.68 mm (p = 0.884)] or between-subject variance [14.49 vs 11.92 (p = 0.54)] above or below this cutoff. This study suggests a significant finite learning curve associated with ONSD measurements. It also offers a unique method of calculating the learning curve associated with ONSD measurement.

  5. Reliability of inter-anterior superior iliac spinous distance as compared to foot length for stature estimation in South Indians.

    Science.gov (United States)

    Nachiket, S; Sujatha, N; Priya, R; Raveendranath, V; Rema, D; Roopa, R

    2010-08-01

    Estimation of stature from isolated body parts is especially important to forensic scientists and anthropologists. The aims of this study were to evaluate the accuracy of inter-anterior superior iliac spinous distance in determining the stature of an individual as compared to foot length, and to note sex differences in the above mentioned parameters. One hundred normal healthy adult subjects from South India consisting of 50 males and 50 females were studied. The height, interspinous distance, and foot length of the subjects were measured. The mean and standard deviation of the height, interspinous distance, foot length and proportions of interspinous distance and foot length to height were calculated, and significance testing was done for sex differences. The correlation between stature and interspinous distance and between stature and foot length was estimated, and linear regression equations for stature estimation were calculated. The height, interspinous distance and foot length were significantly greater in males. The proportion of interspinous distance to stature was significantly greater in females. Foot length showed a stronger positive correlation with stature compared to interspinous distance. When the sexes were considered separately, the accuracy of stature estimation from interspinous distance increased greatly. Though interspinous distance is not as reliable as foot length in stature estimation, it could provide valuable data regarding stature when isolated pelvises are available. Copyright 2010 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
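The linear-regression approach described can be sketched as follows. The paired measurements below are hypothetical values for illustration, not the study's South Indian sample:

```python
# Hypothetical paired measurements in cm (not the study's data):
# foot length and standing height for eight adults.
foot = [24.1, 25.3, 23.8, 26.0, 24.9, 25.7, 23.5, 26.4]
height = [162.0, 170.5, 160.2, 174.8, 167.1, 172.0, 158.9, 176.3]

n = len(foot)
mx, my = sum(foot) / n, sum(height) / n
sxx = sum((x - mx) ** 2 for x in foot)
syy = sum((y - my) ** 2 for y in height)
sxy = sum((x - mx) * (y - my) for x, y in zip(foot, height))

b = sxy / sxx                    # regression slope
a = my - b * mx                  # intercept
r = sxy / (sxx * syy) ** 0.5     # Pearson correlation

print(f"stature = {a:.1f} + {b:.2f} * foot_length  (r = {r:.3f})")
```

Fitting such an equation separately for each sex, as the study does, is what raises the accuracy of the interspinous-distance predictor.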

  6. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a-priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.

  7. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    Science.gov (United States)

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

    In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single-root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance for the regression formula. For the use of this primary multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor "arithmetic mean" of luminance yielded a regression equation with a standard error of 6.72 years. The results confirm that root translucency is an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.

  8. Reliability of estimating the room volume from a single room impulse response.

    Science.gov (United States)

    Kuster, Martin

    2008-08-01

    The methods investigated for the room volume estimation are based on geometrical acoustics, eigenmode, and diffuse field models and no data other than the room impulse response are available. The measurements include several receiver positions in a total of 12 rooms of vastly different sizes and acoustic characteristics. The limitations in identifying the pivotal specular reflections of the geometrical acoustics model in measured room impulse responses are examined both theoretically and experimentally. The eigenmode method uses the theoretical expression for the Schroeder frequency and the difficulty of accurately estimating this frequency from the varying statistics of the room transfer function is highlighted. Reliable results are only obtained with the diffuse field model and a part of the observed variance in the experimental results is explained by theoretical expressions for the standard deviation of the reverberant sound pressure and the reverberation time. The limitations due to source and receiver directivity are discussed and a simple volume estimation method based on an approximate relationship with the reverberation time is also presented.

  9. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    Science.gov (United States)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and

  10. Reliability and validity of the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30).

    Science.gov (United States)

    Peker, Kadriye; Köse, Taha Emre; Güray, Beliz; Uysal, Ömer; Erdem, Tamer Lütfi

    2017-04-01

    To culturally adapt the Turkish version of Rapid Estimate of Adult Literacy in Dentistry (TREALD-30) for Turkish-speaking adult dental patients and to evaluate its psychometric properties. After translation and cross-cultural adaptation, TREALD-30 was tested in a sample of 127 adult patients who attended a dental school clinic in Istanbul. Data were collected through clinical examinations and self-completed questionnaires, including TREALD-30, the Oral Health Impact Profile (OHIP), the Rapid Estimate of Adult Literacy in Medicine (REALM), two health literacy screening questions, and socio-behavioral characteristics. Psychometric properties were examined using Classical Test Theory (CTT) and Rasch analysis. Internal consistency (Cronbach's Alpha = 0.91) and test-retest reliability (Intraclass correlation coefficient = 0.99) were satisfactory for TREALD-30. It exhibited good convergent and predictive validity. Monthly family income, years of education, dental flossing, health literacy, and health literacy skills were found to be strong predictors of patients' oral health literacy (OHL). Confirmatory factor analysis (CFA) confirmed a two-factor model. The Rasch model explained 37.9% of the total variance in this dataset. In addition, TREALD-30 had eleven misfitting items, which indicated evidence of multidimensionality. The reliability indices provided in Rasch analysis (person separation reliability = 0.91 and expected-a-posteriori/plausible reliability = 0.94) indicated that TREALD-30 had acceptable reliability. TREALD-30 showed satisfactory psychometric properties. It may be used to identify patients with low OHL. Socio-demographic factors, oral health behaviors and health literacy skills should be taken into account when planning future studies to assess the OHL in both clinical and community settings.
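The internal-consistency figure reported (Cronbach's alpha) can be reproduced on toy data. The item scores below are invented for illustration, not TREALD-30 responses:

```python
# Hypothetical item scores (rows = respondents, columns = items);
# invented for illustration, not TREALD-30 responses.
scores = [
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]

def pvariance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(scores[0])
item_vars = [pvariance([row[j] for row in scores]) for j in range(k)]
total_var = pvariance([sum(row) for row in scores])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```

With real instruments the same formula is applied to all 30 items; values around 0.9, as reported for TREALD-30, indicate high internal consistency.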

  11. Maximum likelihood estimation and the multivariate Bernoulli distribution: An application to reliability

    Energy Technology Data Exchange (ETDEWEB)

    Kvam, P.H.

    1994-08-01

    We investigate systems designed using redundant component configurations. If external events exist in the working environment that cause two or more components in the system to fail within the same demand period, the designed redundancy in the system can be quickly nullified. In the engineering field, such events are called common cause failures (CCFs), and are primary factors in some risk assessments. If CCFs have positive probability, but are not addressed in the analysis, the assessment may contain a gross overestimation of the system reliability. We apply a discrete, multivariate shock model for a parallel system of two or more components, allowing for positive probability that such external events can occur. The methods derived are motivated by attribute data for emergency diesel generators from various US nuclear power plants. Closed form solutions for maximum likelihood estimators exist in many cases; statistical tests and confidence intervals are discussed for the different test environments considered.

  12. Reliability of using nondestructive tests to estimate compressive strength of building stones and bricks

    Directory of Open Access Journals (Sweden)

    Ali Abd Elhakam Aliabdo

    2012-09-01

    Full Text Available This study aims to investigate the relationships between Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus the compressive strength (fc) of stones and bricks. Four types of rocks (marble, pink limestone, white limestone and basalt) and two types of bricks (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using other specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and the compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
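The combined method amounts to fitting a two-predictor linear model, fc = a + b*RN + c*UPV, by least squares. The calibration data below are hypothetical, not the study's measurements:

```python
# Hypothetical calibration data (not the study's measurements): rebound
# number RN, ultrasonic pulse velocity UPV (km/s) and measured
# compressive strength fc (MPa) for eight specimens.
RN  = [28, 32, 35, 40, 44, 48, 52, 55]
UPV = [3.1, 3.3, 3.4, 3.7, 3.9, 4.1, 4.3, 4.4]
fc  = [14.0, 17.5, 20.1, 25.6, 29.8, 34.2, 38.9, 41.5]

def solve3(A, y):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, y)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Normal equations for the combined model fc = a + b*RN + c*UPV.
X = [[1.0, r, v] for r, v in zip(RN, UPV)]
A = [[sum(x[i] * x[j] for x in X) for j in range(3)] for i in range(3)]
y = [sum(x[i] * f for x, f in zip(X, fc)) for i in range(3)]
a, b, c = solve3(A, y)

pred = [a + b * r + c * v for r, v in zip(RN, UPV)]
mean_fc = sum(fc) / len(fc)
r2 = 1 - sum((p - f) ** 2 for p, f in zip(pred, fc)) / sum((f - mean_fc) ** 2 for f in fc)
print(f"fc = {a:.2f} + {b:.3f}*RN + {c:.3f}*UPV,  R^2 = {r2:.3f}")
```

Because RN and UPV carry partly independent information about the material, the combined fit is generally more reliable than either single-predictor model.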

  13. Lifetime Reliability Estimate and Extreme Permanent Deformations of Randomly Excited Elasto-Plastic Structures

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1983-01-01

    A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered to be the outcome of identically distributed, independent stochastic variables, for which a model is suggested. Further assuming the interarrival times of the elementary loading histories to be specified by a Poisson process, and the duration of these to be small compared to the designed life-time, the accumulated plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies.

  14. Reliability of third molar development for age estimation in Gujarati population: A comparative study.

    Science.gov (United States)

    Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema

    2015-01-01

    Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method based on eight developmental stages was developed to determine maturity scores as a function of age and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation from the developmental stages of the third molar on orthopantomograms, using Demirjian's eight-teeth method with both the French maturity scores and an Indian-specific formula. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using both Demirjian's formula and the Indian formula. Statistical analysis used the Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence it can be applied for age estimation in the present Gujarati population. Females were ahead of males in achieving dental maturity; thus, completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.

  15. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys.

    Directory of Open Access Journals (Sweden)

    Flávio Chaimowicz

    Full Text Available The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.

  16. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys.

    Science.gov (United States)

    Chaimowicz, Flávio; Burdorf, Alex

    2015-01-01

    The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.
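A screening-accuracy adjustment of the kind the authors describe is commonly done with the Rogan-Gladen estimator; the review does not spell out its exact procedure, and the sensitivity/specificity numbers below are purely illustrative:

```python
def rogan_gladen(apparent, sensitivity, specificity):
    """Standard Rogan-Gladen correction of an apparent prevalence for
    imperfect screening sensitivity and specificity."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Purely illustrative numbers: an observed prevalence of 8% from a
# screen with 50% sensitivity and perfect specificity implies a true
# prevalence of 16%.
p = rogan_gladen(0.08, 0.50, 1.00)
print(f"adjusted prevalence = {p:.1%}")  # adjusted prevalence = 16.0%
```

This illustrates how a screen that misses many true cases can roughly halve the apparent prevalence, in line with the review's jump from ~8% observed to ~15-16% adjusted.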

  17. Extra petals in the buttercup (Ranunculus repens) provide a quick method to estimate the age of meadows.

    Science.gov (United States)

    Warren, John

    2009-09-01

    There is a widely used crude method to estimate the age of hedgerows (Hooper's rule) based on species' richness. The aim of this study was to try to establish a similar field method for estimating the age of grasslands based on the accumulation of macro-somatic mutations. A countrywide survey was carried out by the British public to investigate the relationship between grassland age and the number of Ranunculus repens (creeping buttercup) plants with extra petals. In addition, the relationship between grassland age and R. repens pollen viability was also investigated. Each plant with flowers with additional petals in a sample of 100 was found to equate to approx. 7 years. A stronger significant correlation was observed between pollen viability and population age; however, this is not amenable to providing field estimates. The age of British grasslands can be easily and reliably estimated in the field by counting the number of flowers with additional petals in R. repens in meadows up to 200 years old. An attempt to estimate the heritability of extra petals suggests that the phenotype results from the slow accumulation of somatic mutations in a species that primarily reproduces vegetatively.
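The field rule reported (each extra-petalled plant per 100 sampled corresponds to roughly 7 years) reduces to a one-line estimate; the function name and sample count below are illustrative:

```python
def estimate_meadow_age(extra_petal_plants, sample_size=100):
    """Rough meadow age in years from the number of R. repens plants
    bearing extra-petalled flowers, using the ~7 years per plant-per-100
    rule reported in the study (stated to hold up to ~200 years).
    Function name and inputs are illustrative."""
    per_hundred = extra_petal_plants * 100 / sample_size
    return 7 * per_hundred

# 12 extra-petalled plants in a sample of 100 suggests a meadow of
# roughly 84 years.
print(estimate_meadow_age(12))
```

As with Hooper's rule for hedgerows, the output is a coarse field estimate rather than a precise date.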

  18. Teaching Confirmatory Factor Analysis to Non-Statisticians: A Case Study for Estimating Composite Reliability of Psychometric Instruments.

    Science.gov (United States)

    Gajewski, Byron J; Jiang, Yu; Yeh, Hung-Wen; Engelman, Kimberly; Teel, Cynthia; Choi, Won S; Greiner, K Allen; Daley, Christine Makosky

    2014-01-01

    Texts and software that we currently use for teaching multivariate analysis to non-statisticians lack coverage of confirmatory factor analysis (CFA). The purpose of this paper is to provide educators with a complement to these resources that includes CFA and its computation. We focus on how to use CFA to estimate a "composite reliability" of a psychometric instrument. This paper provides guidance for introducing, via a case study, the non-statistician to CFA. As a complement to our instruction about the more traditional SPSS, we successfully piloted the software R for estimating CFA with nine non-statisticians. This approach can be used with healthcare graduate students taking a multivariate course, as well as modified for community stakeholders of our Center for American Indian Community Health (e.g. community advisory boards, summer interns, & research team members). The placement of CFA at the end of the class is strategic and gives us an opportunity to do some innovative teaching: (1) build ideas for understanding the case study using previous course work (such as ANOVA); (2) incorporate multi-dimensional scaling (that students already learned) into the selection of a factor structure (new concept); (3) use interactive data from the students (active learning); (4) review matrix algebra and its importance to psychometric evaluation; (5) show students how to do the calculation on their own; and (6) give students access to an actual recent research project.
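
A composite reliability of the kind estimated from a CFA can be illustrated with a short sketch. Assuming standardized loadings and uncorrelated measurement errors, a Raykov-style composite reliability is the squared sum of loadings over that quantity plus total error variance. The loading values below are invented for illustration, not taken from the case study:

```python
# Composite reliability (Raykov's rho) from standardized CFA loadings.
# Illustrative loadings only -- not the case study's fitted values.

def composite_reliability(loadings):
    """rho_c = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2)),
    valid for standardized loadings with uncorrelated errors."""
    s = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error_var)

rho = composite_reliability([0.7, 0.8, 0.6, 0.75])
print(round(rho, 3))
```

In R-based teaching this quantity is usually obtained from the fitted lavaan model; the arithmetic above is what that computation reduces to.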

  19. Can deuterium tracing be used for reliably estimating water use of tropical trees and bamboo?

    Science.gov (United States)

    Schwendenmann, Luitgard; Dierick, Diego; Köhler, Michael; Hölscher, Dirk

    2010-07-01

    Reliable estimates of water use by trees and other woody plants are crucial for an improved understanding of plant physiology and for water resource management. Since the 1980s, the thermal dissipation probe (TDP) method has been widely applied in trees, proved to be fairly accurate but is challenging in remote areas. Also in the 1980s, the deuterium (D(2)O or deuterium oxide) tracing method was proposed, which so far has less often been applied. However, deuterium tracing requires less sophisticated equipment in the field and new analytical methods reduce costs and increase sample throughput. The objectives of this study were (i) to compare plant water use estimates of the TDP and D(2)O method and (ii) to determine whether the D(2)O method is appropriate for assessing absolute magnitudes of plant water use. The two methods were employed on five tropical tree species and a bamboo species growing in a reforestation stand in the Philippines and an agroforestry system in Indonesia. For bamboo, an increase in D(2)O values in neighbouring, non-labelled culms suggests that injected D(2)O was partly redistributed among culms, which would seriously limit the accurate estimation of water use for the target culm. For trees, water use estimates resulting from the D(2)O tracing method were proportional to the TDP results (r(2) = 0.85, P < 0.001), but absolute values were, on average, about seven times higher. This overestimation may be due to the assumptions underlying the D(2)O tracing method, such as the conservation of tracer mass, not being met. Further, it cannot be excluded that underestimation of water use by the TDP method contributed partly to the observed difference. However, when considering known sources of error, a large part of the observed difference remains unexplained. Based on our results, the use of the D(2)O tracing method cannot be recommended without further experimental testing if absolute values of whole-plant water use are a major goal. 

  20. Reliable Estimates of Mantle Water-Activity from Amphibole-Bearing Rocks

    Science.gov (United States)

    Lamb, W. M.; Popp, R. K.

    2005-12-01

    robust determination of the temperature of mineral equilibration is from 2-pyroxene thermometry. If this is the case, then amphibole equilibria from assemblage A yield the most reliable estimate of aH2O (0.004 to 0.025). Another method for determining aH2O from amphibole equilibria relies on the equilibrium between the iron oxy-component and hydroxy-component, as described in the dehydrogenation/oxidation reaction Fe2+ + OH- = Fe3+ + O2- + 1/2 H2. The proportions of iron oxy- and hydroxy-component in amphibole define the equilibrium fH2 at fixed T and P (Popp et al., 1995, 2004). Using the amphibole composition from the Dish Hill sample, combined with an fO2 estimated from olivine-spinel oxygen barometry, the activity of H2O was calculated over the range of relevant P-T conditions. Activities of H2O in the range 0.01 to 0.02 occur in the same P-T region as those defined by the dehydration equilibria. Thus, the concordance of these three different approaches, 2-pyroxene thermometry, amphibole dehydration equilibria, and amphibole iron oxy-hydroxy equilibrium, yields a reliable aH2O estimate of approx. 0.015 for this assemblage.

  1. Determinants of the reliability of ultrasound tomography sound speed estimates as a surrogate for volumetric breast density

    Energy Technology Data Exchange (ETDEWEB)

    Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)

    2015-10-15

    Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.
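
The random-effects decomposition used in the study implies a simple intraclass correlation computation: the share of total variance attributable to differences between subjects. A minimal sketch, using the abstract's reported percentage shares as inputs (the residual component is treated as zero here purely for illustration):

```python
# ICC from variance components: var_between / total variance.
# Component values are illustrative shares, not the study's raw variances.

def icc(var_subject, *other_components):
    """Intraclass correlation from a random-effects decomposition."""
    total = var_subject + sum(other_components)
    return var_subject / total

# subject, scan (baseline vs follow-up), inter-rater, intra-rater
print(round(icc(86.0, 7.5, 1.1, 0.0), 3))
```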

  2. Estimating sleep from multisensory armband measurements: validity and reliability in teens.

    Science.gov (United States)

    Roane, Brandy M; Van Reen, Eliza; Hart, Chantelle N; Wing, Rena; Carskadon, Mary A

    2015-12-01

    Given the recognition that sleep may influence obesity risk, there is increasing interest in measuring sleep parameters within obesity studies. The goal of the current analyses was to determine whether the SenseWear(®) Pro3 Armband (armband), typically used to assess physical activity, is reliable at assessing sleep parameters. The armband was compared with the AMI Motionlogger(®) (actigraph), a validated activity monitor for sleep assessment, and with polysomnography, the gold standard for assessing sleep. Participants were 20 adolescents (mean age = 15.5 years) with a mean body mass index percentile of 63.7. All participants wore the armband and actigraph on their non-dominant arm while in-lab during a nocturnal polysomnographic recording (600 min). Epoch-by-epoch sleep/wake data and concordance of sleep parameters were examined. No significant sleep parameter differences were found between the armband and polysomnography; the actigraph tended to overestimate sleep and underestimate wake compared with polysomnography. Both devices showed high sleep sensitivity, but lower wake detection rates. Bland-Altman plots showed large individual differences in armband sleep parameter concordance rates. The armband did well estimating sleep overall, with group results more similar to polysomnography than the actigraph; however, the armband was less accurate at an individual level than the actigraph. © 2015 European Sleep Research Society.
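
Epoch-by-epoch sleep/wake comparisons of this kind reduce to a sensitivity (sleep detection rate) and specificity (wake detection rate) against polysomnography. A toy sketch with invented epoch sequences, not the study's recordings:

```python
# Epoch-by-epoch comparison of a device against polysomnography (PSG).
# 1 = sleep, 0 = wake; the sequences below are invented examples.

def epoch_agreement(device, psg):
    tp = sum(d == 1 and p == 1 for d, p in zip(device, psg))  # sleep hits
    tn = sum(d == 0 and p == 0 for d, p in zip(device, psg))  # wake hits
    sleep_epochs = sum(psg)
    wake_epochs = len(psg) - sleep_epochs
    sensitivity = tp / sleep_epochs   # sleep detection rate
    specificity = tn / wake_epochs    # wake detection rate
    return sensitivity, specificity

device = [1, 1, 1, 0, 1, 1, 0, 1]
psg    = [1, 1, 1, 1, 1, 0, 0, 1]
sens, spec = epoch_agreement(device, psg)
print(round(sens, 2), round(spec, 2))
```

The pattern reported in the abstract (high sleep sensitivity, lower wake detection) corresponds to a high first value and a low second value.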

  3. Do group-specific equations provide the best estimates of stature?

    Science.gov (United States)

    Albanese, John; Osley, Stephanie E; Tuck, Andrew

    2016-04-01

    An estimate of stature can be used by a forensic anthropologist with the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: Do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and the humerus. These results suggest that group-specific equations do not provide better results for estimating stature, while at the same time being more difficult to apply because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Software project estimation the fundamentals for providing high quality information to decision makers

    CERN Document Server

    Abran, Alain

    2015-01-01

    Software projects are often late and over-budget and this leads to major problems for software customers. Clearly, there is a serious issue in estimating a realistic software project budget. Furthermore, generic estimation models cannot be trusted to provide credible estimates for projects as complex as software projects. This book presents a number of examples using data collected over the years from various organizations building software. It also presents an overview of the not-for-profit organization which collects data on software projects, the International Software Benchmarking Standards Group.

  5. A reliable and accurate portable device for rapid quantitative estimation of iodine content in different types of edible salt.

    Science.gov (United States)

    Yadav, Kapil; Kumar, Rakesh; Chakrabarty, Arijit; Pandav, Chandrakant S

    2015-01-01

    Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH to estimate the iodine content in salt. Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt were 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g of iodized salt and 10 g of iodized salt by the iCheck Iodine device was 0.983. The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate in the field setting and in different types of salt.

  6. Using operational data to estimate the reliable yields of water-supply wells

    Science.gov (United States)

    Misstear, Bruce D. R.; Beeson, Sarah

    The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere.

  7. RELIABILITY OF MCKERN AND STEWART METHOD FOR ESTIMATING AGE IN MALES : AN AUTOPSY STUDY IN A TEACHING HOSPITAL IN TIRUPATHI

    Directory of Open Access Journals (Sweden)

    Mohan Prasad

    2015-06-01

    Full Text Available INTRODUCTION: Determination of age after 25 years becomes very difficult, as all teeth will have erupted and the ossification centres fused. Studies have shown that changes in the morphological surface of the pubic symphysis are the most reliable means to estimate age between 20 and 40 years. We estimated age based on the morphology of the pubic symphysis and assessed the reliability of the McKern-Stewart criteria by comparison with actual age. METHODS: A total of 80 male pubic bones from dead bodies in the age group of 18-65 years were studied. Only bodies of known age, without a history of any disease or deformity affecting the bones during their lifetime, were included. Age of the deceased was estimated using the pubic symphysis and compared with actual age. RESULTS: The results show that age can be estimated to an accuracy of ±2 years up to the age of 30 years and ±6 years above 30 years, and is not reliable beyond the fourth decade. CONCLUSION: In our study, we observed that the McKern-Stewart criteria for estimation of age using the pubic symphysis are not reliable beyond 30 years of age. However, caution has to be exercised in interpreting our results, as the sample size is small.

  8. A Comparison of the Approaches of Generalizability Theory and Item Response Theory in Estimating the Reliability of Test Scores for Testlet-Composed Tests

    Science.gov (United States)

    Lee, Guemin; Park, In-Yong

    2012-01-01

    Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…

  9. Reliability

    African Journals Online (AJOL)

    eobe

    In this work, a FORTRAN-based computer program was developed to aid the design of reinforced concrete to Eurocode 2 (EC 2) [1] ... Haldar, A. and Mahadevan, S. Reliability Assessment using Stochastic Finite ...

  10. Global trends in health research and development expenditures--the challenge of making reliable estimates for international comparison.

    Science.gov (United States)

    Young, Alison J; Terry, Robert F; Røttingen, John-Arne; Viergever, Roderik F

    2015-01-24

    Better estimates of changes in the level and structure of national, regional, and global expenditures on health research and development (R&D) are needed as an important source of information for advancing countries' health research policies. However, such estimates are difficult to compile and comparison between countries needs careful calibration. We outline the steps that need to be taken to make reliable estimates of trends in countries' expenditures on health R&D, describe that an ideal approach would involve the use of international sets of deflators and exchange rates that are specific to health R&D activities, and explain which methods should be used given the current absence of such health R&D-specific deflators and exchange rates. Finally, we describe what should be the way forward in improving our ability to make reliable estimates of trends in countries' health R&D expenditures.

  11. Estimation of stature from radiographic measurement of foot dimensions: Truncated foot length may be more reliable than full foot length.

    Science.gov (United States)

    Gwani, Abdullahi Suleiman; Salihu, Abubakar Tijjani; Garba, Isa Sa'idu; Rufa'i, Adamu Ahmad

    2017-02-01

    Foot length has been shown to be a reliable dimension for estimation of stature. However, the phalanges of the foot are very small bones and their length may not be proportional to a person's stature. Thus, we hypothesized that foot length measured excluding the phalanges, the truncated foot length, may be more reliable in stature estimation than full foot length. This study, therefore, aimed at comparing the accuracy of the regression equations derived from the truncated foot length and the full foot length. The study recruited a sample of 32 young adults (16 males and 16 females) aged from 20 to 35 years. Lateral radiographs of the right feet were obtained for each subject in a bilateral standing position while maintaining equal weight on both feet. Standing height of the participants was measured with a stadiometer. Truncated foot length and full foot length were measured on the lateral radiographs of the foot. An independent t-test was used to check for mean differences in the dimensions between genders. Linear regression analysis was used to determine the equations for stature estimation. Intra- and inter-observer reliability were calculated from four precision estimates: absolute technical error of measurement (aTEM), relative technical error of measurement (rTEM), coefficient of reliability (Rr) and coefficient of variation (Cv). All the dimensions measured were significantly larger in males than females. Linear regression equations were derived for estimation of stature using both the truncated foot length and full foot length. The regression equations derived from truncated foot length have a larger correlation coefficient, coefficient of determination, adjusted coefficient of determination, as well as smaller standard error of estimation than those derived from full foot length. All the precision estimates showed that the measurement errors are within acceptable limits. This study suggests that even if the full foot length is available, excluding the phalanges may
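
The precision estimates named in the abstract follow standard anthropometric formulas: aTEM is the root of the summed squared inter-trial differences over 2n, rTEM expresses it as a percentage of the grand mean, and the coefficient of reliability is 1 minus the ratio of aTEM squared to the inter-subject variance. A sketch with invented paired measurements, not the study's radiographic data:

```python
import math

# Technical error of measurement for paired repeated measurements
# (standard anthropometric formulas; example values are invented).

def atem(m1, m2):
    """Absolute TEM: sqrt(sum(d^2) / 2n) over paired measurements."""
    n = len(m1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2)) / (2 * n))

def rtem(m1, m2):
    """Relative TEM as a percentage of the grand mean."""
    grand_mean = (sum(m1) + sum(m2)) / (2 * len(m1))
    return atem(m1, m2) / grand_mean * 100

def coeff_reliability(m1, m2):
    """R = 1 - aTEM^2 / SD^2, with SD^2 the inter-subject variance
    of the averaged measurements."""
    means = [(a + b) / 2 for a, b in zip(m1, m2)]
    mu = sum(means) / len(means)
    var = sum((x - mu) ** 2 for x in means) / (len(means) - 1)
    return 1 - atem(m1, m2) ** 2 / var

first  = [18.2, 19.1, 17.8, 20.3, 18.9]   # cm, first session
second = [18.4, 19.0, 17.9, 20.1, 19.0]   # cm, repeat session
print(round(atem(first, second), 3), round(rtem(first, second), 2))
```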

  12. Development of a foraging model framework to reliably estimate daily food consumption by young fishes

    Science.gov (United States)

    Deslauriers, David; Rosburg, Alex J.; Chipps, Steven R.

    2017-01-01

    We developed a foraging model for young fishes that incorporates handling and digestion rate to estimate daily food consumption. Feeding trials were used to quantify functional feeding response, satiation, and gut evacuation rate. Once parameterized, the foraging model was then applied to evaluate effects of prey type, prey density, water temperature, and fish size on daily feeding rate by age-0 (19–70 mm) pallid sturgeon (Scaphirhynchus albus). Prey consumption was positively related to prey density (for fish >30 mm) and water temperature, but negatively related to prey size and the presence of sand substrate. Model evaluation results revealed good agreement between observed estimates of daily consumption and those predicted by the model (r2 = 0.95). Model simulations showed that fish feeding on Chironomidae or Ephemeroptera larvae were able to gain mass, whereas fish feeding solely on zooplankton lost mass under most conditions. By accounting for satiation and digestive processes in addition to handling time and prey density, the model provides realistic estimates of daily food consumption that can prove useful for evaluating rearing conditions for age-0 fishes.
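
A functional feeding response of the kind parameterized from feeding trials is commonly modelled with the Holling disc equation, in which consumption saturates with prey density as handling time dominates. The sketch below uses that generic form with illustrative parameters; it is not the fitted pallid sturgeon model, which additionally accounts for satiation and gut evacuation:

```python
# Type-II functional response (Holling disc equation): feeding rate
# saturates with prey density as handling time takes over.
# Parameter values are illustrative, not fitted to the study's trials.

def feeding_rate(prey_density, attack_rate, handling_time):
    """Prey consumed per unit time: a*N / (1 + a*h*N)."""
    a, h, n = attack_rate, handling_time, prey_density
    return a * n / (1 + a * h * n)

for n in (10, 100, 1000):
    print(n, round(feeding_rate(n, attack_rate=0.5, handling_time=0.05), 2))
```

Note that as prey density grows, the rate approaches the asymptote 1/handling_time, which is why prey density mattered in the model only above a certain fish size.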

  13. Mixed Matrix Method Provides A Reliable Metabolite Exposure Comparison for Assessment of Metabolites in Safety Testing (MIST).

    Science.gov (United States)

    Takahashi, Ryan H; Khojasteh, Cyrus; Wright, Matthew; Hop, Cornelis E C A; Ma, Shuguang

    2017-11-17

    The regulatory guidances on metabolites in safety testing (MIST) by the US Food and Drug Administration (FDA) and the International Conference on Harmonisation (ICH) describe the necessity to assess exposures of major circulating metabolites in humans at steady state relative to exposures achieved in nonclinical safety studies prior to the initiation of large-scale clinical trials. This comparison can be accomplished by measuring metabolite concentrations in animals and humans with validated bioanalytical methods. However, bioanalysis of metabolites in multiple species and multiple studies is resource intensive and may impact the timelines of clinical studies. A simple, reliable and accurate method has been developed for quantitative assessment of metabolite coverage in preclinical safety species by mixing equal volumes of human plasma with blank plasma of the animal species and vice versa, followed by analysis using LC-SRM or LC-HRMS. Here, we explored the reliability and accuracy of this method in several development projects at Genentech and compared the results to those obtained from validated bioanalytical methods. The mixed-matrix method provided accuracy comparable (within ±20%) to that obtained from validated bioanalysis but does not require authentic standards or radiolabeled compounds, which could translate to time and resource savings in drug development. Quantitative assessment of metabolite coverage in safety species can be made using the mixed matrix method with similar accuracy and scientific rigor to those obtained from validated bioanalytical methods. Moving forward, we are encouraging the industry and regulators to consider accepting the mixed matrix method for assessing metabolite exposure comparisons between humans and animal species used in toxicology studies. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  14. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for the sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust and reliable technique for sexing adult ossa coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
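
The posterior-probability threshold logic can be sketched with a toy univariate Gaussian classifier. DSP2 itself applies linear discriminant analysis to up to ten coxal measurements; the single measurement, class means and shared SD below are invented purely to show how a 0.95 cut-off yields "indeterminate" verdicts:

```python
import math

# Toy posterior-probability sex classification in the spirit of DSP2:
# equal-prior Gaussian class-conditional model, classifying only when
# the posterior exceeds 0.95. All parameter values are hypothetical.

def gauss(x, mu, sd):
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def classify(x, mu_f, mu_m, sd):
    pf, pm = gauss(x, mu_f, sd), gauss(x, mu_m, sd)
    post_m = pm / (pf + pm)        # posterior probability of "male"
    if post_m >= 0.95:
        return "M"
    if post_m <= 0.05:
        return "F"
    return "indeterminate"

# e.g. a pelvic measurement in mm with hypothetical sex-specific means
print(classify(74.0, mu_f=62.0, mu_m=75.0, sd=3.0))
print(classify(68.0, mu_f=62.0, mu_m=75.0, sd=3.0))
```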

  15. Opportunistic citizen science data of animal species produce reliable estimates of distribution trends if analysed with occupancy models

    NARCIS (Netherlands)

    van Strien, A.J.; van Swaay, C.A.M.; Termaat, T.

    2013-01-01

    Many publications documenting large-scale trends in the distribution of species make use of opportunistic citizen data, that is, observations of species collected without standardized field protocol and without explicit sampling design. It is a challenge to achieve reliable estimates of distribution trends.

  16. A Flexible Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Palm, Daniël W.; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational…

  17. Estimation of the critical effect level for pollution prevention based on oyster embryonic development toxicity test: the search for reliability.

    Science.gov (United States)

    da Cruz, A C S; Couto, B C; Nascimento, I A; Pereira, S A; Leite, M B N L; Bertoletti, E; Zagatto, P

    2007-05-01

    In spite of the consideration that toxicity testing is a reduced approach to measure the effects of pollutants on ecosystems, early-life-stage (ELS) tests have evident ecological relevance because they reflect the possible reproductive impairment of natural populations. The procedure and validation of the Crassostrea rhizophorae embryonic development test have shown that it meets the same precision as other U.S. EPA tests, where the EC50 is generally used as a toxicological endpoint. However, the recognition that the EC50 is not the best endpoint to assess contaminant effects led the U.S. EPA to recently suggest the EC25 as an alternative to estimate xenobiotic effects for pollution prevention. To provide reliability to the toxicological test results on C. rhizophorae embryos, the present work aimed to establish the critical effect level for this test organism, based on its reaction to reference toxicants, by using the statistical method proposed by Norberg-King (Inhibition Concentration, version 2.0). Oyster embryos were exposed to graded series of reference toxicants (ZnSO4·7H2O; AgNO3; KCl; CdCl2·H2O; phenol, 4-chlorophenol and sodium dodecyl sulphate). Based on the obtained results, the critical value for the C. rhizophorae embryonic development test was estimated as the EC15. The present research enhances the emerging consensus that ELS test data would be adequate for estimating the chronic safe concentrations of pollutants in receiving waters. Based on recommended criteria and on the results of the present research, zinc sulphate and 4-chlorophenol have been pointed out, among the inorganic and organic compounds tested, as the best reference toxicants for the C. rhizophorae ELS test.
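
The Norberg-King inhibition-concentration (ICp) approach estimates the concentration causing a p% reduction from the control response by linear interpolation between the bracketing test concentrations. A sketch of that interpolation step with invented concentration-response data, not the oyster results:

```python
# Linear-interpolation ICp estimate in the spirit of Norberg-King's
# method: find the concentration causing a p% reduction from the
# control response. Toy concentration-response data, not study values.

def icp(concs, responses, p):
    target = responses[0] * (1 - p / 100.0)  # control assumed at concs[0]
    pairs = list(zip(concs, responses))
    for (c1, r1), (c2, r2) in zip(pairs, pairs[1:]):
        if r1 >= target >= r2:  # target bracketed; interpolate linearly
            return c1 + (r1 - target) * (c2 - c1) / (r1 - r2)
    return None  # target effect level not reached within tested range

concs = [0, 10, 20, 40, 80]          # ug/L (hypothetical toxicant)
resp  = [95, 90, 78, 60, 30]         # % normal embryonic development
print(round(icp(concs, resp, 15), 1))
```

The full ICp program additionally smooths non-monotonic means and bootstraps confidence intervals; the interpolation above is only the core calculation.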

  18. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation.

    Directory of Open Access Journals (Sweden)

    Luigi Capoferri

    Full Text Available Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of isoforms prone to interact with the substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory, to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with a SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units).
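
LIE theory estimates binding free energy as a linear combination of differences in average ligand-surroundings van der Waals and electrostatic interaction energies between the bound and free simulations. The sketch below shows that generic equation; the coefficient values are illustrative defaults from the LIE literature, not the fitted CYP 1A2 model parameters:

```python
# Linear Interaction Energy (LIE) estimate of binding free energy:
# dG = alpha * dV_vdw + beta * dV_el + gamma, where the dV terms are
# bound-minus-free differences in average interaction energies (kJ/mol).
# Coefficients are illustrative, not the paper's fitted values.

def lie_binding_energy(d_vdw, d_el, alpha=0.18, beta=0.5, gamma=0.0):
    return alpha * d_vdw + beta * d_el + gamma

dG = lie_binding_energy(d_vdw=-60.0, d_el=-20.0)  # kJ/mol
print(round(dG, 1))
```

In practice, alpha, beta and gamma are calibrated against experimental affinities for a training set, which is what the CYP 1A2 model development entailed.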

  19. Theoretical basis, application, reliability, and sample size estimates of a Meridian Energy Analysis Device for Traditional Chinese Medicine Research.

    Science.gov (United States)

    Tsai, Ming-Yen; Chen, Shih-Yu; Lin, Chung-Chun

    2017-04-01

    The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. This review draws on original articles indexed in the PubMed database from 1956 to 2014 and on the author's clinical experience. In this short communication, we provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by considering items of considerable variability and by estimating sample size. The Meridian Energy Analysis Device is making a valuable contribution to the diagnosis of Qi-blood dysfunction, which can be assessed from short-term and long-term meridian bioenergy recordings. It is one of the few methods that allow outpatient traditional Chinese medicine diagnosis, monitoring of progress, assessment of therapeutic effect, and evaluation of patient prognosis. The holistic approaches underlying the practice of traditional Chinese medicine and new trends in modern medicine toward the use of objective instruments require in-depth knowledge of the mechanisms of meridian energy, and the Meridian Energy Analysis Device can feasibly be used for understanding and interpreting traditional Chinese medicine theory, especially in view of its expansion in Western countries.

  20. Reliability of different mark-recapture methods for population size estimation tested against reference population sizes constructed from field data.

    Directory of Open Access Journals (Sweden)

    Annegret Grimm

    Full Text Available Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to …
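    For context on the Lincoln-Petersen family of estimators evaluated above, the classic two-sample form and Chapman's bias-corrected variant (standard textbook versions, not the authors' multiple-session modification) can be sketched as:

```python
def lincoln_petersen(marked, captured, recaptured):
    """Classic closed-population estimate N = M * C / R, where M animals
    are marked in session one, C are caught in session two, and R of
    those are recaptures."""
    return marked * captured / recaptured

def chapman(marked, captured, recaptured):
    """Chapman's bias-corrected variant, defined even when R = 0."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

n_lp = lincoln_petersen(50, 40, 10)  # 200.0
n_ch = chapman(50, 40, 10)           # ~189.1
```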

  1. Application of generalizability theory in estimating the reliability of ankle-complex laxity measurement.

    Science.gov (United States)

    Heitman, Robert J; Kovaleski, John E; Pugh, Steven F

    2009-01-01

    Generalizability theory is an appropriate method for determining the reliability of measurements obtained across more than a single facet. In the clinical and research settings, ankle-complex laxity assessment may be performed using different examiners and multiple trials. To determine the reliability of ankle-complex laxity measurements across different examiners and multiple trials using generalizability theory. Correlational study. Laboratory. Forty male university students without a history of ankle injury. Measures of right ankle-complex anteroposterior and inversion-eversion laxity were obtained by 2 examiners. Each examiner performed 2 anteroposterior trials, followed by 2 inversion-eversion trials for each ankle at 0 degrees of ankle flexion. Using generalizability theory, we performed G study and D study analyses. More measurement error was found for facets associated with examiner than with trial for both anteroposterior and inversion-eversion laxity. Inversion-eversion measurement was more reliable than anteroposterior laxity measurement. Although 1 examiner and 1 trial had acceptable reliability (G coefficient ≥ .848), increasing the number of examiners increased reliability to a greater extent than did increasing the number of trials. Within the range of examiner and trial facets studied, any combination of examiners or trials (or both) above 1 can change ankle laxity measurement reliability from acceptable (1 examiner, 1 trial) to highly reliable (3 examiners, 3 trials). Individuals may respond to examiners and their procedural nuances differently; thus, standardized procedures are important.

  2. Moving Beyond Blind Men and Elephants: Providing Total Estimated Annual Costs Improves Health Insurance Decision Making.

    Science.gov (United States)

    Barnes, Andrew J; Hanoch, Yaniv; Rice, Thomas; Long, Sharon K

    2017-10-01

    Health insurance is among the most important financial and health-related decisions that people make. Choosing a health insurance plan that offers sufficient risk protection is difficult, in part because total expected health care costs are not transparent. This study examines the effect of providing total cost estimates on health insurance decisions using a series of hypothetical choice experiments given to 7,648 individuals responding to the fall 2015 Health Reform Monitoring Survey. Participants were given two health scenarios presented in random order asking which of three insurance plans would best meet their needs. Half received total estimated costs, which increased the probability of choosing a cost-minimizing plan by 3.0 to 10.6 percentage points, depending on the scenario (p < .01). With many consumers choosing or failing to switch out of plans that offer insufficient coverage, incorporating insights on consumer decision making with personalized information to estimate costs can improve the quality of health insurance choices.

  3. Low-magnification image analysis of Giemsa stained, electroporation and bleomycin treated endothelial monolayers provides reliable monolayer integrity data.

    Science.gov (United States)

    Meulenberg, Cécil J W; Cemazar, Maja

    2014-06-01

    The aim of this study was to develop an in vitro cell model for studying the in vivo observed vascular effect induced by exposing blood vessels to changing electric field strengths. Human microvascular endothelial cells (HMEC-1) were cultured as monolayers on 8-chamber glass slides as a model of the capillary wall. Exposed to electric pulses alone, or in the presence of bleomycin (electrochemotherapy), monolayers were incubated with culture medium, fixed with methanol, stained with Giemsa, and photographed. Images of high-contrast low-magnification monolayers made under identical optimal light exposure were converted to greyscale, and the use of a threshold tool yielded a binary distribution, from which we determined two parameters of monolayer integrity: the covered surface area and the number of cells. We show that this low-magnification image analysis method for attached endothelial cells provides reliable control parameters of monolayer integrity, representing the capillary wall. Moreover, already within 2 h post-treatment, the data show distinct effects on the monolayer integrity parameters for electric pulses alone or in the presence of bleomycin. The present method can be readily introduced to short- and long-term toxicity assays with a variety of treatment conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Calibrating Climate Change Time-Slice Projections with Estimates of Seasonal Forecast Reliability

    OpenAIRE

    Matsueda, M.; Weisheimer, A.; Palmer, T.N.

    2016-01-01

    In earlier work, it was proposed that the reliability of climate change projections, particularly of regional rainfall, could be improved if such projections were calibrated using quantitative measures of reliability obtained by running the same model in seasonal forecast mode. This proposal is tested for fast atmospheric processes (such as clouds and convection) by considering output from versions of the same atmospheric general circulation model run at two different resolutions and forced w...

  5. Internationalizing the Broselow Tape: How Reliable Is Weight Estimation in Indian Children

    National Research Council Canada - National Science Library

    Ramarajan, Naresh; Krishnamoorthi, Rajesh; Strehlow, Matthew; Quinn, James; Mahadevan, Swaminatha V

    2008-01-01

    Objectives:  The Broselow pediatric emergency weight estimation tape is an accurate method of estimating children's weights based on height-weight correlations and determining standardized medication dosages...

  6. The probability estimation of the electronic lesson implementation taking into account software reliability

    Science.gov (United States)

    Gurov, V. V.

    2017-01-01

    Software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of distinctive features from the point of view of reliability. The main ones are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery by replacing a failed program with a similar running program during class. The article considers the peculiarities of evaluating the reliability of programs, in contrast to assessments of hardware reliability. The basic requirements for the reliability of software used to conduct practical and laboratory classes in the form of computer-based training programs are given. Finally, a mathematical tool based on Markov chains is presented, which uses the graph of interactions between software modules to determine the degree of debugging a training program requires before use in the educational process.
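    The article's exact Markov-chain formulation is not given in the abstract; as a hedged sketch of the general idea, an e-lesson can be modelled as a chain over its software modules with absorbing "done" and "fail" states, where the probability of reaching "done" approximates the probability of faultless operation:

```python
# Hedged sketch (not the article's exact model): propagate state
# probabilities through a module-interaction graph until the mass has
# settled into the absorbing states "done" and "fail".

def p_success(transitions, start="intro", done="done", fail="fail", steps=200):
    """transitions maps state -> {next_state: probability}; returns the
    probability of absorbing in `done`, i.e. a faultless run."""
    probs = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for state, p in probs.items():
            if state in (done, fail):
                nxt[state] = nxt.get(state, 0.0) + p
                continue
            for s2, q in transitions[state].items():
                nxt[s2] = nxt.get(s2, 0.0) + p * q
        probs = nxt
    return probs.get(done, 0.0)

# Toy two-module lesson: each module runs correctly with probability 0.99
chain = {
    "intro": {"test": 0.99, "fail": 0.01},
    "test": {"done": 0.99, "fail": 0.01},
}
p = p_success(chain)  # 0.99 * 0.99 = 0.9801
```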

  7. How Reliable Are Heat Pulse Velocity Methods for Estimating Tree Transpiration?

    Directory of Open Access Journals (Sweden)

    Michael A. Forster

    2017-09-01

    Full Text Available Transpiration is a significant component of the hydrologic cycle and its accurate quantification is critical for modelling, industry, and policy decisions. Sap flow sensors provide a low-cost and practical method to measure transpiration. Various methods to measure sap flow are available, and a popular family of methods is known as heat pulse velocity (HPV). The theory of thermal conductance and convection that underpins HPV methods suggests transpiration can be directly estimated from sensor measurements without the need for laborious calibrations. To test this accuracy, transpiration estimated from HPV sensors is compared with an independent measure of plant water use, such as a weighing lysimeter. A meta-analysis was conducted of the literature that explicitly tested the accuracy of HPV sensors against an independent measure of transpiration. Data from linear regression analyses were collated, where an R2 of 1 indicates perfect precision and a slope of 1 of the linear regression curve indicates perfect accuracy. The average R2 and slope from all studies were 0.822 and 0.860, respectively. However, the overall error, or deviation from real transpiration values, was 34.706%. The results indicate that HPV sensors are precise in correlating heat velocity with rates of transpiration, but poor in quantifying transpiration. Various sources of error in converting heat velocity into sap velocity and sap flow are discussed, including probe misalignment, wound corrections, thermal diffusivity, stem water content, placement of sensors in sapwood, and scaling of point measurements to whole plants. Where whole-plant water use or transpiration is required in a study, it is recommended that all sap flow sensors be calibrated against an independent measure of transpiration.
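    The meta-analysis above pools the slope and R2 of sensor-versus-reference regressions. A minimal sketch of those two statistics for paired measurements (the data values are hypothetical):

```python
def slope_r2(x, y):
    """Ordinary least-squares slope and R^2 for sensor estimates (x)
    against reference transpiration (y); slope = 1 would mean perfect
    accuracy and R^2 = 1 perfect precision, as defined above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sxx, (sxy * sxy) / (sxx * syy)

# Hypothetical sensor vs lysimeter readings (arbitrary units)
s, r2 = slope_r2([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```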

  8. Transcutaneous PTCCO2 measurement in combination with arterial blood gas analysis provides superior accuracy and reliability in ICU patients.

    Science.gov (United States)

    Spelten, Oliver; Fiedler, Fritz; Schier, Robert; Wetsch, Wolfgang A; Hinkelbein, Jochen

    2017-02-01

    Hyper- or hypoventilation may have serious clinical consequences in critically ill patients and should generally be avoided, especially in neurosurgical patients. Therefore, monitoring of carbon dioxide partial pressure by intermittent arterial blood gas analysis (PaCO2) has become standard in intensive care units (ICUs). However, several additional methods are available to determine PCO2, including end-tidal (PETCO2) and transcutaneous (PTCCO2) measurements. The aim of this study was to compare the accuracy and reliability of these methods of determining PCO2 in mechanically ventilated patients in the ICU. After approval of the local ethics committee, PCO2 was determined in n = 32 consecutive ICU patients requiring mechanical ventilation: (1) arterial PaCO2 blood gas analysis with a Radiometer ABL 625 (ABL; gold standard), (2) arterial PaCO2 analysis with an Immediate Response Mobile Analyzer (IRMA), (3) end-tidal PETCO2 by a Propaq 106 EL monitor and (4) transcutaneous PTCCO2 determination by a Tina TCM4. The Bland-Altman method was used for statistical analysis. Regression analysis revealed good correlation between PaCO2 by IRMA and ABL (R2 = 0.766). Bland-Altman analysis revealed a bias and precision of 2.0 ± 3.7 mmHg for the IRMA, 2.2 ± 5.7 mmHg for transcutaneous, and -5.5 ± 5.6 mmHg for end-tidal measurement. Arterial CO2 partial pressure by IRMA (PaCO2) and PTCCO2 provided greater accuracy relative to the reference measurement (ABL) than end-tidal CO2 measurement in critically ill, mechanically ventilated patients.
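    The bias ± precision figures above are the standard Bland-Altman quantities: the mean and standard deviation of the paired differences between a method and the gold standard. A minimal sketch with hypothetical readings:

```python
def bland_altman(method, reference):
    """Bland-Altman bias (mean difference), precision (SD of the
    differences), and 95% limits of agreement (bias ± 1.96 * SD)."""
    diffs = [m - r for m, r in zip(method, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical PCO2 readings (mmHg): candidate method vs gold standard
bias, sd, loa = bland_altman([40, 42, 38, 45], [38, 41, 37, 42])
```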

  9. Estimating the national cost of treating people with HIV disease: patient, payer, and provider data.

    Science.gov (United States)

    Hellinger, F J; Fleishman, J A

    2000-06-01

    Existing estimates of the national cost of treating all people with HIV disease use data from a sample of people with HIV disease to extrapolate the cost of treating all people with HIV disease (the patient-based approach). This study derives estimates using two novel approaches (payer-based and provider-based) and compares them with existing estimates. Data sources include the Health Insurance Association of America and the American Council of Life Insurance 1996 HIV survey, the 1996 State Inpatient Databases (SID) maintained by the Agency for Healthcare Research and Quality, and the IMS America Ltd. survey of independent and chain drugstores. The cost of treating all people with HIV disease in 1996 was between $6.7 and $7.8 billion U.S., and the average annual cost of treating a person with HIV disease was between $20,000 and $24,700 U.S. Analysts should derive estimates of the cost of treating people with HIV disease using several different approaches.

  10. The juvenile face as a suitable age indicator in child pornography cases: a pilot study on the reliability of automated and visual estimation approaches.

    Science.gov (United States)

    Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C

    2014-09-01

    In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly where teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, by non-experts, and by the Demonstrator of purpose-developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered slightly less accurate estimates (MAE 1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.
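    The accuracy measure used above, the mean absolute error between estimated and chronological ages, is simply:

```python
def mean_absolute_error(estimates, true_ages):
    """MAE between estimated and chronological ages; the study above
    reports 1.47 years for the Demonstrator and 1.63 years for experts.
    The ages below are hypothetical."""
    return sum(abs(e - t) for e, t in zip(estimates, true_ages)) / len(estimates)

mae = mean_absolute_error([12.0, 15.5, 17.0], [11.0, 16.0, 18.5])  # 1.0
```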

  11. From network reliability to the Ising model: A parallel scheme for estimating the joint density of states.

    Science.gov (United States)

    Ren, Yihui; Eubank, Stephen; Nath, Madhurima

    2016-10-01

    Network reliability is the probability that a dynamical system composed of discrete elements interacting on a network will be found in a configuration that satisfies a particular property. We introduce a reliability property, Ising feasibility, for which the network reliability is the Ising model's partition function. As shown by Moore and Shannon, the network reliability can be separated into two factors: structural, solely determined by the network topology, and dynamical, determined by the underlying dynamics. In this case, the structural factor is known as the joint density of states. Using methods developed to approximate the structural factor for other reliability properties, we simulate the joint density of states, yielding an approximation for the partition function. Based on a detailed examination of why naïve Monte Carlo sampling gives a poor approximation, we introduce a parallel scheme for estimating the joint density of states using a Markov-chain Monte Carlo method with a spin-exchange random walk. This parallel scheme makes simulating the Ising model in the presence of an external field practical on small computer clusters for networks with arbitrary topology with ~10^6 energy levels and more than 10^308 microstates.
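    Once the density of states has been estimated, the partition function follows by direct summation over energy levels. A toy sketch for a two-spin Ising system (the example system is illustrative, not from the paper):

```python
import math

def partition_function(density_of_states, beta):
    """Z(beta) = sum over energies E of g(E) * exp(-beta * E), where
    g(E) is the density of states estimated by sampling."""
    return sum(g * math.exp(-beta * E) for E, g in density_of_states.items())

# Two coupled Ising spins with J = 1: the two aligned configurations
# have energy -1, the two anti-aligned configurations have energy +1.
dos = {-1.0: 2, 1.0: 2}
Z = partition_function(dos, beta=0.5)  # = 2e^{0.5} + 2e^{-0.5} = 4 cosh(0.5)
```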

  12. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. As hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it anew. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Each component contributes to overall system reliability according to the number of times it is used; this usage frequency, which is critical for some components, determines the weight of each component. According to their weights, the components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. Through ratio analysis, the proposed method determines the most suitable among all possible alternatives, and its dimensionless measures accomplish the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
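    As a hedged sketch of the ratio-analysis core of MOORA (the paper uses a fuzzy extension; the crisp normalisation-and-ranking idea shown here is the same, and the data are invented):

```python
# Crisp MOORA ratio analysis: vector-normalise each criterion column,
# sum benefit criteria and subtract cost criteria, rank by score.

def moora_rank(matrix, benefit):
    """matrix: rows = components, columns = criteria;
    benefit[j] is True when a higher value of criterion j is better."""
    cols = list(zip(*matrix))
    norms = [sum(v * v for v in col) ** 0.5 for col in cols]
    scores = []
    for row in matrix:
        s = sum((v / norms[j]) * (1 if benefit[j] else -1)
                for j, v in enumerate(row))
        scores.append(s)
    ranking = sorted(range(len(matrix)), key=lambda i: -scores[i])
    return ranking, scores

# Toy data: 3 components x 2 criteria (usage frequency up, failure rate down)
ranking, scores = moora_rank([[0.9, 0.02], [0.5, 0.01], [0.7, 0.05]],
                             benefit=[True, False])
```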

  13. Reliability of functional and predictive methods to estimate the hip joint centre in human motion analysis in healthy adults.

    Science.gov (United States)

    Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P

    2017-03-01

    In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on its reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capturing system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method. Copyright © 2017 Elsevier B.V. All rights reserved.
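    The geometric sphere fit rests on the observation that thigh markers rotating about the hip should lie on spheres centred at the HJC. A minimal sketch of one standard way to fit such a sphere (an algebraic least-squares fit; the study's exact implementation is not specified in the abstract):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: rewrite |p|^2 = 2 c.p + t,
    with t = r^2 - |c|^2, as a linear system in (c, t) and solve it."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, t = sol[:3], sol[3]
    radius = np.sqrt(t + centre @ centre)
    return centre, radius

# Synthetic marker positions on a unit sphere centred at (1, 2, 3)
pts = [(2, 2, 3), (0, 2, 3), (1, 3, 3), (1, 1, 3), (1, 2, 4), (1, 2, 2)]
c, r = fit_sphere(pts)
```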

  14. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    Science.gov (United States)

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…

  15. R&D program benefits estimation: DOE Office of Electricity Delivery and Energy Reliability

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2006-12-04

    The overall mission of the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE) is to lead national efforts to modernize the electric grid, enhance the security and reliability of the energy infrastructure, and facilitate recovery from disruptions to the energy supply. In support of this mission, OE conducts a portfolio of research and development (R&D) activities to advance technologies to enhance electric power delivery. Multiple benefits are anticipated to result from the deployment of these technologies, including higher quality and more reliable power, energy savings, and lower cost electricity. In addition, OE engages State and local government decision-makers and the private sector to address issues related to the reliability and security of the grid, including responding to national emergencies that affect energy delivery. The OE R&D activities comprise four R&D lines: High Temperature Superconductivity (HTS), Visualization and Controls (V&C), Energy Storage and Power Electronics (ES&PE), and Distributed Systems Integration (DSI).

  16. A reliability estimation tool for reducing infant mortality in Cubesat missions

    NARCIS (Netherlands)

    Langer, Martin; Weisgerber, Michael; Bouwmeester, J.; Hoehn, Alexander

    2017-01-01

    For many years, traditional satellite design philosophy was dominated by highly reliable components, conservative designs and extensive performance testing at subsystem and integrated system levels to achieve long lifetimes in the harsh space environment. CubeSats attempted to choose a different

  17. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Directory of Open Access Journals (Sweden)

    Annie Haakenstad

    Full Text Available Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies (VolAg). These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of the total DAH provided by private voluntary organizations reported in the VolAg; in 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  18. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    Science.gov (United States)

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
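    The simple IP estimation referred to above derives step length from the vertical excursion of the centre of mass during a step. A minimal sketch using the standard chord formula and the generic correction factor of 1.25 mentioned in the abstract (the input values are illustrative):

```python
import math

def ip_step_length(h, leg_length, correction=1.25):
    """Simple inverted-pendulum step-length estimate from the
    centre-of-mass vertical excursion h (m) and pendulum (leg) length l:
    step ~= correction * 2 * sqrt(2 * l * h - h^2).
    correction=1.25 is the generic factor mentioned above; an
    individually calibrated factor gives better estimates."""
    return correction * 2.0 * math.sqrt(2.0 * leg_length * h - h * h)

# Illustrative values: 3 cm CoM excursion, 0.9 m leg length
step = ip_step_length(h=0.03, leg_length=0.9)  # ~0.58 m
```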

  19. Unbiased Estimators of Ability Parameters, of Their Variance, and of Their Parallel-Forms Reliability.

    Science.gov (United States)

    Lord, Frederic M.

    This paper is primarily concerned with determining the statistical bias in the maximum likelihood estimate of the examinee ability parameter in item response theory, and of certain functions of such parameters. Given known item parameters, unbiased estimators are derived for (1) an examinee's ability parameter and proportion-correct true score;…

  20. RESEARCH AND ESTIMATION OF RELIABILITY SYSTEMS OF TRANSPORT OF RADIONUCLIDES IN LOCAL AGROECOSYSTEM

    Directory of Open Access Journals (Sweden)

    І. Матвєєва

    2011-02-01

    Full Text Available These earlier results to assess the distribution and redistribution of radionuclides in the agroecosystem (for example.v. Galouzeau, Volynskaya region showed a significant dose formation dynamics. For evaluation and prediction of such processes we proposed to use the model and the theory of reliability. For this agroecosystem is considered as a system of transport of radionuclides from the agroecosystem to population. Proposed quantitative evaluation methods of reliability of individual elements of the agroecosystem and the  entire agroecosystem as a whole. This method and the model allowed for a fresh look at the problem of environmental ecological safety and address the problems of protective countermeasures

  1. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation

    Science.gov (United States)

    Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.

    2012-07-01

    Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18% LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  2. Estimating the impact on health of poor reliability of drinking water interventions in developing countries.

    Science.gov (United States)

    Hunter, Paul R; Zmirou-Navier, Denis; Hartemann, Philippe

    2009-04-01

    Recent evidence suggests that many improved drinking water supplies suffer from poor reliability. This study investigates what impact poor reliability may have on achieving health improvement targets. A Quantitative Microbiological Risk Assessment was conducted of the impact of interruptions in water supplies that forced people to revert to drinking raw water. Data from the literature were used to construct models on three waterborne pathogens common in Africa: Rotavirus, Cryptosporidium and Enterotoxigenic E. coli. Risk of infection by the target pathogens is substantially greater on days that people revert to raw water consumption. Over the course of a few days raw water consumption, the annual health benefits attributed to consumption of water from an improved supply will be almost all lost. Furthermore, risk of illness on days drinking raw water will fall substantially on very young children who have the highest risk of death following infection. Agencies responsible for implementing improved drinking water provision will not make meaningful contributions to public health targets if those systems are subject to poor reliability. Funders of water quality interventions in developing countries should put more effort into auditing whether interventions are sustainable and whether the health benefits are being achieved.
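
    The Quantitative Microbiological Risk Assessment described above combines per-day infection risks into an annual risk. Below is a minimal sketch of that kind of calculation, using an exponential dose-response model; `r`, the daily doses, and the outage pattern are illustrative assumptions, not values from the study:

```python
import math

def p_infection(dose, r):
    """Exponential dose-response model: single-exposure infection probability."""
    return 1.0 - math.exp(-r * dose)

def annual_risk(daily_doses, r):
    """Annual infection risk assuming independent daily exposures."""
    p_clear = 1.0
    for d in daily_doses:
        p_clear *= 1.0 - p_infection(d, r)
    return 1.0 - p_clear

# Hypothetical numbers: 360 days on the improved supply (low dose)
# plus 5 outage days reverting to raw water (high dose).
r = 0.005
risk_reliable = annual_risk([0.01] * 365, r)
risk_outages = annual_risk([0.01] * 360 + [10.0] * 5, r)
print(risk_reliable, risk_outages)
```

Even a few high-dose outage days dominate the annual risk, which is the qualitative point of the record above.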

  3. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
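
    Internal-reliability figures like the Cronbach's alpha reported above can be computed directly from item-level scores. A minimal sketch (toy data, not the BREALD-30 items):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores holds one list per item, each over
    the same respondents."""
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in item_scores) / var(totals))

# Toy data: three items answered by four respondents.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [2, 2, 3, 5]]
print(round(cronbach_alpha(items), 3))
```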

  4. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Directory of Open Access Journals (Sweden)

    Monica C Junkes

    Full Text Available The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  5. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake

    DEFF Research Database (Denmark)

    Huang, Liping; Crino, Michelle; Wu, Jason Hy

    2016-01-01

    BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, health status, and timing of spot urine collection will be explored. The capacity of spot urine samples to measure change in salt intake over time will also be determined. Finally, we aim to develop a novel equation (or equations) that performs better than existing equations to estimate mean population salt intake. METHODS...

  6. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen a very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie
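
    The solver/checker structure described above can be illustrated with a toy linear system: an iterative solver produces approximations, and an independent checker verifies the residual. A minimal sketch (Jacobi iteration on a hypothetical diagonally dominant system):

```python
def jacobi_solve(A, b, tol=1e-12, max_iter=10_000):
    """Solver block: Jacobi iteration for a diagonally dominant system."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

def verify(A, x, b, tol=1e-9):
    """Checker block: bound the residual independently of the solver."""
    residual = max(abs(sum(A[i][j] * x[j] for j in range(len(x))) - b[i])
                   for i in range(len(b)))
    return residual < tol

A, b = [[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]
x = jacobi_solve(A, b)
print(x, verify(A, x, b))
```

Separating the checker from the solver means the accuracy claim does not depend on the solver's own convergence test.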

  7. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...... and the heave acceleration. Resonance excitation, parametric roll and forced roll are all included in the model, albeit with some simplifications. The result is the mean out-crossing rate of the roll angle together with the corresponding most probable wave scenarios (critical wave episodes), leading to user...

  8. Smoking duration alone provides stronger risk estimates of chronic obstructive pulmonary disease than pack-years.

    Science.gov (United States)

    Bhatt, Surya P; Kim, Young-Il; Harrington, Kathy F; Hokanson, John E; Lutz, Sharon M; Cho, Michael H; DeMeo, Dawn L; Wells, James M; Make, Barry J; Rennard, Stephen I; Washko, George R; Foreman, Marilyn G; Tashkin, Donald P; Wise, Robert A; Dransfield, Mark T; Bailey, William C

    2018-01-11

    Cigarette smoking is the strongest risk factor for COPD. Smoking burden is frequently measured in pack-years, but the relative contribution of cigarettes smoked per day versus duration towards the development of structural lung disease, airflow obstruction and functional outcomes is not known. We analysed cross-sectional data from a large multicentre cohort (COPDGene) of current and former smokers. Primary outcome was airflow obstruction (FEV1/FVC); secondary outcomes included five additional measures of disease: FEV1, CT emphysema, CT gas trapping, functional capacity (6 min walk distance, 6MWD) and respiratory morbidity (St George's Respiratory Questionnaire, SGRQ). Generalised linear models were estimated to compare the relative contribution of each smoking variable with the outcomes, after adjustment for age, race, sex, body mass index, CT scanner, centre, age of smoking onset and current smoking status. We also estimated adjusted means of each outcome by categories of pack-years and combined groups of categorised smoking duration and cigarettes/day, and estimated linear trends of adjusted means for each outcome by categorised cigarettes/day, smoking duration and pack-years. 10 187 subjects were included. For FEV1/FVC, the standardised beta coefficient for smoking duration was greater than for cigarettes/day and pack-years (P<0.001). After categorisation, there was a linear decline in adjusted mean FEV1/FVC with increasing pack-years (regression coefficient β=-0.023±SE 0.003; P=0.003) and duration over all ranges of cigarettes/day (β=-0.041±0.004; P<0.001), but a relatively flat slope for cigarettes/day across all ranges of smoking duration (β=-0.009±0.009; P=0.34). The strength of association of duration was similarly greater than pack-years for emphysema, gas trapping, FEV1, 6MWD and SGRQ. Smoking duration alone provides stronger risk estimates of COPD than the composite index of pack-years. Post-results; NCT00608764. © Article author
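
    The composite nature of pack-years, which is the crux of the finding above, is easy to see in code: two hypothetical smokers with very different durations can have identical pack-years, yet the study finds duration carries the stronger risk signal.

```python
def pack_years(cigs_per_day, years_smoked):
    """Pack-years collapse intensity and duration into a single number
    (one pack = 20 cigarettes)."""
    return cigs_per_day / 20.0 * years_smoked

# Hypothetical smokers: identical pack-years, very different durations.
light_long = pack_years(10, 40)   # 10 cigarettes/day for 40 years
heavy_short = pack_years(40, 10)  # 40 cigarettes/day for 10 years
print(light_long, heavy_short)
```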

  9. Inaccuracy and bias in adult skeletal age estimation: Assessing the reliability of eight methods on individuals of varying body sizes.

    Science.gov (United States)

    Merritt, Catherine E

    2017-06-01

    Accurate age estimations are essential for identifying human skeletal remains and narrowing missing persons searches. This study examines how BMI, body mass, and stature influence inaccuracy and bias in adult skeletal age estimations obtained using eight methods. 746 skeletons from the Hamann-Todd and William Bass Collections were used. Underweight BMI, light body mass, and short-stature individuals have the most error associated with their age estimates and are consistently under-aged by 3 to 13 years. Obese BMI, heavy body mass, and tall-stature individuals are consistently over-aged by 3 to 8.5 years. The most reliable methods for smaller-bodied individuals are Kunos et al. (first rib) and Buckberry-Chamberlain (auricular surface); for individuals in the average range, İşcan et al. (fourth ribs) and Passalacqua (sacrum); and for larger-bodied individuals, İşcan et al., Passalacqua, and Rougé-Maillart et al. (auricular surface and acetabulum). Lovejoy et al. (auricular surface) and Suchey-Brooks (pubic symphysis) produce consistent inaccuracy and bias scores across all body size groups. The least reliable method for smaller-bodied individuals is İşcan et al.; for larger-bodied individuals, Buckberry-Chamberlain; and across all body size groups, DiGangi et al. (first rib). Copyright © 2017 Elsevier B.V. All rights reserved.
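
    Inaccuracy and bias as used above are the mean absolute and mean signed errors of the age estimates: inaccuracy measures overall error size, while a negative bias indicates systematic under-ageing. A minimal sketch with hypothetical ages:

```python
def inaccuracy(estimated, actual):
    """Mean absolute error of the age estimates (years)."""
    return sum(abs(e - a) for e, a in zip(estimated, actual)) / len(actual)

def bias(estimated, actual):
    """Mean signed error: negative values indicate systematic under-ageing."""
    return sum(e - a for e, a in zip(estimated, actual)) / len(actual)

# Hypothetical under-aged group, as reported for smaller-bodied individuals.
est, act = [38, 45, 52], [45, 52, 64]
print(inaccuracy(est, act), bias(est, act))
```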

  10. Algorithm for post-clustering curation of DNA amplicon data yields reliable biodiversity estimates

    DEFF Research Database (Denmark)

    Frøslev, Tobias Guldberg; Kjøller, Rasmus; Bruun, Hans Henrik

    2017-01-01

    soil from 130 sites in Denmark spanning major environmental gradients. OTU tables are produced with several different OTU definition algorithms and subsequently curated with LULU, and validated against field survey data. LULU curation consistently improves α-diversity estimates and other biodiversity...

  11. Reliability of estimating the room volume from a single room impulse response

    NARCIS (Netherlands)

    Kuster, M.

    2008-01-01

    The methods investigated for the room volume estimation are based on geometrical acoustics, eigenmode, and diffuse field models and no data other than the room impulse response are available. The measurements include several receiver positions in a total of 12 rooms of vastly different sizes and

  12. Systematic review of survival time in experimental mouse stroke with impact on reliability of infarct estimation

    DEFF Research Database (Denmark)

    Klarskov, Carina Kirstine; Klarskov, Mikkel Buster; Hasseldam, Henrik

    2016-01-01

    and a comparison of correction methods will be discussed. Method: Relevant terms were searched in the National Library of Medicine PubMed database. A method for classification of infarct measurement methods was made using a naming convention. Conclusion: Our study shows that infarct size estimations are often...

  13. Limits to the reliability of size-based fishing status estimation for data-poor stocks

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders

    2015-01-01

    in more than 60% of the cases, and almost always correctly assess whether a stock is subject to overfishing. Adding information about age, i.e., assuming that growth rate and asymptotic size are known, does not improve the estimation. Only knowledge of the ratio between mortality and growth led...

  14. Reliability-based weighting of visual and vestibular cues in displacement estimation

    NARCIS (Netherlands)

    Horst, A.C. ter; Koppen, M.G.M.; Selen, L.P.J.; Medendorp, W.P.

    2015-01-01

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading

  15. A reduced factor structure for the PROQOL-HIV questionnaire provided reliable indicators of health-related quality of life.

    Science.gov (United States)

    Lalanne, Christophe; Chassany, Olivier; Carrieri, Patrizia; Marcellin, Fabienne; Armstrong, Andrew R; Lert, France; Spire, Bruno; Dray-Spira, Rosemary; Duracinsky, Martin

    2016-04-01

    To identify a simplified factor structure for the PROQOL-human immunodeficiency virus (HIV) questionnaire to improve the measurement of the health-related quality of life (HRQL) of HIV-positive patients in clinical care and research settings. HRQL data were collected using the eight-dimension PROQOL-HIV questionnaire from 2,537 patients (VESPA2 study). Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) validated a simpler four-factor structure and assessed measurement invariance (MI). Multigroup analysis assessed the effect of sex, age, and antiretroviral therapy (ART) on the resulting factor scores. Correlations with symptom and Short Form (SF)-12 self-reports assessed convergent validity. Item analysis, EFA, and CFAs confirmed the validity [comparative fit index (CFI), 0.948; root mean square error of approximation, 0.064] and reliability (α's ≥ 0.8) of four dimensions: physical health and symptoms, health concerns and mental distress, social and intimate relationships, and treatment-related impact. Strong MI was demonstrated across sex and age (decrease in the CFI indicator), and the multigroup model indicated that HRQL correlated as expected with sex, age, and ART status. Correlations of HRQL, symptom reports, and SF-12 scores evidenced convergent validity. The simplified factor structure and scoring scheme for PROQOL-HIV will allow clinicians to monitor with greater reliability the HRQL of patients in clinical care and research settings. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Narrow-band imaging provides reliable screening for esophageal malignancy in patients with head and neck cancers.

    Science.gov (United States)

    Takenaka, Ryuta; Kawahara, Yoshiro; Okada, Hiroyuki; Hori, Keisuke; Inoue, Masafumi; Kawano, Seiji; Tanioka, Daisuke; Tsuzuki, Takao; Uemura, Masayuki; Ohara, Nobuya; Tominaga, Susumu; Onoda, Tomoo; Yamamoto, Kazuhide

    2009-12-01

    The narrow-band imaging (NBI) system is a novel technology that enhances the visualization of microvasculature and mucosal patterns. The aim of this study was to assess the reliability of the NBI system for esophageal cancer screening in patients with head and neck cancers. A total of 142 patients with head and neck squamous cell carcinoma (SCC) were examined by NBI endoscopy, followed by Lugol chromoendoscopy between April 2006 and June 2008 at the Okayama University Hospital, Okayama, Japan. Detection of SCC and high-grade intraepithelial neoplasia (HGIN) was conducted. The median age of the patients was 64 years (range: 29-86 years), and approximately three-fourths of all the patients were male. In total, 21 superficial lesions in 16 patients were detected by NBI endoscopy. Of these, 4 lesions were diagnosed histologically as SCC and 11 lesions as HGIN. An additional 22 Lugol-voiding lesions ≥5 mm were detected in 19 patients by Lugol chromoendoscopy. Although 1 of these lesions was diagnosed as HGIN, 21 lesions were diagnosed as low-grade intraepithelial neoplasia or lesions without atypical findings. The sensitivity of NBI endoscopy for detecting esophageal SCC and HGIN was 90.9% (95% confidence interval (CI), 58.7-99.8), specificity was 95.4% (95% CI, 90.3-98.3), and accuracy was 95.1% (95% CI, 90.1-98.0). NBI seems to be useful and reliable for screening for esophageal SCC in patients with head and neck cancers.

  17. Minimum sampling effort for reliable non-invasive estimations of excretion abundance of Elaphostrongylus cervi L1 in red deer (Cervus elaphus) populations.

    Science.gov (United States)

    Tayce, J; Acevedo, P; Vicente, J; Gortazar, C

    2008-09-01

    Faecal surveys are commonly used as a non-invasive means to evaluate the population abundance of animals, as well as to derive comparable indexes of prevalence and intensity of diseases between populations, especially for macroparasites. While faecal surveys are among the simplest means to perform these evaluations, they are time-consuming and labour-intensive. The present study evaluated 80 red deer (Cervus elaphus) faecal samples collected in two study sites for the presence and abundance of first-stage larvae of the nematode Elaphostrongylus cervi and established pools of samples for epidemiological analysis. The analysis of 20-30 individual samples would produce a reliable estimate of the 'true' prevalence, and the error of the smaller sample size only doubled that of the 80 reference samples. The analysis of 5 pools of 5 pellet groups each, or of 4 pools of 10 pellet groups each, provided a reliable estimation of the 'true' excretion abundance estimated by the analysis of all 80 samples. These pools comprise a cost-effective and dependable alternative for measuring and obtaining comparable abundances of parasitic faecal excretion stages in red deer populations, which is very valuable for epidemiological and ecological research as well as for management purposes.
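
    When samples are pooled as above, individual-level prevalence can still be recovered under simple assumptions: a pool tests positive if and only if at least one member is positive, and the test is perfect. A sketch of that standard back-calculation, with hypothetical pool counts (not the study's data):

```python
def pooled_prevalence(positive_pools, total_pools, pool_size):
    """Back-calculate individual-level prevalence p from pooled results:
    P(pool negative) = (1 - p)^pool_size, so p = 1 - (1 - q)^(1/k),
    where q is the observed fraction of positive pools."""
    q = positive_pools / total_pools
    if q >= 1.0:
        return 1.0
    return 1.0 - (1.0 - q) ** (1.0 / pool_size)

# Hypothetical result: 3 positive pools out of 5, each pooling 5 pellet groups.
print(round(pooled_prevalence(3, 5, 5), 3))
```

For a fixed fraction of positive pools, larger pools imply a lower individual-level prevalence, which the formula reflects.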

  18. Recent advances of VADASE to enhance reliability and accuracy of real-time displacements estimation

    Science.gov (United States)

    Savastano, Giorgio; Fratarcangeli, Francesca; Chiara D'Achille, Maria; Mazzoni, Augusto; Crespi, Mattia

    2017-04-01

    VADASE (Variometric Approach for Displacements Analysis Stand-alone Engine) is a relatively new processing approach (2011), able to estimate in real time velocities and displacements in a global reference frame (ITRF), using high-rate (1 Hz or more) carrier phase observations and broadcast products (orbits, clocks) collected by a stand-alone GNSS receiver, achieving an accuracy within 1-2 centimetres (usually better) over intervals up to a few minutes. VADASE was originally developed within GNSS seismology, but it has also been applied to structural monitoring. It has been known from the very beginning that VADASE displacements can be impacted by two different effects: spurious spikes in the velocities due to outliers (in this case the displacements, obtained through velocity integration, are severely corrupted), and trends in the displacements (mainly due to broadcast orbit and clock errors). Moreover, for applications to earthquakes (seismic inversion), it is quite useful to estimate in real time the so-called coseismic displacement. This displacement could in theory also be estimated in post-processing mode, using GNSS data collected over suitably long intervals before and after the earthquake; however, in the case of strong earthquakes (for which VADASE can give significant contributions even quite close to the epicentre, since GNSS does not clip), a significant number of strong aftershocks usually follow the main shock in a short time, so that it may be very difficult to select the mentioned long data intervals. These three issues (outliers in velocity, trends in displacements and real-time coseismic displacements) were addressed in recent advances of VADASE. Two strategies were introduced, respectively based on leave-one-out cross-validation (VADASE-LOO) for receiver-autonomous outlier detection, and on a network augmentation strategy to filter common trends out (A-VADASE); they can be combined (first VADASE-LOO, then A-VADASE) for a complete solution.
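
    The first failure mode described above (a velocity spike corrupting the integrated displacement) can be illustrated with a toy despiking filter; this is only a stand-in for the actual VADASE-LOO screening, and all velocity values are hypothetical:

```python
def integrate(velocities, dt):
    """Displacements from epoch-wise velocities (cumulative sum)."""
    disp, total = [], 0.0
    for v in velocities:
        total += v * dt
        disp.append(total)
    return disp

def despike(velocities, threshold):
    """Replace samples far from the local (3-point) median; a crude
    stand-in for receiver-autonomous outlier screening."""
    out = list(velocities)
    for i in range(1, len(velocities) - 1):
        med = sorted(velocities[i - 1:i + 2])[1]
        if abs(velocities[i] - med) > threshold:
            out[i] = med
    return out

v = [0.001] * 50          # metres/second, hypothetical
v[20] = 0.5               # a single spurious velocity spike
raw = integrate(v, dt=1.0)[-1]
clean = integrate(despike(v, 0.05), dt=1.0)[-1]
print(raw, clean)
```

A single bad epoch shifts every subsequent displacement, which is why outlier screening must happen before integration.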

  19. Do Choice Experiments Generate Reliable Willingness to Pay Estimates? Theory and Experimental Evidence

    Science.gov (United States)

    2015-01-01

    paper we set up a three-stage experimental and theoretical framework to investigate strategic behaviour and design-induced status quo bias in choice...enable researchers to estimate the potential economic benefits from the good or program being valued as well as consumers' willingness to pay (WTP) for...various attributes of the good or program. Initially applied to problems in marketing (Louviere and Woodworth 1983), the methodology has been

  20. Comparative reliability analysis of publicly available software packages for automatic intracranial volume estimation.

    Science.gov (United States)

    Sargolzaei, S; Goryawala, M; Cabrerizo, M; Chen, G; Jayakar, P; Duara, R; Barker, W; Adjouadi, M

    2014-01-01

    Intracranial volume (ICV) is an important measure in brain research, often used as a correction factor in inter-subject studies. The current study investigates how the resulting outcome depends on the type of software used for automatically estimating the ICV measure. Five groups comprising 70 subjects in total are considered, including adult controls (AC) (n=11), adults with dementia (AD) (n=11), pediatric controls (PC) (n=18) and two groups of pediatric epilepsy subjects (PE1.5 and PE3) (n=30) using 1.5 T and 3 T scanners, respectively. Reference measurements were calculated for each subject by manually tracing the intracranial cavity without sub-sampling. Four publicly available software packages (AFNI, Freesurfer, FSL, and SPM) were examined in their ability to automatically estimate ICV across the five groups. Linear regression analyses suggest that the discrepancy from the reference measurements could be explained best by SPM [R(2)=0.67]. The choice of software for ICV estimation is dependent on the population under consideration and on whether the software used is atlas-based or not.

  1. Compensated Transfer Entropy as a Tool for Reliably Estimating Information Transfer in Physiological Time Series

    Directory of Open Access Journals (Sweden)

    Alberto Porta

    2013-01-01

    Full Text Available We present a framework for the estimation of transfer entropy (TE) under the conditions typical of physiological system analysis, featuring short multivariate time series and the presence of instantaneous causality (IC). The framework is based on recognizing that TE can be interpreted as the difference between two conditional entropy (CE) terms, and builds on an efficient CE estimator that compensates for the bias occurring for high-dimensional conditioning vectors and follows a sequential embedding procedure whereby the conditioning vectors are formed progressively according to a criterion for CE minimization. The issue of IC is faced by accounting for zero-lag interactions according to two alternative empirical strategies: if IC is deemed physiologically meaningful, zero-lag effects are assimilated to lagged effects to make them causally relevant; if not, zero-lag effects are incorporated in both CE terms to obtain a compensation. The resulting compensated TE (cTE) estimator is tested on simulated time series, showing that its utilization improves sensitivity (from 61% to 96%) and specificity (from 5/6 to 0/6 false positives) in the detection of information transfer when instantaneous effects are causally meaningful and non-meaningful, respectively. The estimator is then evaluated on examples of cardiovascular and neurological time series, supporting the feasibility of the proposed framework for the investigation of physiological mechanisms.
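
    The decomposition of TE into a difference of conditional entropy terms, as used above, can be sketched with a plain plug-in estimator for binary series with embedding length 1; this is a deliberate simplification of the compensated estimator described in the record:

```python
import math
import random
from collections import Counter

def entropy(seq):
    """Plug-in Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def transfer_entropy(x, y):
    """TE from x to y with embedding length 1, written as a difference of
    conditional entropies: H(y_t | y_{t-1}) - H(y_t | y_{t-1}, x_{t-1})."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    return (entropy(list(zip(yt, yp))) - entropy(yp)
            - entropy(list(zip(yt, yp, xp))) + entropy(list(zip(yp, xp))))

random.seed(1)
x = [random.randint(0, 1) for _ in range(4000)]
y = [0] + x[:-1]                    # y is a one-step delayed copy of x
print(round(transfer_entropy(x, y), 2))   # close to 1 bit
```

The plug-in estimator is biased for short series and high-dimensional conditioning, which is exactly the problem the compensated estimator above is designed to mitigate.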

  2. Derelict Fishing Line Provides a Useful Proxy for Estimating Levels of Non-Compliance with No-Take Marine Reserves

    OpenAIRE

    Williamson, David H.; Ceccarelli, Daniela M.; Evans, Richard D.; Hill, Jos K.; Russ, Garry R.

    2014-01-01

    No-take marine reserves (NTMRs) are increasingly being established to conserve or restore biodiversity and to enhance the sustainability of fisheries. Although effectively designed and protected NTMR networks can yield conservation and fishery benefits, reserve effects often fail to manifest in systems where there are high levels of non-compliance by fishers (poaching). Obtaining reliable estimates of NTMR non-compliance can be expensive and logistically challenging, particularly in areas wit...

  3. Data Transformation for Confidence Interval Improvement: An Application to the Estimation of Stress-Strength Model Reliability

    Directory of Open Access Journals (Sweden)

    Alessandro Barbiero

    2014-01-01

    Full Text Available In many statistical applications, it is often necessary to obtain an interval estimate for an unknown proportion or probability or, more generally, for a parameter whose natural space is the unit interval. The customary approximate two-sided confidence interval for such a parameter, based on some version of the central limit theorem, is known to be unsatisfactory when its true value is close to zero or one or when the sample size is small. A possible way to tackle this issue is the transformation of the data through a proper function that is able to make the approximation to the normal distribution less coarse. In this paper, we study the application of several of these transformations to the context of the estimation of the reliability parameter for stress-strength models, with a special focus on the Poisson distribution. From this work, some practical hints emerge on which transformations more efficiently improve standard confidence intervals, and in which scenarios.
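
    The effect of such a transformation is easy to demonstrate on a proportion near one: the untransformed (Wald) interval spills outside [0, 1], while an interval built on the logit scale and mapped back stays inside the unit interval. A minimal sketch with hypothetical data (not one of the paper's stress-strength scenarios):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Customary normal-approximation interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def logit_ci(p_hat, n, z=1.96):
    """Interval built on the logit scale, then mapped back to (0, 1)."""
    logit = math.log(p_hat / (1 - p_hat))
    se = math.sqrt(1.0 / (n * p_hat * (1 - p_hat)))   # delta-method SE
    inv = lambda t: 1.0 / (1.0 + math.exp(-t))
    return (inv(logit - z * se), inv(logit + z * se))

# Hypothetical sample: 19 successes out of 20.
print(wald_ci(0.95, 20))   # upper bound exceeds 1
print(logit_ci(0.95, 20))  # both bounds inside (0, 1)
```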

  4. Heterogeneity of inter-rater reliabilities of grant peer reviews and its determinants: a general estimating equations approach.

    Directory of Open Access Journals (Sweden)

    Rüdiger Mutz

    Full Text Available BACKGROUND: One of the most important weaknesses of the peer review process is that different reviewers' ratings of the same grant proposal typically differ. Studies on the inter-rater reliability of peer reviews mostly report only average values across all submitted proposals. But inter-rater reliabilities can vary depending on the scientific discipline or the requested grant sum, for instance. GOAL: Taking the Austrian Science Fund (FWF) as an example, we aimed to investigate empirically the heterogeneity of inter-rater reliabilities (intraclass correlation) and its determinants. METHODS: The data consisted of N = 8,329 proposals with N = 23,414 overall ratings by reviewers, which were statistically analyzed using the generalized estimating equations approach (GEE). RESULTS: We found an overall intraclass correlation (ICC) of reviewers' ratings of ρ = .259 with a 95% confidence interval of [.249, .279]. In the humanities the ICCs were statistically significantly higher than in all other research areas except the technical sciences. The ICC in the biosciences deviated statistically significantly from the average ICC. Other factors (besides the research areas), such as the grant sum requested, had negligible influence on the ICC. CONCLUSIONS: Especially in the biosciences, the number of reviewers of each proposal should be increased so as to increase the ICC.

  5. Heterogeneity of inter-rater reliabilities of grant peer reviews and its determinants: a general estimating equations approach.

    Science.gov (United States)

    Mutz, Rüdiger; Bornmann, Lutz; Daniel, Hans-Dieter

    2012-01-01

    One of the most important weaknesses of the peer review process is that different reviewers' ratings of the same grant proposal typically differ. Studies on the inter-rater reliability of peer reviews mostly report only average values across all submitted proposals. But inter-rater reliabilities can vary depending on the scientific discipline or the requested grant sum, for instance. Taking the Austrian Science Fund (FWF) as an example, we aimed to investigate empirically the heterogeneity of inter-rater reliabilities (intraclass correlation) and its determinants. The data consisted of N = 8,329 proposals with N = 23,414 overall ratings by reviewers, which were statistically analyzed using the generalized estimating equations approach (GEE). We found an overall intraclass correlation (ICC) of reviewers' ratings of ρ = .259 with a 95% confidence interval of [.249, .279]. In humanities the ICCs were statistically significantly higher than in all other research areas except technical sciences. The ICC in biosciences deviated statistically significantly from the average ICC. Other factors (besides the research areas), such as the grant sum requested, had negligible influence on the ICC. Especially in biosciences, the number of reviewers of each proposal should be increased so as to increase the ICC.
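
    The intraclass correlation reported above can be illustrated with the simplest one-way random-effects form, ICC(1,1), computed from a proposals-by-raters table; the study itself uses a GEE approach, so this is only the basic quantity, and the toy ratings below are hypothetical:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) from a proposals x raters table."""
    k = len(ratings[0])               # raters per proposal
    n = len(ratings)                  # proposals
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)    # between
    msw = sum((x - row_means[i]) ** 2
              for i, r in enumerate(ratings) for x in r) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical table: four proposals, each rated by two reviewers.
table = [[2, 3], [4, 4], [1, 2], [5, 4]]
print(round(icc_oneway(table), 2))  # → 0.82
```

An ICC near the study's ρ = .259 would indicate far weaker agreement than this toy table shows.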

  6. Model-Based Heterogeneous Data Fusion for Reliable Force Estimation in Dynamic Structures under Uncertainties.

    Science.gov (United States)

    Khodabandeloo, Babak; Melvin, Dyan; Jo, Hongki

    2017-11-17

    Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires the dynamic model and the measured responses of structure at only a few locations. But, the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, the absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach to reconstruct dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with AKF is investigated. The way of incorporating multi-sensor measurements in the AKF is formulated. Then the formulation is implemented and validated through numerical examples considering possible uncertainties in numerical modeling and sensor measurement. A planar truss example was chosen to clearly explain the formulation, while the method and formulation are applicable to other structures as well.
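
    A minimal sketch of the augmented-state idea described above (not the paper's formulation): a 1-DOF spring-mass-damper with an unknown constant force appended to the state as a random walk; measuring displacement rather than acceleration keeps the filter stable, mirroring the motivation in the record. All parameter values are hypothetical:

```python
import random

# Hypothetical 1-DOF system: m*x'' + c*x' + k*x = f, with f unknown.
m, c, k = 1.0, 0.4, 5.0
dt, steps, f_true = 0.01, 3000, 1.0

# Forward-Euler transition for the augmented state s = [x, v, f];
# the unknown force f is modelled as a random walk (last row).
A = [[1.0,        dt,            0.0],
     [-dt * k/m,  1.0 - dt*c/m,  dt/m],
     [0.0,        0.0,           1.0]]
Qd = [1e-10, 1e-10, 1e-4]   # diagonal process noise; f is allowed to drift
R = 1e-6                    # displacement measurement noise variance

def advance(s):
    return [sum(A[i][j] * s[j] for j in range(3)) for i in range(3)]

random.seed(0)
truth = [0.0, 0.0, f_true]
s = [0.0, 0.0, 0.0]         # the filter starts with no knowledge of f
P = [[float(i == j) for j in range(3)] for i in range(3)]

for _ in range(steps):
    truth = advance(truth)
    z = truth[0] + random.gauss(0.0, R ** 0.5)   # noisy displacement sample
    # Predict: s = A s,  P = A P A^T + Q.
    s = advance(s)
    AP = [[sum(A[i][l] * P[l][j] for l in range(3)) for j in range(3)]
          for i in range(3)]
    P = [[sum(AP[i][l] * A[j][l] for l in range(3)) + (Qd[i] if i == j else 0.0)
          for j in range(3)] for i in range(3)]
    # Update with the scalar measurement (H = [1, 0, 0]), so no matrix inverse.
    S = P[0][0] + R
    K = [P[i][0] / S for i in range(3)]
    s = [s[i] + K[i] * (z - s[0]) for i in range(3)]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]

print(round(s[2], 2))  # estimated force, near f_true
```

With only acceleration measurements the augmented force state is not driven back by any integral of position, which is the instability the record describes; adding strain or displacement restores observability.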

  7. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese, perform its cross-cultural adaptation, and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724

  8. Estimation of immunization providers' activities cost, medication cost, and immunization dose errors cost in Iraq.

    Science.gov (United States)

    Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y

    2012-06-06

    The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important of these are interventions related to immunization providers. The aim of this study is to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. The micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary per minute by the activity time in minutes. A total of 528 immunization cards of Iraqi children were scanned to determine the number and cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. The measles vaccine (fifth dose) had a lower cost (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$, and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. The physician total cost was higher than the registrar and nurse costs. The total immunization cost will increase by about 13.3% owing to dose errors. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. MEMS reliability

    CERN Document Server

    Hartzell, Allyson L; Shea, Herbert R

    2010-01-01

    This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMs for reliability and provides detailed information on the different types of failure modes and how to avoid them.

  10. Online Reliable Peak Charge/Discharge Power Estimation of Series-Connected Lithium-Ion Battery Packs

    Directory of Open Access Journals (Sweden)

    Bo Jiang

    2017-03-01

    Full Text Available The accurate peak power estimation of a battery pack is essential to the power-train control of electric vehicles (EVs. It helps to evaluate the maximum charge and discharge capability of the battery system, and thus to optimally control the power-train system to meet the requirement of acceleration, gradient climbing and regenerative braking while achieving a high energy efficiency. A novel online peak power estimation method for series-connected lithium-ion battery packs is proposed, which considers the influence of cell difference on the peak power of the battery packs. A new parameter identification algorithm based on adaptive ratio vectors is designed to online identify the parameters of each individual cell in a series-connected battery pack. The ratio vectors reflecting cell difference are deduced strictly based on the analysis of battery characteristics. Based on the online parameter identification, the peak power estimation considering cell difference is further developed. Some validation experiments in different battery aging conditions and with different current profiles have been implemented to verify the proposed method. The results indicate that the ratio vector-based identification algorithm can achieve the same accuracy as the repetitive RLS (recursive least squares based identification while evidently reducing the computation cost, and the proposed peak power estimation method is more effective and reliable for series-connected battery packs due to the consideration of cell difference.
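
    The baseline the abstract compares against is recursive least squares (RLS) parameter identification. A minimal RLS sketch with a forgetting factor, identifying a toy two-parameter linear model from streaming data (the model and all values are illustrative assumptions, not the paper's cell model):

```python
# Minimal recursive least squares (RLS) with forgetting factor lam,
# identifying y = theta1*phi1 + theta2*phi2 from noisy streaming samples.
import random

random.seed(1)
theta_true = [0.05, 0.8]            # e.g., an ohmic-resistance-like and an
                                    # OCV-like term (illustrative values)
P = [[1e3, 0.0], [0.0, 1e3]]        # covariance, initialized large
theta = [0.0, 0.0]                  # parameter estimates
lam = 0.99                          # forgetting factor

for _ in range(500):
    phi = [random.uniform(-1, 1), random.uniform(-1, 1)]   # regressor
    y = theta_true[0] * phi[0] + theta_true[1] * phi[1]
    y += random.gauss(0, 1e-3)                             # sensor noise
    # Gain: K = P*phi / (lam + phi' P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # Covariance update: P = (P - K * phi' P) / lam
    P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
         [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]

print([round(v, 3) for v in theta])   # converges toward theta_true
```

    The abstract's point is that repeating this recursion for every cell in a series pack is expensive; the proposed ratio vectors identify one cell fully and scale the rest.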

  11. Validity and reliability of the minimum basic data set in estimating nosocomial acute gastroenteritis caused by rotavirus

    Directory of Open Access Journals (Sweden)

    Olga Redondo-González

    2015-03-01

    Full Text Available Introduction: Rotavirus is the principal cause of nosocomial acute gastroenteritis (NAGE) in children under 5 years of age. The objective is to evaluate the validity and reliability of the minimum basic data set (MBDS) in estimating the NAGE caused by rotavirus (NAGER) and to analyze any changes during the three years that the Rotarix® and Rotateq® vaccines were used in Spain. Material and methods: A descriptive, retrospective study was carried out in the University Hospital of Guadalajara (UHG), Spain, between 2003-2009 using the MBDS, positive microbiological results for rotavirus (PMRs), and medical histories. Three methods of estimation were used: (1) an ICD-9-CM code 008.61 in the secondary diagnosis fields (DIAG2) of the MBDS; (2) method 1 and/or PMRs with a current or recent hospitalization; and (3) the reference method, or method 2 contrasted with patient medical histories. The validity of methods 1 and 2 was determined (sensitivity, specificity, predictive values and likelihood ratios (LRs)), along with their agreement with method 3 (Kappa coefficient). In addition, the incidence rate ratio between the NAGER rate in 2007-2009 (commercialization period of both vaccines) and that in 2003-2005 (pre-commercialization period) was calculated. Results: Method 1 identified 65 records with a DIAG2 of 008.61. Method 2 found 62 probable cases, and the reference method, 49 true cases. The sensitivity of the MBDS was 67 %, the positive predictive value was 51 %, and both the negative LR (LR-) and the reliability were moderate (LR- 0.33, Kappa coefficient 0.58). During 2007-2009, the NAGER decreased by 5 cases per 10³ hospitalizations and by 9 per 10⁴ days of hospitalization. Method 2 overestimated both the decline in incidence, by 2 per 10³ hospitalizations, and the decrease in risk per day of stay, by 10 %. The MBDS found no differences between the two three-year periods, but, like method 2, showed an excellent level of diagnostic evidence (LR+ 67). Conclusion: The MBDS taken together with

  12. CO₂-recycling by plants: how reliable is the carbon isotope estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Siegwolf, R.T.W.; Saurer, M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Koerner, C. [Basel Univ., Basel (Switzerland)

    1997-06-01

    In the study of plant carbon relations, the amount of respiratory losses from the soil was estimated by determining the gradient of the stable isotope ¹³C with increasing plant canopy height. According to the literature, 8-26% of the CO₂ released in forests by soil and plant respiratory processes is reassimilated (recycled) by photosynthesis during the day. Our own measurements, however, which we conducted in grassland, showed diverging results, ranging from no indication of carbon recycling to a considerable δ¹³C gradient suggesting a high carbon recycling rate. The role of other factors that also influence the δ¹³C in a canopy, such as air humidity and irradiation, is discussed. (author) 3 figs., 4 refs.

  13. Report: Eleven Years After Agreement, EPA Has Not Developed Reliable Emission Estimation Methods to Determine Whether Animal Feeding Operations Comply With Clean Air Act and Other Statutes

    Science.gov (United States)

    Report #17-P-0396, September 19, 2017. Until the EPA develops sound methods to estimate emissions, the agency cannot reliably determine whether animal feeding operations comply with applicable Clean Air Act requirements.

  14. 49 CFR 375.405 - How must I provide a non-binding estimate?

    Science.gov (United States)

    2010-10-01

    ... TRANSPORTATION OF HOUSEHOLD GOODS IN INTERSTATE COMMERCE; CONSUMER PROTECTION REGULATIONS Estimating Charges...-binding estimate as an attachment to be made an integral part of the bill of lading contract. (5) You must... services are necessary to properly service a shipment after the bill of lading has been issued, you must...

  15. 49 CFR 375.403 - How must I provide a binding estimate?

    Science.gov (United States)

    2010-10-01

    ... TRANSPORTATION OF HOUSEHOLD GOODS IN INTERSTATE COMMERCE; CONSUMER PROTECTION REGULATIONS Estimating Charges... integral part of the bill of lading contract. (4) You must clearly indicate upon each binding estimate's... service a shipment after the bill of lading has been issued, you must inform the individual shipper what...

  16. Bayesian Regression and Neuro-Fuzzy Methods Reliability Assessment for Estimating Streamflow

    Directory of Open Access Journals (Sweden)

    Yaseen A. Hamaamin

    2016-07-01

    Full Text Available Accurate and efficient estimation of streamflow in a watershed's tributaries is a prerequisite for viable water resources management. This study couples process-driven and data-driven methods of streamflow forecasting as a more efficient and cost-effective approach to water resources planning and management. Two data-driven methods, Bayesian regression and adaptive neuro-fuzzy inference system (ANFIS), were tested separately as a faster alternative to a calibrated and validated Soil and Water Assessment Tool (SWAT) model to predict streamflow in the Saginaw River Watershed of Michigan. For the data-driven modeling process, four structures were assumed and tested: general, temporal, spatial, and spatiotemporal. Results showed that both Bayesian regression and ANFIS can replicate global (watershed) and local (subbasin) results similar to those of a calibrated SWAT model. At the global level, Bayesian regression and ANFIS model performance were satisfactory based on Nash-Sutcliffe efficiencies of 0.99 and 0.97, respectively. At the subbasin level, the Bayesian regression and ANFIS models were satisfactory for 155 and 151 out of 155 subbasins, respectively. Overall, the most accurate method was a spatiotemporal Bayesian regression model that outperformed the other models at global and local scales. However, all ANFIS models also performed satisfactorily at both scales.
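
    The abstract's performance criterion, the Nash-Sutcliffe efficiency (NSE), is NSE = 1 - SSE/Var(obs): 1 is a perfect fit, 0 means the model is no better than predicting the observed mean. A minimal sketch with illustrative flow values:

```python
# Nash-Sutcliffe efficiency of simulated vs observed streamflow.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / variance

obs = [10.0, 12.0, 15.0, 11.0, 9.0]   # illustrative daily flows
sim = [10.5, 11.5, 14.0, 11.5, 9.5]
print(round(nse(obs, sim), 3))   # → 0.906
```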

  17. A practical approach for calculating reliable cost estimates from observational data: application to cost analyses in maternal and child health.

    Science.gov (United States)

    Salemi, Jason L; Comins, Meg M; Chandler, Kristen; Mogos, Mulubrhan F; Salihu, Hamisu M

    2013-08-01

    Comparative effectiveness research (CER) and cost-effectiveness analysis are valuable tools for informing health policy and clinical care decisions. Despite the increased availability of rich observational databases with economic measures, few researchers have the skills needed to conduct valid and reliable cost analyses for CER. The objectives of this paper are to (i) describe a practical approach for calculating cost estimates from hospital charges in discharge data using publicly available hospital cost reports, and (ii) assess the impact of using different methods for cost estimation in maternal and child health (MCH) studies by conducting economic analyses on gestational diabetes (GDM) and pre-pregnancy overweight/obesity. In Florida, we have constructed a clinically enhanced, longitudinal, encounter-level MCH database covering over 2.3 million infants (and their mothers) born alive from 1998 to 2009. Using this as a template, we describe a detailed methodology to use publicly available data to calculate hospital-wide and department-specific cost-to-charge ratios (CCRs), link them to the master database, and convert reported hospital charges to refined cost estimates. We then conduct an economic analysis as a case study on women by GDM and pre-pregnancy body mass index (BMI) status to compare the impact of using different methods on cost estimation. Over 60 % of inpatient charges for birth hospitalizations came from the nursery/labor/delivery units, which have a very different cost-to-charge markup (CCR = 0.70) from the commonly substituted hospital average (CCR = 0.29). Using estimated mean, per-person maternal hospitalization costs for women with GDM as an example, unadjusted charges ($US14,696) grossly overestimated actual cost, compared with hospital-wide ($US3,498) and department-level ($US4,986) CCR adjustments. However, the refined cost estimation method, although more accurate, did not alter our conclusions that infant/maternal hospitalization costs
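
    The core of the described approach is converting reported charges to costs with department-specific cost-to-charge ratios (CCRs) rather than one hospital-wide ratio. A sketch with invented figures (echoing, but not reproducing, the abstract's CCR values):

```python
# Charge-to-cost conversion: cost = charge * CCR, done per department
# versus with a single hospital-wide ratio. All figures are illustrative.
charges_by_dept = {
    "nursery_labor_delivery": 9000.0,
    "pharmacy": 3000.0,
    "laboratory": 2000.0,
}
dept_ccr = {
    "nursery_labor_delivery": 0.70,   # high ratio, as in the abstract
    "pharmacy": 0.25,
    "laboratory": 0.30,
}
hospital_wide_ccr = 0.29

dept_cost = sum(ch * dept_ccr[d] for d, ch in charges_by_dept.items())
naive_cost = sum(charges_by_dept.values()) * hospital_wide_ccr
print(dept_cost, naive_cost)   # department-level vs hospital-wide estimate
```

    Because the dominant department here has a much higher CCR than the hospital average, the hospital-wide shortcut understates the cost, mirroring the paper's finding for birth hospitalizations.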

  18. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from FORM, are discussed. Illustrative examples involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
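
    The variance-reduction principle behind the Girsanov-based schemes (shift the sampling distribution toward the failure region, then reweight by the likelihood ratio) can be shown on a static toy problem. This illustrates importance sampling only, not the paper's control-force construction for dynamical systems:

```python
# Importance sampling for a small failure probability P(X > beta), X ~ N(0,1),
# using a proposal centred at the failure boundary and likelihood-ratio
# reweighting. Compare against crude Monte Carlo and the exact tail.
import math, random

random.seed(42)
beta = 3.0       # "distance to failure", in the spirit of FORM
N = 20000

def likelihood_ratio(x, shift):
    # pdf of N(0,1) at x divided by pdf of N(shift,1) at x
    return math.exp(-x * x / 2 + (x - shift) ** 2 / 2)

# Crude Monte Carlo: almost no samples land in the failure region
crude = sum(random.gauss(0, 1) > beta for _ in range(N)) / N

# Importance sampling with the proposal shifted to the failure boundary
weights = []
for _ in range(N):
    x = random.gauss(beta, 1)
    weights.append(likelihood_ratio(x, beta) if x > beta else 0.0)
is_est = sum(weights) / N

exact = 0.5 * math.erfc(beta / math.sqrt(2))   # 1 - Phi(3), about 1.35e-3
print(crude, is_est, exact)
```

    With the same sample budget, the reweighted estimate is accurate to a few percent while the crude estimate rests on a handful of hits; Girsanov's transformation plays the role of the shifted proposal for stochastic dynamics.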

  19. Determination of conditions for reliability of travel parameters estimation in a network using “floating” cars

    Science.gov (United States)

    Kocherga, Victor; Topilin, Ivan; Volodina, Marina

    2017-10-01

    Based on the analysis of the experimental data, the article shows that if a travel length in the area under study is more than 7 km (about 20 minutes of movement), it becomes possible to obtain stable relationships between the specific parameters of traffic in the network. A further increase in the duration of observations does not change the trip parameters or the value of the correlation ratio for the model. The corresponding results are achieved with a constant cruising of five "floating" cars in the area under study. Moreover, it is shown that reliable estimates of the traffic parameters on the network are obtained both for fixed routes of "floating" cars and for randomly chosen routes.

  20. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  1. Intraclass reliability for assessing how well Taiwan constrained hospital-provided medical services using statistical process control chart techniques

    Directory of Open Access Journals (Sweden)

    Chien Tsair-Wei

    2012-05-01

    Full Text Available Abstract Background Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC to assess how well Taiwan constrained hospital-provided medical services in such a system. Methods A custom Excel-VBA routine to record the distances of standard deviations (SDs from the central line (the mean over the previous 12 months of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan’s year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. Results ICCs were generated for Taiwan’s year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. Conclusion We recommend using the ICC to annually assess a nation’s year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system.
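
    The Excel-VBA routine described above scales each month's expenditure by its distance in standard deviations from the centre line, i.e. the mean of the previous 12 months. A minimal sketch with made-up monthly figures (not Taiwan NHI data):

```python
# Control-chart scaling: each month's value expressed in SDs from the mean of
# the trailing 12-month window.
import statistics

monthly_expenditure = [100, 102, 98, 101, 99, 103, 97, 100, 104, 96,
                       101, 99, 115, 100]   # month 13 spikes

def sd_distances(series, window=12):
    out = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        centre = statistics.mean(past)
        sd = statistics.stdev(past)
        out.append((series[i] - centre) / sd)
    return out

zs = sd_distances(monthly_expenditure)
print([round(z, 2) for z in zs])   # month 13 is a clear outlier
```

    Collecting these SD distances per hospital per year is what feeds the ICC used in the paper; points far from zero are the "bubbles" flagged for investigation.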

  2. Protein Based Molecular Markers Provide Reliable Means to Understand Prokaryotic Phylogeny and Support Darwinian Mode of Evolution

    Directory of Open Access Journals (Sweden)

    Vaibhav eBhandari

    2012-07-01

    Full Text Available The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning whether the Darwinian model of evolution is applicable to the prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs and conserved signature proteins (CSPs for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs initially likely occurred in the common ancestors of these taxa and then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence has no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical

  3. Intraclass reliability for assessing how well Taiwan constrained hospital-provided medical services using statistical process control chart techniques.

    Science.gov (United States)

    Chien, Tsair-Wei; Chou, Ming-Ting; Wang, Wen-Chung; Tsai, Li-Shu; Lin, Weir-Sen

    2012-05-15

    Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan's year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. ICCs were generated for Taiwan's year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. We recommend using the ICC to annually assess a nation's year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system.

  4. Trunk-acceleration based assessment of gait parameters in older persons : A comparison of reliability and validity of four inverted pendulum based estimations

    NARCIS (Netherlands)

    Zijlstra, Agnes; Zijlstra, Wiebren

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability

  5. Bioelectrical impedance analysis to estimate body composition in children and adolescents: a systematic review and evidence appraisal of validity, responsiveness, reliability and measurement error

    NARCIS (Netherlands)

    Talma, H.; Chinapaw, M.J.M.; Bakker, B.; Hirasing, R.A.; Terwee, C.B.; Altenburg, T.M.

    2013-01-01

    Bioelectrical impedance analysis (BIA) is a practical method to estimate percentage body fat (%BF). In this systematic review, we aimed to assess the validity, responsiveness, reliability and measurement error of BIA methods in estimating %BF in children and adolescents. We searched for relevant studies

  6. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    Science.gov (United States)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems, each consisting of M statistically independent and identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated by using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
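
    A Monte Carlo cross-check of this kind of system reliability is straightforward to sketch. The snippet below samples the generalized half-logistic distribution by inverse-CDF, taking F(x) = 1 - (2e^(-x)/(1+e^(-x)))^k as the CDF, and uses a deliberately simplified survival rule (the system survives if at least one of the N subsystems withstands its stress); all parameter values are illustrative, not the paper's estimators:

```python
# Monte Carlo estimate of a simplified N-M cold-standby stress-strength
# reliability with generalized half-logistic strength and stress.
import math, random

random.seed(7)

def gen_half_logistic(shape):
    # Inverse-CDF sampling for F(x) = 1 - (2*exp(-x)/(1+exp(-x)))**shape
    u = random.random()
    w = (1.0 - u) ** (1.0 / shape)
    return math.log((2.0 - w) / w)

def system_survives(n_standby, m_components, k_strength, k_stress):
    # Each subsystem faces one stress draw; it survives if at least one of
    # its M components is stronger. The system survives if any subsystem does.
    for _ in range(n_standby):
        stress = gen_half_logistic(k_stress)
        strengths = [gen_half_logistic(k_strength) for _ in range(m_components)]
        if max(strengths) > stress:
            return True
    return False

trials = 20000
R = sum(system_survives(3, 2, 2.0, 1.0) for _ in range(trials)) / trials
print(round(R, 3))
```

    For these shapes the per-component win probability reduces to k_stress/(k_strength + k_stress), which makes the simulated value easy to sanity-check analytically.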

  7. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
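
    The calculator's final step, the Spearman-Brown prophecy formula, predicts the reliability of the mean of k ratings from a single-rating reliability r:

```python
# Spearman-Brown prophecy formula: r_k = k*r / (1 + (k-1)*r)
def spearman_brown(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

# e.g. a single-judge reliability of 0.40 rises to ~0.67 with 3 judges
print(round(spearman_brown(0.40, 3), 3))   # → 0.667
```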

  8. Ankle joint range of motion measurements in spastic cerebral palsy children: intraobserver and interobserver reliability and reproducibility of goniometry and visual estimation.

    Science.gov (United States)

    Allington, Nanni J; Leroy, Nathalie; Doneux, Carole

    2002-07-01

    The aim of this study was to assess the intra- and interobserver reliability and reproducibility of goniometry and visual estimation of ankle joint range of motion measurements in children with spastic cerebral palsy. Forty-six ankles of 24 spastic cerebral palsy children were measured under a strict protocol. The global mean measurement error was 5 degrees (SD, 5 degrees) for intra- and interobserver measurements and 3 degrees (SD, 3 degrees) for goniometry versus visual estimation. Statistical analysis showed a high reliability for intra- and interobserver measurements (r>0.75), between visual estimation and goniometry (correlation coefficient, r>0.967; concordance coefficient, r>0.957). Both visual estimation and goniometry ankle range-of-motion measurements are reliable and reproducible in spastic cerebral palsy children if a strict but simple protocol is applied.

  9. How reliable are satellite precipitation estimates for driving hydrological models: a verification study over the Mediterranean area

    Science.gov (United States)

    Camici, Stefania; Ciabatta, Luca; Massari, Christian; Brocca, Luca

    2017-04-01

    , TMPA 3B42-RT, CMORPH, PERSIANN and a new soil moisture-derived rainfall dataset obtained through the application of the SM2RAIN algorithm (Brocca et al., 2014) to the ASCAT (Advanced SCATterometer) soil moisture product are used in the analysis. The performance obtained with SRPs is compared with that obtained by using ground data during the 6-year period from 2010 to 2015. In addition, the performance obtained by an integration of the above-mentioned SRPs is also investigated, to see whether merged rainfall observations are able to improve flood simulation. Preliminary analyses were also carried out by using the IMERG early run product of the GPM mission. The results highlight that SRPs should be used with caution for rainfall-runoff modelling in the Mediterranean region. Bias correction and model recalibration are necessary steps, even though they are not always sufficient to achieve satisfactory performance. Indeed, some of the products provide unreliable outcomes, mainly in smaller basins. References: Brocca et al. (2014), Journal of Geophysical Research, 119(9), 5128-5141, doi:10.1002/2014JD021489; Masseroni, D., Cislaghi, A., Camici, S., Massari, C., Brocca, L. (2017). A reliable rainfall-runoff model for flood forecasting: review and application to a semiurbanized watershed at high flood risk in Italy. Hydrology Research, in press, doi:10.2166/nh.2016.037.

  10. Assessing the impact of uncertainty on flood risk estimates with reliability analysis using 1-D and 2-D hydraulic models

    Directory of Open Access Journals (Sweden)

    L. Altarejos-García

    2012-07-01

    Full Text Available This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for obtaining estimates of the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define the flood severity, a concept used for decision-making in the context of flood risk assessment. The proposed method is particularly useful when the complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as the water depth and velocity is demonstrated for a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing probability estimates of flood severity. The method is then applied to the same river reach using a 2-D Shallow Water Equations (SWE) model. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and uncertainty in the extension of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it remains the preferred approach.
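Rosenblueth's PEM replaces Monte Carlo sampling with a weighted evaluation of the performance function at a handful of points: for n uncorrelated, symmetrically distributed inputs, the function is evaluated at the 2^n corner points mu_i ± sigma_i, each with weight 2^-n. A minimal sketch of that idea (function and input values are placeholders; in the paper the performance function would wrap a full 1-D HEC-RAS or 2-D SWE simulation):

```python
import itertools

import numpy as np

def rosenblueth_pem(g, means, stds):
    """Rosenblueth's two-point estimate method for uncorrelated, symmetrically
    distributed inputs: evaluate g at the 2**n corner points mu_i +/- sigma_i,
    each with weight 1/2**n, and form the first two moments of the output."""
    n = len(means)
    ys = []
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        point = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        ys.append(g(point))
    ys = np.asarray(ys, dtype=float)
    mean = ys.mean()                      # E[Y] ~ equal-weighted corner average
    var = (ys ** 2).mean() - mean ** 2    # Var[Y] ~ E[Y^2] - E[Y]^2
    return mean, var
```

For 2 inputs this costs 4 model runs instead of thousands of Monte Carlo samples, which is the trade-off the abstract describes.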

  11. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  12. A treatment schedule of conventional physical therapy provided to enhance upper limb sensorimotor recovery after stroke: expert criterion validity and intra-rater reliability.

    Science.gov (United States)

    Donaldson, Catherine; Tallis, Raymond C; Pomeroy, Valerie M

    2009-06-01

    Inadequate description of treatment hampers progress in stroke rehabilitation. The aim was to develop a valid, reliable, standardised treatment schedule of conventional physical therapy provided for the paretic upper limb after stroke. Eleven neurophysiotherapists participated in the established methodology: semi-structured interviews, focus groups and piloting of a draft treatment schedule in clinical practice. Different physiotherapists (n=13) used the treatment schedule to record treatment given to stroke patients with mild, moderate and severe upper limb paresis. Adequacy of the treatment schedule was rated on a visual analogue scale (0 to 100 mm). Mean (95% confidence interval) visual analogue scores were calculated (expert criterion validity). For intra-rater reliability, each physiotherapist observed a videotape of their treatment and immediately completed a treatment schedule recording form on two separate occasions, 4 to 6 weeks apart. The Kappa statistic was calculated for intra-rater reliability. The treatment schedule consists of a one-page A4 recording form and a user booklet detailing 50 treatment activities. Expert criterion validity was 79 (95% confidence interval 74 to 84). Intra-rater Kappa was 0.81 (P<0.001). This treatment schedule can be used to document conventional physical therapy in subsequent clinical trials in the geographical area of its development. Further work is needed to investigate generalisability beyond this geographical area.

  13. Comparison of different glomerular filtration methods in the elderly: which formula provides better estimates?

    Science.gov (United States)

    Aras, Sevgi; Varli, Murat; Uzun, Burcu; Atli, Teslime; Keven, Kenan; Turgay, Murat

    2012-01-01

    Technetium-99m diethylenetriaminepentaacetic acid ((99m)Tc-DTPA) is an ideal radioisotopic method, having a high correlation with inulin clearance, for the determination of glomerular filtration rate (GFR). Different formulas, such as creatinine clearance (CrCl) in 24 h urine samples, the Cockcroft-Gault formula (CGF), and modification of diet in renal disease (MDRD), are used to obtain an estimate. In this study, we compared (99m)Tc-DTPA with the formulas mentioned above in an attempt to identify the method that would yield the most nearly ideal GFR estimates in the elderly. In 76 patients who were admitted to our clinic, we measured 24 h urine volume (V), urine creatinine (Ucr), and serum creatinine (Scr) levels together with CrCl, Scr, serum urea (Su), and albumin (Alb) levels. By using coefficients identified for age, gender, and race, we calculated modification of diet in renal disease 1 (MDRD1). We also calculated modification of diet in renal disease 2 (MDRD2), which, unlike MDRD1, does not include the Su and Alb parameters, and formulas like CGF that include Scr, age, gender, and weight parameters. All patients underwent the (99m)Tc-DTPA procedure. The mean of the GFR values measured by (99m)Tc-DTPA was 54.3 ± 19.9. The means of the GFR values calculated by CrCl, MDRD1, MDRD2, and CGF were 58.0 ± 30.5, 60.9 ± 22.1, 54.4 ± 20.1, and 57.9 ± 22.4, respectively. GFR as measured by (99m)Tc-DTPA showed statistically significant correlations with the results of the other methods (p < 0.001 for all methods). The most significant correlation was with MDRD1. MDRD1 can be used for accurate, near-ideal prediction of GFR in the elderly in daily practice.
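For reference, the creatinine-based estimators compared above have compact closed forms. The sketch below gives the widely published Cockcroft-Gault and 4-variable MDRD equations; these are the standard textbook forms, not necessarily the exact coefficient sets used in this study (whose MDRD1 is the 6-variable variant with serum urea and albumin, not reproduced here), and the function names are illustrative:

```python
def cockcroft_gault_crcl(scr_mg_dl, age_yr, weight_kg, female):
    """Cockcroft-Gault creatinine clearance (mL/min), standard published form:
    ((140 - age) * weight) / (72 * Scr), times 0.85 for women."""
    crcl = (140.0 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd4_gfr(scr_mg_dl, age_yr, female, black=False):
    """4-variable MDRD estimate (mL/min/1.73 m^2), IDMS-traceable 175 form:
    175 * Scr^-1.154 * age^-0.203, times 0.742 (female) and 1.212 (black)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr
```

For a 70-year-old, 70 kg man with Scr = 1.0 mg/dL, Cockcroft-Gault gives about 68 mL/min and 4-variable MDRD about 74 mL/min/1.73 m^2, illustrating the formula-to-formula spread the study quantifies against (99m)Tc-DTPA.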

  14. Social Networks and the Diffusion of Adolescent Problem Behavior: Reliable Estimates of Selection and Influence from Sixth Through Ninth Grades.

    Science.gov (United States)

    Osgood, D Wayne; Feinberg, Mark E; Ragan, Daniel T

    2015-08-01

    Seeking to reduce problematic peer influence is a prominent theme of programs to prevent adolescent problem behavior. To support the refinement of this aspect of prevention programming, we examined peer influence and selection processes for three problem behaviors (delinquency, alcohol use, and smoking). We assessed not only the overall strengths of these peer processes, but also their consistency versus variability across settings. We used dynamic stochastic actor-based models to analyze five waves of friendship network data across sixth through ninth grades for a large sample of U.S. adolescents. Our sample included two successive grade cohorts of youth in 26 school districts participating in the PROSPER study, yielding 51 longitudinal social networks based on respondents' friendship nominations. For all three self-reported antisocial behaviors, we found evidence of both peer influence and selection processes tied to antisocial behavior. There was little reliable variance in these processes across the networks, suggesting that the statistical imprecision of the peer influence and selection estimates in previous studies likely accounts for inconsistencies in results. Adolescent friendship networks play a strong role in shaping problem behavior, but problem behaviors also inform friendship choices. In addition to preferring friends with similar levels of problem behavior, adolescents tend to choose friends who engage in problem behaviors, thus creating broader diffusion.

  15. A case study review of technical and technology issues for transition of a utility load management program to provide system reliability resources in restructured electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Weller, G.H.

    2001-07-15

    Utility load management programs--including direct load control and interruptible load programs--were employed by utilities in the past as system reliability resources. With electricity industry restructuring, the context for these programs has changed; the market that was once controlled by vertically integrated utilities has become competitive, raising the question: can existing load management programs be modified so that they can effectively participate in competitive energy markets? In the short run, modified and/or improved operation of load management programs may be the most effective form of demand-side response available to the electricity system today. However, in light of recent technological advances in metering, communication, and load control, utility load management programs must be carefully reviewed in order to determine appropriate investments to support this transition. This report investigates the feasibility of and options for modifying an existing utility load management system so that it might provide reliability services (i.e. ancillary services) in the competitive markets that have resulted from electricity industry restructuring. The report is a case study of Southern California Edison's (SCE) load management programs. SCE was chosen because it operates one of the largest load management programs in the country and it operates them within a competitive wholesale electricity market. The report describes a wide range of existing and soon-to-be-available communication, control, and metering technologies that could be used to facilitate the evolution of SCE's load management programs and systems to provision of reliability services. The fundamental finding of this report is that, with modifications, SCE's load management infrastructure could be transitioned to provide critical ancillary services in competitive electricity markets, employing currently or soon-to-be available load control technologies.

  16. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia.

    Science.gov (United States)

    Makurat, Jan; Pillai, Aarati; Wieringa, Frank T; Chamnan, Chhoun; Krawinkel, Michael B

    2017-07-21

    The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs) or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC), iron, vitamin A (VitA), folate and vitamin B12 (VitB12). On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA). The sets contained a high mean share of VitC (159% RDA), VitA (66% RDA), and folate (44% RDA), but were low in VitB12 (29% RDA) and iron (20% RDA). Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.

  17. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia

    Directory of Open Access Journals (Sweden)

    Jan Makurat

    2017-07-01

    Full Text Available Background: The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Methods: Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs) or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC), iron, vitamin A (VitA), folate and vitamin B12 (VitB12). Results: On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA). The sets contained a high mean share of VitC (159% RDA), VitA (66% RDA), and folate (44% RDA), but were low in VitB12 (29% RDA) and iron (20% RDA). Conclusions: Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.

  18. Spectral resolution in conjunction with polar plots improves the accuracy and reliability of FLIM measurements and estimates of FRET efficiency.

    Science.gov (United States)

    Chen, Y-C; Clegg, R M

    2011-10-01

    A spectrograph with continuous wavelength resolution has been integrated into a frequency-domain fluorescence lifetime-resolved imaging microscope (FLIM). The spectral information assists in the separation of multiple lifetime components, and helps resolve signal cross-talk that can interfere with an accurate analysis of multiple lifetime processes. This extends the number of different dyes that can be measured simultaneously in a FLIM measurement. Spectrally resolved FLIM (spectral-FLIM) also provides a means to measure more accurately the lifetime of a dim fluorescence component (as low as 2% of the total intensity) in the presence of another fluorescence component with a much higher intensity. A more reliable separation of the donor and acceptor fluorescence signals is possible for Förster resonance energy transfer (FRET) measurements; this allows more accurate determinations of both donor and acceptor lifetimes. By combining the polar plot analysis with spectral-FLIM data, the spectral dispersion of the acceptor signal can be used to derive the donor lifetime - and thereby the FRET efficiency - without iterative fitting. The lifetime relation between the donor and acceptor, in conjunction with spectral dispersion, is also used to separate the FRET pair signals from the donor-alone signal. This method can be applied further to quantify the signals from separate FRET pairs, and provide information on the dynamics of the FRET pair between different states. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.

  19. Improved power-law estimates from multiple samples provided by millennium climate simulations

    Science.gov (United States)

    Henriksson, S. V.; Räisänen, P.; Silen, J.; Järvinen, H.; Laaksonen, A.

    2015-02-01

    Using the long annual mean temperature time series provided by millennium Earth System Model simulations and a method of discrete Fourier transform with varying starting point and length of time window together with averaging, we get good fits to power laws between two characteristic oscillatory timescales of the model climate: multidecadal (50-80 years) and El Niño (3-6 years) timescales. For global mean temperature, we fit β ≈ 0.35 in a relation S(f) ∝ f^(-β) in a simulation without external climate forcing and β over 0.7 in a simulation with external forcing included. The power law is found both with and without external forcing despite the forcings, e.g. the volcanic forcing, not showing similar behaviour, indicating a nonlinear temperature response to time-varying forcing. We also fit a power law with β ≈ 8 to the narrow frequency range between El Niño frequencies (up to 1/(3.2 years)) and the Nyquist frequency (1/(2 years)). Also, monthly mean temperature time series are considered and a decent power-law fit for frequencies above 1/year is obtained. Regional variability in the best-fit β is explored, and the impact of choosing the frequency range on the result is illustrated. When all resolved frequencies are used, land areas seem to have lower βs than ocean areas on average, but when fits are restricted to frequencies below 1/(6 years), this difference disappears, while regional differences still remain. Results compare well with measurements both for global mean temperature and for the central England temperature record.
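Fitting S(f) ∝ f^(-β) reduces to a straight-line fit to the periodogram in log-log space, with β read off as the negative of the slope. A minimal sketch of that step (the paper's averaging over varying window start points and lengths is omitted, and `fit_beta` is an illustrative name):

```python
import numpy as np

def fit_beta(x, dt=1.0, fmin=None, fmax=None):
    """Estimate the spectral slope beta in S(f) ~ f**(-beta) from a straight-line
    fit to the periodogram in log-log space, optionally restricted to [fmin, fmax]."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]      # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2        # raw periodogram
    mask = np.ones(freqs.shape, dtype=bool)
    if fmin is not None:
        mask &= freqs >= fmin
    if fmax is not None:
        mask &= freqs <= fmax
    slope, _intercept = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
    return -slope
```

The `fmin`/`fmax` bounds correspond to the frequency-range choices the abstract shows can change the fitted β (e.g. restricting to frequencies below 1/(6 years)).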

  20. Methods of Estimating the Reliability and Increasing the Informativeness of Laboratory Results (Analysis of a Laboratory Case of Measuring Thyroid Function Indicators)

    Directory of Open Access Journals (Sweden)

    N A Kovyazina

    2014-06-01

    Full Text Available The goal of the study was to demonstrate the multilevel laboratory quality management system and to illustrate methods for estimating the reliability and increasing the informativeness of laboratory results (using a laboratory case as an example). Results. The article examines the stages of laboratory quality management that helped to estimate the reliability of the results of determining Free T3, Free T4 and TSH. The measurement results are presented with their expanded uncertainty and an evaluation of the dynamics. Conclusion. Compliance with the mandatory measures of a laboratory quality management system enables laboratories to obtain reliable results and to calculate parameters that increase the informativeness of laboratory tests in clinical decision making.

  1. An Empirical Demonstration of the Stability of the Maximized Correlation as an Internal-Consistency Reliability Estimate for Tests of Small Item Size.

    Science.gov (United States)

    Wagner, Edwin E.; And Others

    1990-01-01

    Maximized correlation as an internal-consistency reliability estimate for tests with few items was investigated. An actual sampling distribution of the maximized correlation (r-max) was empirically derived from 100 samples of 50 cases each from Rorschach test data and compared with those of alpha and an odd/even split, using 2,020 Rorschach protocols.…

  2. Reliability of the Core Items in the General Social Survey: Estimates from the Three-Wave Panels, 2006–2014

    Directory of Open Access Journals (Sweden)

    Michael Hout

    2016-11-01

    Full Text Available We used standard and multilevel models to assess the reliability of core items in the General Social Survey panel studies spanning 2006 to 2014. Most of the 293 core items scored well on the measure of reliability: 62 items (21 percent) had reliability measures greater than 0.85; another 71 (24 percent) had reliability measures between 0.70 and 0.85. Objective items, especially facts about demography and religion, were generally more reliable than subjective items. The economic recession of 2007–2009, the slow recovery afterward, and the election of Barack Obama in 2008 altered the social context in ways that may look like unreliability of items. For example, unemployment status, hours worked, and weeks worked have lower reliability than most work-related items, reflecting the consequences of the recession on the facts of people's lives. Items regarding racial and gender discrimination and racial stereotypes scored as particularly unreliable, accounting for most of the 15 items with reliability coefficients less than 0.40. Our results allow scholars to more easily take measurement reliability into consideration in their own research, while also highlighting the limitations of these approaches.

  3. Can simple mobile phone applications provide reliable counts of respiratory rates in sick infants and children? An initial evaluation of three new applications.

    Science.gov (United States)

    Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick

    2015-05-01

    applications found. This study provides evidence that applications running on simple phones can be used to count respiratory rates in children. The Once-per-Breath methods are the most reliable, outperforming the 60-second count. For children with raised respiratory rates the 20-breath version of the Once-per-Breath method is faster, so it is a more suitable option where health workers are under time pressure. Copyright © 2015 Elsevier Ltd. All rights reserved.
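The abstract does not spell out the apps' internal tap logic, but a one-tap-per-breath counter reduces to simple arithmetic over tap timestamps: n taps bound n - 1 complete breath intervals. A minimal sketch of one plausible implementation (the function name and the assumption that the app stores a timestamp per tap are hypothetical):

```python
def respiratory_rate_bpm(tap_times_s):
    """Breaths per minute from one-tap-per-breath timestamps (seconds).
    Assumes n taps delimit n - 1 complete breath intervals, so the rate is
    60 * (n - 1) / (time from first tap to last tap)."""
    if len(tap_times_s) < 2:
        raise ValueError("need at least two taps to measure an interval")
    elapsed = tap_times_s[-1] - tap_times_s[0]
    return 60.0 * (len(tap_times_s) - 1) / elapsed
```

A fixed-count variant (such as the 20-breath version described above) would simply stop collecting taps once enough intervals have elapsed, which is why it finishes faster at raised respiratory rates.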

  4. Conceptions of reliability revisited and practical recommendations.

    Science.gov (United States)

    Sijtsma, Klaas; van der Ark, L Andries

    2015-01-01

    We discuss reliability definitions from the perspectives of classical test theory, factor analysis, and generalizability theory. For each method, we discuss the rationale, the estimation of reliability, and the goodness of fit of the model that defines the reliability coefficient to the data. Similarities and differences in the three approaches are highlighted. Finally, we provide a computational example using generated data to illustrate the differences among the different reliability methods.
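One widely used classical-test-theory coefficient, Cronbach's alpha, illustrates how such a reliability estimate is computed from data (the paper compares several definitions; alpha serves here only as a representative example, and the helper name is illustrative):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
    scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

When all items are perfectly parallel (identical columns), the formula yields alpha = 1; alpha falls as item-specific variance grows relative to shared variance.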

  5. Contemporary group estimates adjusted for climatic effects provide a finer definition of the unknown environmental challenges experienced by growing pigs.

    Science.gov (United States)

    Guy, S Z Y; Li, L; Thomson, P C; Hermesch, S

    2017-12-01

    Environmental descriptors derived from mean performances of contemporary groups (CGs) are assumed to capture any known and unknown environmental challenges. The objective of this paper was to obtain a finer definition of the unknown challenges, by adjusting CG estimates for the known climatic effects of monthly maximum air temperature (MaxT), minimum air temperature (MinT) and monthly rainfall (Rain). As the unknown component could include infection challenges, these refined descriptors may help to better model varying responses of sire progeny to environmental infection challenges for the definition of disease resilience. Data were recorded from 1999 to 2013 at a piggery in south-east Queensland, Australia (n = 31,230). Firstly, CG estimates of average daily gain (ADG) and backfat (BF) were adjusted for MaxT, MinT and Rain, which were fitted as splines. In the models used to derive CG estimates for ADG, MaxT and MinT were significant variables. The models that contained these significant climatic variables had CG estimates with a lower variance compared to models without significant climatic variables. Variance component estimates were similar across all models, suggesting that these significant climatic variables accounted for some known environmental variation captured in CG estimates. No climatic variables were significant in the models used to derive the CG estimates for BF. These CG estimates were used to categorize environments. There was no observable sire by environment interaction (Sire×E) for ADG when using the environmental descriptors based on CG estimates on BF. For the environmental descriptors based on CG estimates of ADG, there was significant Sire×E only when MinT was included in the model (p = .01). Therefore, this new definition of the environment, preadjusted by MinT, increased the ability to detect Sire×E. While the unknown challenges captured in refined CG estimates need verification for infection challenges, this may provide a

  6. Reliability and Validity of Kurdish Language Version of Health Promoting Lifestyle Profile II among Kurdish Healthcare Providers Kurdish Version of HPLP-II.

    Science.gov (United States)

    Kamali, Aram Salih Mohammed Amin; Sadeghi, Roya; Tol, Azar; Yaseri, Mahdi

    2016-12-01

    Unhealthy lifestyles pose a significant threat to public health. This study aimed to assess the validity and reliability of a Kurdish version of the HPLP-II instrument among Kurdish healthcare providers, whose society and culture differ from those of North America and Spain, where the instrument was developed. The instrument was translated into Kurdish, back-translated, and pilot tested to ascertain cultural sensitivity. It was then evaluated using a convenience sample of 460 healthcare providers in the Kurdistan region, northern Iraq, using a principal components factor analysis. The order of factors was entirely identical to those isolated previously during the psychometric assessment of the English language version. The majority of study participants were male (55%); 39.2% were nurses, 42% had less than five years of working experience, and 82.1% held a high school diploma. The mean (SE) of the Physical Activities dimension was low (15.3 ± 4.8) compared to the Spiritual Growth dimension (24.5 ± 4.4). The Cronbach's alpha coefficient for the overall HPLP-II questionnaire was 0.870; however, that of the nutrition dimension was low (0.622) compared to the Physical Activities dimension (0.792). The correlation between items ranged from 0.099 to 0.611. The Kurdish version of the HPLP-II demonstrated initial reliability and validity. It is a valuable tool for evaluating lifestyle and lifestyle interventions designed to improve the health of Kurds.

  7. PSI-BLAST-ISS: an intermediate sequence search tool for estimation of the position-specific alignment reliability

    Directory of Open Access Journals (Sweden)

    Venclovas Česlovas

    2005-07-01

    Full Text Available Background: Protein sequence alignments have become indispensable for virtually any evolutionary, structural or functional study involving proteins. Modern sequence search and comparison methods, combined with rapidly increasing sequence data, can often reliably match even distantly related proteins that share little sequence similarity. However, even highly significant matches may have incorrectly aligned regions. Therefore, when exact residue correspondence is used to transfer biological information from one aligned sequence to another, it is critical to know which alignment regions are reliable and which may contain alignment errors. Results: PSI-BLAST-ISS is a standalone Unix-based tool designed to delineate reliable regions of sequence alignments as well as to suggest potential variants in unreliable regions. The region-specific reliability is assessed by producing multiple sequence alignments in different sequence contexts, followed by analysis of the consistency of alignment variants. The PSI-BLAST-ISS output enables the user to simultaneously analyze alignment reliability between a query and multiple homologous sequences. In addition, PSI-BLAST-ISS can be used to detect distantly related homologous proteins. The software is freely available at: http://www.ibt.lt/bioinformatics/iss. Conclusion: PSI-BLAST-ISS is an effective reliability assessment tool that can be useful in applications such as comparative modelling or analysis of individual sequence regions. It compares favorably with existing similar software in both performance and functional features.

  8. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG can potentially provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact, sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without the KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion; the location, size and averaging techniques applied to regions of interest; and the number of video frames used for data processing.
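The spectral core of a VPPG pipeline can be sketched compactly: once the skin region of interest has been reduced to a per-frame mean green-channel value, the heart rate corresponds to the strongest periodogram peak inside a physiological band. This is a minimal illustration of that step only (ROI selection and KLT tracking are omitted, and the function name is hypothetical, not an algorithm from the study):

```python
import numpy as np

def hr_from_green_channel(green_means, fps, lo_bpm=40.0, hi_bpm=180.0):
    """Estimate heart rate (bpm) from the frame-averaged green-channel trace of a
    skin ROI: remove the mean, take the FFT power spectrum, and return the
    strongest peak inside a physiological frequency band."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                                # drop the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)    # Hz
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

The sensitivity findings above map directly onto this sketch: subject motion and ROI choice corrupt `green_means`, and the number of frames sets the frequency resolution fps/N of the peak estimate.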

  9. Derelict fishing line provides a useful proxy for estimating levels of non-compliance with no-take marine reserves.

    Science.gov (United States)

    Williamson, David H; Ceccarelli, Daniela M; Evans, Richard D; Hill, Jos K; Russ, Garry R

    2014-01-01

    No-take marine reserves (NTMRs) are increasingly being established to conserve or restore biodiversity and to enhance the sustainability of fisheries. Although effectively designed and protected NTMR networks can yield conservation and fishery benefits, reserve effects often fail to manifest in systems where there are high levels of non-compliance by fishers (poaching). Obtaining reliable estimates of NTMR non-compliance can be expensive and logistically challenging, particularly in areas with limited or non-existent resources for conducting surveillance and enforcement. Here we assess the utility of density estimates and re-accumulation rates of derelict (lost and abandoned) fishing line as a proxy for fishing effort and NTMR non-compliance on fringing coral reefs in three island groups of the Great Barrier Reef Marine Park (GBRMP), Australia. Densities of derelict fishing line were consistently lower on reefs within old (>20 year) NTMRs than on non-NTMR reefs (significantly in the Palm and Whitsunday Islands), whereas line densities did not differ significantly between reefs in new NTMRs (5 years of protection) and non-NTMR reefs. A manipulative experiment in which derelict fishing lines were removed from a subset of the monitoring sites demonstrated that lines re-accumulated on NTMR reefs at approximately one third (32.4%) of the rate observed on non-NTMR reefs over a thirty-two month period. Although these inshore NTMRs have long been considered some of the best protected within the GBRMP, evidence presented here suggests that the level of non-compliance with NTMR regulations is higher than previously assumed.

  10. Derelict fishing line provides a useful proxy for estimating levels of non-compliance with no-take marine reserves.

    Directory of Open Access Journals (Sweden)

    David H Williamson

    Full Text Available No-take marine reserves (NTMRs) are increasingly being established to conserve or restore biodiversity and to enhance the sustainability of fisheries. Although effectively designed and protected NTMR networks can yield conservation and fishery benefits, reserve effects often fail to manifest in systems where there are high levels of non-compliance by fishers (poaching). Obtaining reliable estimates of NTMR non-compliance can be expensive and logistically challenging, particularly in areas with limited or non-existent resources for conducting surveillance and enforcement. Here we assess the utility of density estimates and re-accumulation rates of derelict (lost and abandoned) fishing line as a proxy for fishing effort and NTMR non-compliance on fringing coral reefs in three island groups of the Great Barrier Reef Marine Park (GBRMP), Australia. Densities of derelict fishing line were consistently lower on reefs within old (>20 year) NTMRs than on non-NTMR reefs (significantly in the Palm and Whitsunday Islands), whereas line densities did not differ significantly between reefs in new NTMRs (5 years of protection) and non-NTMR reefs. A manipulative experiment in which derelict fishing lines were removed from a subset of the monitoring sites demonstrated that lines re-accumulated on NTMR reefs at approximately one third (32.4%) of the rate observed on non-NTMR reefs over a thirty-two month period. Although these inshore NTMRs have long been considered some of the best protected within the GBRMP, evidence presented here suggests that the level of non-compliance with NTMR regulations is higher than previously assumed.

  11. Test Reliability at the Individual Level

    Science.gov (United States)

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
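The parallel-tests approach described above can be sketched for a single person: administer two parallel forms at every measurement occasion and correlate the two score series across that person's repeated measurements. The simulated data and noise levels below are illustrative stand-ins, not the PANAS data.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def person_reliability(form_a_scores, form_b_scores):
    """Parallel-tests reliability for ONE person: correlate two parallel
    forms across that person's repeated (e.g., daily) measurements."""
    return pearson(form_a_scores, form_b_scores)

random.seed(1)
# simulate 45 days for one person: shared true affect + independent errors
true_affect = [random.gauss(0, 1) for _ in range(45)]
form_a = [t + random.gauss(0, 0.5) for t in true_affect]
form_b = [t + random.gauss(0, 0.5) for t in true_affect]
print(0 < person_reliability(form_a, form_b) < 1)  # → True
```

Repeating this per person is what lets reliability vary across individuals, as the study found; a person with noisier responding gets a lower coefficient.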

  12. Test Reliability at the Individual Level.

    Science.gov (United States)

    Hu, Yueqin; Nesselroade, John R; Erbacher, Monica K; Boker, Steven M; Burt, S Alexandra; Keel, Pamela K; Neale, Michael C; Sisk, Cheryl L; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS.

  13. Assessing the Benefits Provided by SWOT Data Towards Estimating Reservoir Residence Time in the Mekong River Basin

    Science.gov (United States)

    Bonnema, M.; Hossain, F.

    2016-12-01

    The Mekong River Basin is undergoing rapid hydropower development. Nine dams are planned on the main stem of the Mekong and many more on its extensive tributaries. Understanding the effects that current and future dams have on the river system and water cycle as a whole is vital for the millions of people living in the basin. Reservoir residence time, the amount of time water spends stored in a reservoir, is a key parameter in investigating these impacts. The forthcoming Surface Water and Ocean Topography (SWOT) mission is poised to provide an unprecedented amount of surface water observations. SWOT, when augmented by current satellite missions, will provide the necessary information to estimate the residence time of reservoirs across the entire basin in a more comprehensive way than ever before. In this study, we first combine observations from current satellite missions (altimetry, spectral imaging, precipitation) to estimate the residence times of existing reservoirs. We then use this information to project how future reservoirs will increase the residence time of the river system. Next, we explore how SWOT observations can be used to improve residence time estimation by examining the accuracy of reservoir surface area and elevation observations as well as the accuracy of river discharge observations.
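At its simplest, bulk residence time is storage divided by mean outflow; satellite-derived surface area and elevation (for storage) and discharge estimates feed exactly these two terms. A minimal sketch with hypothetical reservoir numbers:

```python
def residence_time_days(storage_m3, mean_outflow_m3_s):
    """Bulk residence time: reservoir storage divided by mean outflow,
    converted from seconds to days."""
    return storage_m3 / mean_outflow_m3_s / 86400.0

# hypothetical reservoir: 1.0 km^3 of storage, 500 m^3/s mean outflow
print(round(residence_time_days(1.0e9, 500.0), 1))  # → 23.1
```

The error in a SWOT-era estimate then propagates from the two inputs: area/elevation error enters through the storage term, discharge error through the outflow term.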

  14. Factor Structure and Reliability of the Childhood Trauma Questionnaire and Prevalence Estimates of Trauma for Male and Female Street Youth

    Science.gov (United States)

    Forde, David R.; Baron, Stephen W.; Scher, Christine D.; Stein, Murray B.

    2012-01-01

    This study examines the psychometric properties of the Childhood Trauma Questionnaire short form (CTQ-SF) with street youth who have run away or been expelled from their homes (N = 397). Internal reliability coefficients for the five clinical scales ranged from 0.65 to 0.95. Confirmatory Factor Analysis (CFA) was used to test the five-factor…

  15. Dental age estimation: Comparison of reliability between Malay formula of Demirjian method and Malay formula of Cameriere method

    Science.gov (United States)

    Alghali, R.; Kamaruddin, A. F.; Mokhtar, N.

    2016-12-01

    Introduction: The application of forensic odontology using teeth and bones is among the most commonly used methods to determine the age of unknown individuals. Objective: The aim of this study was to determine the reliability of the Malay formula of the Demirjian method and the Malay formula of the Cameriere method in estimating a dental age that closely matches the chronological age of Malay children in the Kepala Batas region. Methodology: This is a retrospective cross-sectional study. 126 good-quality dental panoramic radiographs (DPT) of healthy Malay children aged 8-16 years (49 boys and 77 girls) were selected and measured. All radiographs were taken at the Dental Specialist Clinic, Advanced Medical and Dental Institute, Universiti Sains Malaysia. The measurements were carried out by a calibrated examiner using the new Malay formulae of both the Demirjian and Cameriere methods. Results: The intraclass correlation coefficient (ICC) between chronological age and the dental ages estimated by the Demirjian and Cameriere methods was calculated. The Demirjian method showed a higher ICC (91.4%) than the Cameriere method (89.2%); both indicate a strong association with good reliability, but the comparison favours Demirjian. Conclusion: The results suggest that the modified Demirjian method is more reliable than the modified Cameriere method for the population of the Kepala Batas region.
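The ICC used to compare estimated and chronological ages can be computed from a one-way random-effects ANOVA decomposition. The sketch below implements ICC(1,1) for paired measurements; the age pairs are invented for illustration and are not the study's data.

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for paired measurements
    (e.g., chronological age vs. estimated dental age per child)."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    subj_means = [(a + b) / k for a, b in pairs]
    # between-subject and within-subject mean squares
    bms = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    wms = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, subj_means)) / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

# hypothetical (chronological age, estimated dental age) pairs in years
pairs = [(8.2, 8.5), (9.1, 9.0), (10.4, 10.9), (11.3, 11.1),
         (12.0, 12.6), (13.5, 13.2), (14.1, 14.4), (15.2, 15.0)]
print(round(icc_oneway(pairs), 2))
```

Close agreement between the two age series relative to the between-child spread is what drives the ICC toward 1, matching the "high association, good reliability" interpretation above.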

  16. The oil - paper insulation reliability estimation of 420 kV transformer concerning thermal stresses during exploitation conditions

    Directory of Open Access Journals (Sweden)

    Al-Musawi Safaa Ismael

    2016-01-01

    Full Text Available The first task is to calculate the ageing reached, based on the loading history, according to the International Electrotechnical Commission standard algorithm. To verify the results obtained, measurements of the polymerization index were made on 28 paper samples taken directly from the low-voltage terminals (winding ends and bus connections) of the transformer under test, rated 380 MVA, 2×15.75 kV/420 kV. The complete procedure for locating and removing the paper samples is described, providing guidance on how this should be done under the specific conditions of the transformer under test. Furthermore, the determination of limit viscosity and the use of its relationship with the polymerization index are explained, and a comparison is made with the results of liquid chromatography of the oil. The results of particle sort and size analysis are shown. Finally, an estimate of the transformer's remaining life is made, which is of paramount importance when defining the steps to be taken, either in the revitalization process or in transformer replacement planning.
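The IEC loading-guide calculation mentioned as the first task rests on a relative ageing rate that doubles for every 6 K of hot-spot temperature above the 98 °C reference (for non-thermally-upgraded paper, per IEC 60076-7). A minimal sketch over a hypothetical 24 h load history:

```python
def relative_ageing_rate(hotspot_c):
    """IEC 60076-7 relative ageing rate for non-thermally-upgraded paper:
    doubles for every 6 K above the 98 degC reference hot-spot temperature."""
    return 2.0 ** ((hotspot_c - 98.0) / 6.0)

def loss_of_life_hours(hotspot_profile_c, step_hours=1.0):
    """Consumed insulation life over a load history, in equivalent
    hours at the 98 degC reference."""
    return sum(relative_ageing_rate(t) * step_hours for t in hotspot_profile_c)

# hypothetical day: 12 h at 92 degC (light load), 12 h at 104 degC (heavy load)
profile = [92.0] * 12 + [104.0] * 12
print(round(loss_of_life_hours(profile), 1))  # → 30.0
```

Summing this rate over the full loading history gives the "ageing reached", which the paper then cross-checks against the measured polymerization index of the paper samples.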

  17. A fast-reliable methodology to estimate the concentration of rutile or anatase phases of TiO2

    Science.gov (United States)

    Zanatta, A. R.

    2017-07-01

    Titanium dioxide (TiO2) is a low-cost, chemically inert material that has become the basis of many modern applications ranging from, for example, cosmetics to photovoltaics. TiO2 exists in three different crystal phases - Rutile, Anatase and, less commonly, Brookite - and, in most cases, the presence or relative amount of these phases is essential in deciding the final application of TiO2 and its related efficiency. Traditionally, X-ray diffraction has been chosen to study TiO2, providing both phase identification and the Rutile-to-Anatase ratio. Similar information can be obtained from Raman scattering spectroscopy, which, additionally, is versatile and involves rather simple instrumentation. Motivated by these aspects, this work considered various TiO2 Rutile+Anatase powder mixtures and their corresponding Raman spectra. Essentially, the method described here is based on the fact that the Rutile and Anatase crystal phases have distinctive phonon features, so the composition of the TiO2 mixtures can be readily assessed from their Raman spectra. The experimental results clearly demonstrate the suitability of Raman spectroscopy for estimating the concentration of Rutile or Anatase in TiO2, and the approach is expected to benefit the study of TiO2-related thin films, interfaces, systems with reduced dimensions, and devices such as photocatalytic and solar cells.
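A common way to turn the distinctive Rutile and Anatase phonon features into a composition estimate is a calibrated intensity ratio of characteristic peaks. The sketch below assumes a single calibration constant k obtained from known mixtures; the peak intensities and k = 1 are placeholders, not the paper's calibration.

```python
def anatase_fraction(i_anatase, i_rutile, k=1.0):
    """Estimate the Anatase fraction of a Rutile+Anatase mixture from the
    intensities of characteristic Raman peaks (e.g., the strong Anatase
    mode near 144 cm^-1 vs. a Rutile mode near 445 cm^-1). k accounts for
    the different scattering cross-sections of the two phases and must be
    calibrated on mixtures of known composition; k=1 is a placeholder."""
    return i_anatase / (i_anatase + k * i_rutile)

# hypothetical peak intensities from a mixture spectrum
print(round(anatase_fraction(300.0, 100.0), 2))  # → 0.75
```

The Rutile fraction is simply the complement, 1 minus this value.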

  18. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
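The jitter, logit-transform, and back-transform steps of the model can be sketched directly; the linear quantile-regression fit itself is omitted here, but the back-transformation is valid precisely because quantiles are equivariant to monotonic maps. The bounds below follow the 0-3 fledgling range; the small offsets that keep jittered values strictly inside the bounds are illustrative choices.

```python
import math
import random

def jitter(count):
    """Step 1: jitter the discrete count to a continuous value."""
    return count + random.random()

def logit_bounded(y, lower, upper):
    """Step 2: logit transform bounding y strictly between lower and upper."""
    return math.log((y - lower) / (upper - y))

def inv_logit_bounded(z, lower, upper):
    """Back-transform to the original scale; valid for quantiles because
    quantiles are equivariant to monotonic transformations."""
    e = math.exp(z)
    return (lower + upper * e) / (1 + e)

random.seed(0)
counts = [0, 1, 2, 3, 2, 1]      # fledgling counts bounded on [0, 3]
lower, upper = -0.001, 4.001     # bounds chosen to contain all jittered values
jittered = [jitter(c) for c in counts]
z = [logit_bounded(y, lower, upper) for y in jittered]
# (step 3, omitted: fit linear quantile regression to z, repeat the three
# steps over fresh jitters, and average the estimates)
back = [inv_logit_bounded(v, lower, upper) for v in z]
print(all(abs(b - y) < 1e-9 for b, y in zip(back, jittered)))  # → True
```

The round trip confirms that any quantile estimated on the logit scale maps back exactly to the bounded original scale.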

  19. Global trends in health research and development expenditures--the challenge of making reliable estimates for international comparison

    NARCIS (Netherlands)

    Young, A.J.; Terry, R.F.; Rottingen, J.A.; Viergever, R.F.

    2015-01-01

    Better estimates of changes in the level and structure of national, regional, and global expenditures on health research and development (R&D) are needed as an important source of information for advancing countries' health research policies. However, such estimates are difficult to compile and

  20. A Comparison of Factor Score Estimation Methods in the Presence of Missing Data: Reliability and an Application to Nicotine Dependence

    Science.gov (United States)

    Estabrook, Ryne; Neale, Michael

    2013-01-01

    Factor score estimation is a controversial topic in psychometrics, and the estimation of factor scores from exploratory factor models has historically received a great deal of attention. However, both confirmatory factor models and the existence of missing data have generally been ignored in this debate. This article presents a simulation study…

  1. Reliability for a walk/run test to estimate aerobic capacity in a brain-injured population.

    Science.gov (United States)

    Vitale, A E; Jankowski, L W; Sullivan, S J

    1997-01-01

    The purpose of this study was to establish the test-retest reliability of a modified 20 m shuttle walk/run test of aerobic capacity for adults with traumatic brain injuries (TBI). A convenience sample of 18 TBI patients (16 males, 2 females) between 19 and 58 years of age was tested using an externally paced, progressive, maximal shuttle walk/run, on two separate occasions within a 1-week period. The test involved walking or running a 20 m shuttle course while maintaining the pace determined by signals from a prerecorded audiotape. The initial slow walking pace (2.4 km/h) was increased gradually, each minute, until the patient could not continue. Statistical analyses revealed excellent reliability for the number of levels completed (ICC = 0.976), total walk/run test time in seconds (ICC = 0.983) and maximal heart rate attained during the final level in bpm (ICC = 0.964). Although small but significant increases were noted in the number of levels completed during the second test, the modified 20 m shuttle walk/run appears to be a reliable field test that may be useful for assessing the aerobic capacity of brain-injured adults.

  2. Bayesian reliability analysis for non-periodic inspection with estimation of uncertain parameters; Bayesian shinraisei kaiseki wo tekiyoshita hiteiki kozo kensa ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.

    1996-12-31

    Risk-assessed structural positions in the pressurized fuselage of a transport-type aircraft designed under damage-tolerance principles are taken as the subject of discussion. A small number of inspection data from these positions was used to discuss a Bayesian reliability analysis that can estimate a proper non-periodic inspection schedule while also estimating proper values for uncertain factors. As a result, the period over which fatigue cracks initiate was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that appear reasonable, and a proper inspection schedule based on those values, despite expressing fatigue crack growth in a very simple form and treating both factors as uncertain. Thus, the effectiveness of the present analysis method was verified. This study also discusses the structural positions, the modeling of fatigue cracks that initiate and grow at these positions, the conditions for failure, damage factors, and the capability of the inspection from different viewpoints. This reliability analysis method is thought to be effective for other structures as well, such as offshore structures. 18 refs., 8 figs., 1 tab.
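In the same spirit as the paper's Bayesian treatment of uncertain factors, a conjugate sketch shows how even a small number of inspection findings updates an estimated crack-occurrence probability. This beta-binomial model and its prior are illustrative stand-ins, not the paper's fatigue-crack model.

```python
def beta_binomial_update(alpha, beta, cracks_found, n_inspected):
    """Conjugate Bayesian update of a per-position crack probability:
    Beta(alpha, beta) prior combined with binomial inspection data."""
    return alpha + cracks_found, beta + (n_inspected - cracks_found)

def posterior_mean(alpha, beta):
    """Posterior mean of the Beta distribution."""
    return alpha / (alpha + beta)

# illustrative weak prior (mean 0.1), then one crack found in 20 positions
a, b = beta_binomial_update(1.0, 9.0, 1, 20)
print(round(posterior_mean(a, b), 3))  # → 0.067
```

The posterior from one inspection becomes the prior for the next, which is exactly the mechanism that lets a non-periodic schedule adapt as data accumulate.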

  3. Estimating the extent of the heroin problem in Slovenia: application of the key informant approach and the nomination technique where there are no other reliable sources of information.

    Science.gov (United States)

    Nolimal, D

    1996-01-01

    Current evidence reveals a continuing upward trend in the misuse of illicit drugs in Slovenia. However, the science of estimating the prevalence of drug abuse and related problems is still undeveloped. Because of current data gathering practices, the data that are available are often of poor quality. In this paper the author describes two methods for estimating the prevalence of heroin abuse, the key informant approach and the nomination technique, which were used because there were no other reliable sources of information. These methods produced estimates and brought to light a number of problems that researchers would have to solve in their pursuit of more reliable, relevant and useful data. However, speculating about the extent of illicit drug use in the country is still problematic. Basic data collection and analyses at the national level must be improved. It is of vital importance to develop strategies and methods for obtaining estimates and thus more adequate information on which to base demand reduction strategies, to increase the number of epidemiologists and to establish a central information unit in the country.

  4. Prepositioned Stocks: Marine Corps Needs to Improve Cost Estimate Reliability and Oversight of Inventory Systems for Equipment in Norway

    Science.gov (United States)

    2015-09-01

    Marine Corps lacks a warehousing application and other data management capabilities that Blount Island Command needs to effectively manage... documentation did not include the source data used to develop the estimates or the calculations performed and estimating methodologies used.

  5. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    previously published pediatric patient doses that accounted for patient size in their dose calculation, and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusions: For organs fully covered within the scan volume, the average correlation of SSDE and organ absolute dose was found to be better than ±10%. In addition, this study provides a complete list of organ dose correlation factors (CF_SSDE^organ) for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
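The methodology reduces to two multiplications: SSDE is CTDIvol scaled by a size-dependent conversion factor (an exponential fit in effective diameter, in the style of AAPM Report 204), and organ dose is SSDE scaled by an organ-specific correlation factor. The coefficients and the CF value below are illustrative approximations, not the study's published values.

```python
import math

def ssde_mgy(ctdi_vol_mgy, effective_diameter_cm, a=3.70, b=0.037):
    """SSDE = CTDIvol x size conversion factor. The exponential fit
    f = a * exp(-b * d_eff) follows the AAPM Report 204 style; a and b
    here are rough placeholders, not the published coefficients."""
    return ctdi_vol_mgy * a * math.exp(-b * effective_diameter_cm)

def organ_dose_mgy(ssde, cf_organ):
    """Organ dose from SSDE via an organ-specific correlation factor,
    valid for organs fully covered by the scan volume."""
    return cf_organ * ssde

# hypothetical pediatric scan: CTDIvol 5 mGy, 20 cm effective diameter,
# CF of 1.0 as a placeholder for a tabulated organ factor
dose = ssde_mgy(5.0, 20.0)
print(round(organ_dose_mgy(dose, 1.0), 1))
```

Smaller patients (smaller effective diameter) get a larger conversion factor, which is the whole point of size-specific dose estimation.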

  6. Medical care price indexes for patients with employer-provided insurance: nationally representative estimates from MarketScan Data.

    Science.gov (United States)

    Dunn, Abe; Liebman, Eli; Pack, Sarah; Shapiro, Adam Hale

    2013-06-01

    Commonly observed shifts in the utilization of medical care services to treat diseases may pose problems for official price indexes at the Bureau of Labor Statistics (BLS) that do not account for service shifts. We examine how these shifts may lead to different price estimates than those observed in official price statistics at the BLS. We use a convenience sample of enrollees with employer-provided insurance from the MarketScan database for the years 2003 to 2007. Population weights that consider the age, sex, and geographic distribution of enrollees are assigned to construct representative estimates. We compare two types of price indexes: (1) a Service Price Index (SPI) that is similar to the BLS index, which holds services fixed and measures the prices of the underlying treatments; (2) a Medical Care Expenditure Index (MCE) that measures the cost of treating diseases and allows for utilization shifts. Over the entire study period, the compound annual growth rate (CAGR) of the SPI is 0.7 percentage points higher than that of the preferred MCE index. Our findings suggest that the health component of inflation may be overstated by 0.7 percentage points per year, and real GDP growth may be understated by a similar amount. However, more work may be necessary to precisely replicate the indexes of the BLS to obtain a more accurate measure of these price differences. © Health Research and Educational Trust.
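The difference between the two index types can be made concrete: the SPI prices a fixed basket of services, while the MCE prices the cost of treating a disease and therefore captures utilization shifts (e.g., inpatient to outpatient care). All prices, mixes, and disease names below are toy numbers, not MarketScan data.

```python
def service_price_index(p0, p1, q0):
    """SPI-style fixed-basket index: service quantities held at the
    base period, only service prices change."""
    return sum(p1[s] * q0[s] for s in p0) / sum(p0[s] * q0[s] for s in p0)

def medical_care_expenditure_index(cost0, cost1, cases0):
    """MCE-style index: cost of treating each disease, allowing the
    service mix within a disease to shift between periods."""
    num = sum(cost1[d] * cases0[d] for d in cost0)
    den = sum(cost0[d] * cases0[d] for d in cost0)
    return num / den

p0 = {"inpatient": 1000.0, "outpatient": 200.0}   # base-period prices
p1 = {"inpatient": 1100.0, "outpatient": 220.0}   # both rise 10%
q0 = {"inpatient": 0.8, "outpatient": 0.2}        # base services per case
spi = service_price_index(p0, p1, q0)

# cost per treated case: utilization shifts toward cheaper outpatient care
cost0 = {"migraine": 1000.0 * 0.8 + 200.0 * 0.2}
cost1 = {"migraine": 1100.0 * 0.5 + 220.0 * 0.5}
cases0 = {"migraine": 1.0}
mce = medical_care_expenditure_index(cost0, cost1, cases0)
print(round(spi, 2), round(mce, 2))  # → 1.1 0.79
```

Here every service price rose 10%, yet the cost of treating the disease fell, so the fixed-basket SPI overstates medical inflation relative to the MCE, which is the paper's central point.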

  7. Estimated validity and reliability of on-board diagnostics for older vehicles: comparison with remote sensing observations.

    Science.gov (United States)

    Supnithadnaporn, Anupit; Noonan, Douglas S; Samoylov, Alexander; Rodgers, Michael O

    2011-10-01

    Based on requirements under the Clean Air Act Amendments of 1990, most state vehicle inspection and maintenance (I/M) programs have, since 2002, replaced the tailpipe emission testing with the on-board diagnostic (OBD) II testing for 1996 model and newer vehicles. This test relies on the OBD II system to give the pass or fail result, depending on certain conditions that might cause the vehicle to emit pollution 1.5 times higher than the regulated standard. The OBD II system is a computer and sensors installed in the vehicle to monitor the emission control units and signal if there is any malfunction. As a vehicle ages, its engine, pollution control units, and OBD II system deteriorate. Because the OBD II system's durability directly influences the test outcome, it is important to examine the fleetwide trend in the OBD II test results in comparison with an alternative measure of identifying high emitting vehicles. This study investigates whether the validity and reliability of the OBD II test is related to the age of the OBD II system installed in the fleet. Using Atlanta's I/M testing records and remote sensing device (RSD) data collected during 2002-2005, this research establishes the convergent validity and interobserver reliability criteria for the OBD II test based on on-road emissions measured by RSDs. The study results show that older vehicles exhibit significantly lower RSD-OBD II outcome agreement than newer vehicles. This suggests that the validity and reliability of the OBD II test may decline in the older vehicle fleets. Explanations and possible confounding factors for these findings are discussed.

  8. Reliability of student midwives' visual estimate of blood loss in the immediate postpartum period: a cross-sectional study.

    Science.gov (United States)

    Parayre, Inès; Rivière, Olivier; Debost-Legrand, Anne; Lémery, Didier; Vendittelli, Françoise

    2015-12-01

    In France, postpartum hemorrhage (PPH; blood loss ≥500 mL in the first 24 h postpartum) is the leading direct obstetric cause of maternal mortality. In French practice, PPH is mainly diagnosed by a quantitative assessment of blood loss, performed by subjective methods such as visual estimates. Various studies have concluded that visual estimates are imprecise and tend to underestimate blood loss, thus delaying the diagnosis of PPH. The principal objective of this study was to assess the accuracy of visual estimates of blood loss by student midwives. The secondary objectives were to study the intraobserver agreement of these assessments, to assess the accuracy of visual estimates at threshold values, and to look for a region effect. Design: a cross-sectional multicentre study covering all French midwifery schools (n=35), with volunteer French student midwives in their fifth (final) year (n=463). The online questionnaire contained 16 photographs (8 different, each presented twice) of simulated volumes of blood loss (100, 150, 200, 300, 500, 850, 1000, and 1500 mL). A 50-mL reference standard for calibration accompanied each photograph. Only one answer could be selected among the 7 choices offered for each photograph. Comparisons used χ² and kappa tests. The participation rate was 48.43% (463/956), and 7,408 visual estimates were collected. Estimates were accurate for 35.34% of the responses. Reproducibility of the visual estimates (0.17 ≤ κ ≤ 0.48) and of the accurate visual estimates (0.11 ≤ κ ≤ 0.55) was moderate for 4 of the 8 volumes (100, 300, 1000, and 1500 mL). The percentage of accurate responses was significantly higher for volumes ≤300 mL than for those ≥500 mL (52.94% vs. 17.17%, p<0.0001) and those ≥1000 mL (52.94% vs. 18.30%, p<0.0001). The percentage of accurate responses varied between regions (p=0.042). Despite the help of a visual aid, both the accuracy and the reproducibility of the visual estimates were low. Copyright © 2015 Elsevier Ltd. All rights reserved.
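Intraobserver agreement between the first and second viewing of each photograph can be quantified with Cohen's kappa, which corrects raw agreement for chance. The ratings below are invented to illustrate the computation; they are not the study's data.

```python
def cohens_kappa(ratings1, ratings2, categories):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rating's marginal distribution."""
    n = len(ratings1)
    p_observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    p_chance = sum((ratings1.count(c) / n) * (ratings2.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# hypothetical first vs. second estimates of 8 photographs (volumes in mL)
first  = [100, 150, 300, 500, 500, 850, 1000, 1500]
second = [100, 200, 300, 300, 500, 850, 1500, 1500]
cats = [100, 150, 200, 300, 500, 850, 1000, 1500]
print(round(cohens_kappa(first, second, cats), 2))  # → 0.57
```

Kappa values in roughly the 0.4-0.6 range are conventionally read as "moderate" agreement, the same qualitative band reported for several volumes above.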

  9. Inter-Method Reliability of School Effectiveness Measures: A Comparison of Value-Added and Regression Discontinuity Estimates

    Science.gov (United States)

    Perry, Thomas

    2017-01-01

    Value-added (VA) measures are currently the predominant approach used to compare the effectiveness of schools. Recent educational effectiveness research, however, has developed alternative approaches including the regression discontinuity (RD) design, which also allows estimation of absolute school effects. Initial research suggests RD is a viable…

  10. Reliability Estimation with Uncertainties Consideration for High Power IGBTs in 2.3 MW Wind Turbine Converter System

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Ma, Ke

    2012-01-01

    Minimum, maximum and average junction temperature profiles for the grid-side IGBTs are estimated at each wind speed input value. The selected failure mechanism is crack propagation in the solder joint under the silicon die. Based on junction temperature profiles and physics of failure model

  11. Impact of whole-genome amplification on the reliability of pre-transfer cattle embryo breeding value estimates.

    Science.gov (United States)

    Shojaei Saadi, Habib A; Vigneault, Christian; Sargolzaei, Mehdi; Gagné, Dominic; Fournier, Éric; de Montera, Béatrice; Chesnais, Jacques; Blondin, Patrick; Robert, Claude

    2014-10-12

    Genome-wide profiling of single-nucleotide polymorphisms is receiving increasing attention as a method of pre-implantation genetic diagnosis in humans and of commercial genotyping of pre-transfer embryos in cattle. However, the very small quantity of genomic DNA in biopsy material from early embryos poses daunting technical challenges. A reliable whole-genome amplification (WGA) procedure would greatly facilitate the procedure. Several PCR-based and non-PCR-based WGA technologies, namely multiple displacement amplification, quasi-random primed library synthesis followed by PCR, ligation-mediated PCR, and single-primer isothermal amplification, were tested in combination with different DNA extraction protocols for various quantities of genomic DNA input. The efficiency of each method was evaluated by comparing the genotypes obtained from 15 cultured cells (representative of an embryonic biopsy) to unamplified reference gDNA. The gDNA input, gDNA extraction method and amplification technology were all found to be critical for successful genome-wide genotyping. The selected WGA platform was then tested on embryo biopsies (n = 226), comparing their results to those of biopsies collected after birth. Although WGA inevitably leads to a random loss of information and to the introduction of erroneous genotypes, following genomic imputation the resulting genetic indices of both sources of DNA were highly correlated (r = 0.99, P<0.001). It is possible to generate high-quality DNA in sufficient quantities for successful genome-wide genotyping starting from an early embryo biopsy. However, imputation from parental and population genotypes is a requirement for completing and correcting genotypic data. Judicious selection of the WGA platform, careful handling of the samples and genomic imputation together make it possible to perform extremely reliable genomic evaluations for pre-transfer embryos.
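Evaluating a WGA platform against unamplified reference gDNA comes down to genotype concordance, with no-calls from allele dropout typically excluded. A minimal sketch with invented calls:

```python
def genotype_concordance(calls, reference):
    """Concordance between WGA-biopsy genotype calls and the unamplified
    reference, ignoring no-calls (None) introduced by allele dropout."""
    pairs = [(c, r) for c, r in zip(calls, reference) if c is not None]
    return sum(c == r for c, r in pairs) / len(pairs)

# hypothetical calls at 8 SNPs: one discordant call, one no-call
ref   = ["AA", "AB", "BB", "AB", "AA", "BB", "AB", "AA"]
calls = ["AA", "AB", "BB", "AA", None, "BB", "AB", "AA"]
print(round(genotype_concordance(calls, ref), 3))  # → 0.857
```

Imputation then fills the no-calls and corrects some discordant calls from parental and population genotypes, which is why the final genetic indices correlate so highly despite WGA losses.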

  12. A dynamic neuro-fuzzy model providing bio-state estimation and prognosis prediction for wearable intelligent assistants.

    Science.gov (United States)

    Wang, Yu; Winters, Jack M

    2005-06-28

    Intelligent management of wearable applications in rehabilitation requires an understanding of the current context, which is constantly changing over the rehabilitation process because of changes in the person's status and environment. This paper presents a dynamic recurrent neuro-fuzzy system that implements expert- and evidence-based reasoning. It is intended to provide context-awareness for wearable intelligent agents/assistants (WIAs). The model structure includes the following types of signals: inputs, states, outputs and outcomes. Inputs are facts or events which have effects on patients' physiological and rehabilitative states; different classes of inputs (e.g., facts, context, medication, therapy) have different nonlinear mappings to a fuzzy "effect." States are dimensionless linguistic fuzzy variables that change based on causal rules, as implemented by a fuzzy inference system (FIS). The FIS, with rules based on expertise and evidence, essentially defines the nonlinear state equations that are implemented by nuclei of dynamic neurons. Outputs, a function of weighing of states and effective inputs using conventional or fuzzy mapping, can perform actions, predict performance, or assist with decision-making. Outcomes are scalars to be extremized that are a function of outputs and states. The first example demonstrates setup and use for a large-scale stroke neurorehabilitation application (with 16 inputs, 12 states, 5 outputs and 3 outcomes), showing how this modelling tool can successfully capture causal dynamic change in context-relevant states (e.g., impairments, pain) as a function of input event patterns (e.g., medications). The second example demonstrates use of scientific evidence to develop rule-based dynamic models, here for predicting changes in muscle strength with short-term fatigue and long-term strength-training. A neuro-fuzzy modelling framework is developed for estimating rehabilitative change that can be applied in any field of rehabilitation.

  13. A Dynamic Neuro-Fuzzy Model Providing Bio-State Estimation and Prognosis Prediction for Wearable Intelligent Assistants

    Directory of Open Access Journals (Sweden)

    Winters Jack M

    2005-06-01

Full Text Available Abstract Background Intelligent management of wearable applications in rehabilitation requires an understanding of the current context, which is constantly changing over the rehabilitation process because of changes in the person's status and environment. This paper presents a dynamic recurrent neuro-fuzzy system that implements expert- and evidence-based reasoning. It is intended to provide context-awareness for wearable intelligent agents/assistants (WIAs). Methods The model structure includes the following types of signals: inputs, states, outputs and outcomes. Inputs are facts or events which have effects on patients' physiological and rehabilitative states; different classes of inputs (e.g., facts, context, medication, therapy) have different nonlinear mappings to a fuzzy "effect." States are dimensionless linguistic fuzzy variables that change based on causal rules, as implemented by a fuzzy inference system (FIS). The FIS, with rules based on expertise and evidence, essentially defines the nonlinear state equations that are implemented by nuclei of dynamic neurons. Outputs, a function of weighing of states and effective inputs using conventional or fuzzy mapping, can perform actions, predict performance, or assist with decision-making. Outcomes are scalars to be extremized that are a function of outputs and states. Results The first example demonstrates setup and use for a large-scale stroke neurorehabilitation application (with 16 inputs, 12 states, 5 outputs and 3 outcomes), showing how this modelling tool can successfully capture causal dynamic change in context-relevant states (e.g., impairments, pain) as a function of input event patterns (e.g., medications). The second example demonstrates use of scientific evidence to develop rule-based dynamic models, here for predicting changes in muscle strength with short-term fatigue and long-term strength-training. Conclusion A neuro-fuzzy modelling framework is developed for estimating rehabilitative change that can be applied in any field of rehabilitation.

  14. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Science.gov (United States)

    Khader, A. I.; Rosenberg, D. E.; McKee, M.

    2013-05-01

    Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. 
Outcome costs
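The expected-cost and VOI calculation described in this record can be sketched in a few lines. All probabilities and costs below are hypothetical placeholders for illustration, not values from the study:

```python
def expected_cost(outcomes):
    """Expected cost of an alternative: weighted sum of cost * probability
    over its uncertain outcomes."""
    return sum(p * c for p, c in outcomes)

# Hypothetical (probability, cost) pairs for the three alternatives:
ignore_risk = expected_cost([(0.85, 0), (0.15, 8000)])   # illness cost if risk materialises
bottled_water = expected_cost([(1.0, 1000)])             # certain purchase cost
monitoring = expected_cost([(0.9, 300), (0.1, 2000)])    # network cost, plus cost of bad reports

# VOI: expected cost of the lowest-cost uninformed alternative
# minus the expected cost with the monitoring network in place.
voi = min(ignore_risk, bottled_water) - monitoring
```

With these placeholder numbers, monitoring is cheaper in expectation than either uninformed alternative, so the VOI is positive.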

  15. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Directory of Open Access Journals (Sweden)

    A. I. Khader

    2013-05-01

Full Text Available Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water.

  16. The health system burden of chronic disease care: an estimation of provider costs of selected chronic diseases in Uganda.

    Science.gov (United States)

    Settumba, Stella Nalukwago; Sweeney, Sedona; Seeley, Janet; Biraro, Samuel; Mutungi, Gerald; Munderi, Paula; Grosskurth, Heiner; Vassall, Anna

    2015-06-01

    To explore the chronic disease services in Uganda: their level of utilisation, the total service costs and unit costs per visit. Full financial and economic cost data were collected from 12 facilities in two districts, from the provider's perspective. A combination of ingredients-based and step-down allocation costing approaches was used. The diseases under study were diabetes, hypertension, chronic obstructive pulmonary disease (COPD), epilepsy and HIV infection. Data were collected through a review of facility records, direct observation and structured interviews with health workers. Provision of chronic care services was concentrated at higher-level facilities. Excluding drugs, the total costs for NCD care fell below 2% of total facility costs. Unit costs per visit varied widely, both across different levels of the health system, and between facilities of the same level. This variability was driven by differences in clinical and drug prescribing practices. Most patients reported directly to higher-level facilities, bypassing nearby peripheral facilities. NCD services in Uganda are underfunded particularly at peripheral facilities. There is a need to estimate the budget impact of improving NCD care and to standardise treatment guidelines. © 2015 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.

  17. Reliability of activation cross sections for estimation of shutdown dose rate in the ITER port cell and port interspace

    Directory of Open Access Journals (Sweden)

    García Raquel

    2017-01-01

Full Text Available This paper explores the quality of available activation cross section (XS) data for accurate Shutdown Dose Rate (SDDR) prediction in the ITER Port Cell and Port Interspace areas, where different maintenance activities are foreseen. For this purpose the EAF library (2007 and 2010 versions) has been investigated, as it is typically used by the ITER community. Based on both reports/papers on SDDR in ITER and own calculations, major nuclides contributing to the SDDR coming from the activation of (i) relevant materials placed in ITER and (ii) candidate materials for the bioshield plug as L2N and barite concretes, are identified. Then, relevant production pathways are obtained. EAF XS quality for all pathways is checked following the procedure used for validating and testing the successive EAF versions. Also, possible improvements from using the TENDL-2015 library are assessed by comparing EAF and TENDL XS with available differential experimental data from EXFOR. Results point out that most of the activation XS related to materials currently placed in ITER are reliable, and only a few need improvement. Also, many of the XS related to both L2N and barite concretes need further work for validation.

  18. Reliability of activation cross sections for estimation of shutdown dose rate in the ITER port cell and port interspace

    Science.gov (United States)

    García, Raquel; García, Mauricio; Ogando, Francisco; Pampin, Raúl; Sanz, Javier

    2017-09-01

    This paper explores the quality of available activation cross section (XS) data for accurate Shutdown Dose Rate (SDDR) prediction in the ITER Port Cell and Port Interspace areas, where different maintenance activities are foreseen. For this purpose the EAF library (2007 and 2010 versions) has been investigated, as it is typically used by the ITER community. Based on both reports/papers on SDDR in ITER and own calculations, major nuclides contributing to the SDDR coming from the activation of i) relevant materials placed in ITER and ii) candidate materials for the bioshield plug as L2N and barite concretes, are identified. Then, relevant production pathways are obtained. EAF XS quality for all pathways is checked following the procedure used for validating and testing the successive EAF versions. Also, possible improvements from using the TENDL-2015 library are assessed by comparing EAF and TENDL XS with available differential experimental data from EXFOR. Results point out that most of the activation XS related to materials currently placed in ITER are reliable, and only a few need improvement. Also, many of the XS related to both L2N and barite concretes need further work for validation.

  19. A reliability measure of protein-protein interactions and a reliability measure-based search engine.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-02-01

Many methods developed for estimating the reliability of protein-protein interactions are based on the topology of protein-protein interaction networks. This paper describes a new reliability measure for protein-protein interactions, which does not rely on the topology of protein interaction networks, but expresses biological information on functional roles, sub-cellular localisations and protein classes as a scoring schema. The new measure is useful for filtering many spurious interactions, as well as for estimating the reliability of protein interaction data. In particular, the reliability measure can be used to search protein-protein interactions with the desired reliability in databases. The reliability-based search engine is available at http://yeast.hpid.org. We believe this is the first publicly available search engine for interacting proteins. The search engine and the reliability measure of protein interactions should provide useful information for determining proteins to focus on.

  20. Reliable and Damage-Free Estimation of Resistivity of ZnO Thin Films for Photovoltaic Applications Using Photoluminescence Technique

    Directory of Open Access Journals (Sweden)

    N. Poornima

    2013-01-01

Full Text Available This work projects photoluminescence (PL) as an alternative technique to estimate the order of resistivity of zinc oxide (ZnO) thin films. ZnO thin films, deposited using chemical spray pyrolysis (CSP) by varying deposition parameters such as solvent, spray rate and pH of the precursor, have been used for this study. Variation in the deposition conditions has a strong impact on the luminescence properties as well as on resistivity. Two emissions could be recorded for all samples, the near band edge emission (NBE) at 380 nm and the deep level emission (DLE) at ~500 nm, which are competing in nature. It is observed that the ratio of the DLE to NBE intensities (DLE/NBE) can be reduced by controlling oxygen incorporation in the sample. Resistivity measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variation of DLE/NBE and of resistivity for samples prepared under different deposition conditions is similar in nature, with DLE/NBE always lower than the resistivity by an order of magnitude for all samples. Thus, from PL measurements alone, the order of resistivity of the samples can be estimated.
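The reported one-order-of-magnitude offset between the intensity ratio and resistivity suggests a simple order-of-magnitude estimator. The sketch below is purely illustrative; the intensity values and the implied resistivity unit are assumptions, not data from the paper:

```python
import math

def resistivity_order(i_dle, i_nbe):
    """Order-of-magnitude estimate of log10(resistivity) from the DLE/NBE
    photoluminescence intensity ratio, assuming (as the study reports)
    that the ratio trails the resistivity by about one order of magnitude."""
    return math.log10(i_dle / i_nbe) + 1.0

# Hypothetical peak intensities: DLE/NBE = 10 implies resistivity of order 10**2.
order = resistivity_order(1000.0, 100.0)
```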

  1. Estimation technique of corrective effects for forecasting of reliability of the designed and operated objects of the generating systems

    Science.gov (United States)

    Truhanov, V. N.; Sultanov, M. M.

    2017-11-01

This article analyses statistical material on the failures and malfunctions that affect the operability of heat-power installations, and presents a mathematical model of the change in the turbine's output characteristics as a function of the number of failures revealed in service. The model is based on methods of mathematical statistics, probability theory and matrix calculus. Its novelty is that it predicts the change of the output characteristic over time, with the control actions represented in explicit form. The Weibull distribution is adopted as the desired dynamics of the output characteristic (the reliability function), since it is universal: for particular parameter values it reduces to other distributions (e.g., exponential, normal). Choosing the desired control law makes it possible to determine the necessary control parameters from the accumulated change of the output characteristic; the output characteristic can be changed both through the rate of change of the control parameters and through their acceleration. The article sets out in detail a technique for evaluating the pseudo-inverse matrix by the least-squares method using standard Microsoft Excel functions, considers how to find the control actions under constraints on both the output characteristic and the control parameters, and states the order and sequence of finding the control parameters. A concrete example of finding the control actions in the course of long-term turbine operation is given.
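The Weibull reliability law referred to above can be written as R(t) = exp(-(t/η)^β). A minimal sketch showing its universality, i.e. that β = 1 recovers the exponential distribution (parameter values below are arbitrary examples):

```python
import math

def weibull_reliability(t, eta, beta):
    """Weibull reliability (survival) function R(t) = exp(-(t/eta)**beta).
    beta = 1 gives the exponential case; beta > 1 models wear-out failures."""
    return math.exp(-((t / eta) ** beta))

# With beta = 1 this coincides with the exponential law exp(-t/eta):
r = weibull_reliability(5.0, 10.0, 1.0)  # equals math.exp(-0.5)
```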

  2. Opportunities for measuring wheelchair kinematics in match settings; reliability of a three inertial sensor configuration

    NARCIS (Netherlands)

    van der Slikke, R.M.A.; Berger, M.A.; Bregman, D.J.J.; Lagerberg, A.H.; Veeger, H.E.J.

    2015-01-01

    Knowledge of wheelchair kinematics during a match is prerequisite for performance improvement in wheelchair basketball. Unfortunately, no measurement system providing key kinematic outcomes proved to be reliable in competition. In this study, the reliability of estimated wheelchair kinematics based

  3. Increasing capture efficiency of pallid sturgeon Scaphirhynchus albus (Forbes and Richardson, 1905) and the reliability of catch rate estimates

    Science.gov (United States)

    DeVries, R. J.; Hann, D. A.; Schramm, H.L.

    2015-01-01

    This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and in deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effect of water temperature which varies widely over time, as well as water depth indicate that any efforts to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
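The reported model can be turned into a capture-probability function, assuming Y is the linear predictor on the logit scale (the usual convention for logistic regression; the abstract does not state the link explicitly):

```python
import math

def capture_probability(temp_c, depth_m):
    """Capture probability from the reported logistic model
    Y = -1.75 - 0.06*T + 0.10*D, passed through the logistic function
    (assumed logit link)."""
    y = -1.75 - 0.06 * temp_c + 0.10 * depth_m
    return 1.0 / (1.0 + math.exp(-y))

# Colder, deeper water should raise capture probability, as the paper predicts:
cold_deep = capture_probability(10.0, 15.0)
warm_shallow = capture_probability(25.0, 5.0)
```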

  4. Effects on the estimated cause-specific mortality fraction of providing physician reviewers with different formats of verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Chow Clara

    2011-08-01

    a cause of death did not substantively influence the pattern of mortality estimated. Substantially abbreviated and simplified verbal autopsy questionnaires might provide robust information about high-level mortality patterns.

  5. Evaluation of Landsat-Based METRIC Modeling to Provide High-Spatial Resolution Evapotranspiration Estimates for Amazonian Forests

    Directory of Open Access Journals (Sweden)

    Izaya Numata

    2017-01-01

Full Text Available While forest evapotranspiration (ET) dynamics in the Amazon have been studied both as point estimates using flux towers and as spatially coarse surfaces using satellite data, higher resolution (e.g., 30 m) ET estimates are necessary to address finer spatial variability associated with forest biophysical characteristics and their changes by natural and human impacts. The objective of this study is to evaluate the potential of the Landsat-based METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration) model to estimate high-resolution (30 m) forest ET by comparing to flux tower ET (FT ET) data collected over seasonally dry tropical forests in Rondônia, the southwestern region of the Amazon. Analyses were conducted at daily, monthly and seasonal scales for the dry seasons (June–September for Rondônia) of 2000–2002. Overall daily ET comparison between FT ET and METRIC ET across the study site showed r2 = 0.67 with RMSE = 0.81 mm. For the seasonal ET comparison, METRIC-derived ET estimates showed an agreement with FT ET measurements during the dry season of r2 > 0.70 and %MAE < 15%. We also discuss some challenges and potential applications of METRIC for Amazonian forests.
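The agreement statistics used above (r2 and RMSE between tower and METRIC ET) are straightforward to compute. The arrays below are illustrative, not the study's data, and this sketch uses the coefficient-of-determination form of r2, whereas the paper's value may come from a regression fit:

```python
def rmse(obs, pred):
    """Root-mean-square error between observed and predicted values."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

ft_et = [3.1, 3.4, 2.9, 3.8]      # hypothetical daily tower ET (mm)
metric_et = [3.0, 3.6, 2.7, 3.9]  # hypothetical METRIC ET (mm)
```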

  6. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  7. Developing Reliable Life Support for Mars

    Science.gov (United States)

    Jones, Harry W.

    2017-01-01

A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
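The sparing logic above (component failure rate determines how many spares achieve a given reliability) is commonly modeled with a Poisson failure process. The sketch below is one standard formulation, not necessarily the model the author uses; the rate and mission time are placeholder values:

```python
import math

def prob_enough_spares(failure_rate, mission_time, spares):
    """P(number of failures during the mission <= spares), assuming failures
    follow a Poisson process with the given constant rate (a common sparing
    model). Underestimating failure_rate inflates this probability."""
    lam = failure_rate * mission_time  # expected number of failures
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(spares + 1))

# Hypothetical component: 0.001 failures/hour over a 1000-hour mission.
p_three_spares = prob_enough_spares(0.001, 1000.0, 3)  # about 0.98
```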

  8. A computerised sampling strategy for therapeutic drug monitoring of lithium provides precise estimates and significantly reduces dose-finding time

    DEFF Research Database (Denmark)

    Høgberg, Lotte Christine Groth; Jürgens, Gesche; Zederkof, Vivian Wederking

    2012-01-01

citrate. Bayesian approach was performed in the intervention groups, and estimation of lithium steady-state trough concentration was obtained from a non-steady-state blood sample, collected about 12 hr after the first lithium study dose. The estimate was compared with the actually measured steady-state concentration. In the control group, lithium monitoring was traditionally performed as steady-state blood sampling. Predicted and measured lithium concentrations were comparable. The desired lithium dose was reached significantly faster in the intervention group compared to control; 2.47 ± 2.22 days versus 9.96 ± 11.24 days (mean ± S.D.) (p = 0.0003). The Bayesian approach was an advantage for the clinicians as a fast and safe aid to obtain the optimal lithium treatment dose.

  9. A method making fewer assumptions gave the most reliable estimates of exposure-outcome associations in stratified case-cohort studies.

    Science.gov (United States)

    Jones, Edmund; Sweeting, Michael J; Sharp, Stephen J; Thompson, Simon G

    2015-12-01

    A case-cohort study is an efficient epidemiological study design for estimating exposure-outcome associations. When sampling of the subcohort is stratified, several methods of analysis are possible, but it is unclear how they compare. Our objective was to compare five analysis methods using Cox regression for this type of data, ranging from a crude model that ignores the stratification to a flexible one that allows nonproportional hazards and varying covariate effects across the strata. We applied the five methods to estimate the association between physical activity and incident type 2 diabetes using data from a stratified case-cohort study and also used artificial data sets to exemplify circumstances in which they can give different results. In the diabetes study, all methods except the method that ignores the stratification gave similar results for the hazard ratio associated with physical activity. In the artificial data sets, the more flexible methods were shown to be necessary when certain assumptions of the simpler models failed. The most flexible method gave reliable results for all the artificial data sets. The most flexible method is computationally straightforward, and appropriate whether or not key assumptions made by the simpler models are valid. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Secondary dentine as a sole parameter for age estimation: Comparison and reliability of qualitative and quantitative methods among North Western adult Indians

    Directory of Open Access Journals (Sweden)

    Jasbir Arora

    2016-06-01

Full Text Available The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0–3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. Absolute mean error of age was calculated for both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect when estimating age by Gustafson's method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.

  11. Leveraging Cloud Technology to Provide a Responsive, Reliable and Scalable Backend for the Virtual Ice Sheet Laboratory Using the Ice Sheet System Model and Amazon's Elastic Compute Cloud

    Science.gov (United States)

    Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.

    2015-12-01

The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.

  12. Power system state estimation

    CERN Document Server

    Ahmad, Mukhtar

    2012-01-01

State estimation is one of the most important functions in power system operation and control. This area is concerned with the overall monitoring, control, and contingency evaluation of power systems. It is mainly aimed at providing a reliable estimate of system voltages. State estimator information flows to control centers, where critical decisions are made concerning power system design and operations. This valuable resource provides thorough coverage of this area, helping professionals overcome challenges involving system quality, reliability, security, stability, and economy. Engineers are
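At its simplest, state estimation reduces redundant, noisy meter readings to a single best estimate by weighted least squares. A toy sketch for one state variable (a bus voltage) measured by several meters; the readings and variances are illustrative assumptions:

```python
def wls_estimate(measurements):
    """Weighted least-squares estimate of a single state from redundant
    readings: each (value, variance) pair is weighted by 1/variance,
    so more accurate meters pull the estimate harder."""
    num = sum(z / var for z, var in measurements)
    den = sum(1.0 / var for _, var in measurements)
    return num / den

# Two meters reading the same bus voltage (p.u.), the first more accurate:
v_hat = wls_estimate([(1.00, 0.01), (1.10, 0.04)])  # lands nearer 1.00
```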

  13. Reliability of job-title based physical work exposures for the upper extremity: comparison to self-reported and observed exposure estimates.

    Science.gov (United States)

    Gardner, Bethany T; Lombardi, David A; Dale, Ann Marie; Franzblau, Alfred; Evanoff, Bradley A

    2010-08-01

    To evaluate the agreement between job-title based estimates for upper extremity physical work exposures and exposure estimates from work observation and worker self-report. Self-reported exposure questionnaires were completed by 972 workers, and exposure estimates based on worksite observation were completed for a subset of 396 workers. Job-title based estimates were obtained from O*NET, an American database of job demands. Agreement between self-reported, observed and job-title based physical work exposures was assessed using Spearman correlations and intraclass correlation coefficients. Job-title based exposure estimates from O*NET, self-reported and observer-rated exposures showed moderate to good levels of agreement for some upper extremity exposures, including lifting, forceful grip, use of vibrating tools and wrist bending. Job-title based physical work exposure variables may provide useful surrogate measures of upper extremity exposure data in the absence of other individual level data such as observed or self-reported exposure. Further validation of these data is necessary to determine the utility of the O*NET databases in future epidemiological studies.
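Spearman agreement, as used in this study, is simply Pearson correlation computed on ranks. A minimal tie-free sketch (a real analysis would use scipy.stats.spearmanr, which also handles ties and p-values); the data below are placeholders:

```python
def ranks(xs):
    """1-based ranks of xs; no tie handling for this sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```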

  14. Social Media and Language Processing: How Facebook and Twitter Provide the Best Frequency Estimates for Studying Word Recognition.

    Science.gov (United States)

    Herdağdelen, Amaç; Marelli, Marco

    2017-05-01

    Corpus-based word frequencies are one of the most important predictors in language processing tasks. Frequencies based on conversational corpora (such as movie subtitles) are shown to better capture the variance in lexical decision tasks compared to traditional corpora. In this study, we show that frequencies computed from social media are currently the best frequency-based estimators of lexical decision reaction times (up to 3.6% increase in explained variance). The results are robust (observed for Twitter- and Facebook-based frequencies on American English and British English datasets) and are still substantial when we control for corpus size. © 2016 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  15. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    High reliability is an important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.

  16. Estimation of the economic value of the ecosystem services provided by the Blue Nile Basin in Ethiopia

    NARCIS (Netherlands)

    Tesfaye, A.; Wolanios, N.; Brouwer, R.

    2016-01-01

    This paper aims to quantify and economically value the main ecosystem services provided by the Blue Nile basin in Ethiopia. It is the first study in its kind to do so in a consistent and comprehensive manner using the same valuation approach. Water flows are linked to corresponding economic market

  17. Gearbox Reliability Collaborative Update (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, S.; Keller, J.; Glinsky, C.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  18. Visual Analogue Scale for Anxiety and Amsterdam Preoperative Anxiety Scale Provide a Simple and Reliable Measurement of Preoperative Anxiety in Patients Undergoing Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Joaquín Hernández-Palazón

    2015-03-01

    Full Text Available Background: Anxiety is an emotional state characterized by apprehension and fear resulting from anticipation of a threatening event. Objectives: The present study aimed to analyze the incidence and level of preoperative anxiety in patients scheduled for cardiac surgery by using a Visual Analogue Scale for Anxiety (VAS-A) and the Amsterdam Preoperative Anxiety and Information Scale (APAIS), and to identify the influencing clinical factors. Patients and Methods: This prospective, longitudinal study was performed on 300 cardiac surgery patients in a single university hospital. The patients were assessed regarding their preoperative anxiety level using VAS-A, APAIS, and a set of specific anxiety-related questions. Their demographic features as well as their anesthetic and surgical characteristics (ASA physical status, EuroSCORE, preoperative Length of Stay (LoS), and surgical history) were recorded, as well. Then, one-way ANOVA and t-test were applied along with odds ratio for risk assessment. Results: According to the results, 94% of the patients presented preoperative anxiety, with 37% developing high anxiety (VAS-A ≥ 7). Preoperative LoS > 2 days was the only significant risk factor for preoperative anxiety (odds ratio = 2.5, 95% CI 1.3 - 5.1, P = 0.009). Besides, a positive correlation was found between anxiety level (APAISa) and requirement of knowledge (APAISk). APAISa and APAISk scores were greater for surgery than for anesthesia. Moreover, the results showed that the most common anxieties resulted from the operation, waiting for surgery, not knowing what is happening, postoperative pain, awareness during anesthesia, and not awakening from anesthesia. Conclusions: APAIS and VAS-A provided a quantitative assessment of anxiety and a specific qualitative questionnaire for preoperative anxiety in cardiac surgery.
According to the results, preoperative LoS > 2 days and lack of information related to surgery were the risk factors for high anxiety levels.
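The odds-ratio risk assessment reported above can be reproduced from a 2x2 contingency table; the counts below are made up for illustration (chosen to yield OR = 2.5) and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: high anxiety vs. preoperative LoS > 2 days
or_, lo, hi = odds_ratio_ci(50, 40, 30, 60)
```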

  19. Estimating the effect of plant-provided food supplements on pest consumption by omnivorous predators: lessons from two coccinellid beetles.

    Science.gov (United States)

    Schuldiner-Harpaz, Tarryn; Coll, Moshe

    2017-05-01

    Plant-provided food supplements can influence biological pest control by omnivorous predators in two counteracting ways: they can (i) enhance predator populations, but (ii) reduce pest consumption by individual predators. Yet the majority of studies address only one of these aspects. Here, we first tested the influence of canola (Brassica napus L.) pollen supplements on the life history of two ladybeetle species: Hippodamia variegata (Goeze) and Coccinella septempunctata (L.). We then developed a theoretical model to simulate total pest consumption in the presence and absence of pollen supplements. Supplementing a prey diet with canola pollen increased H. variegata larval survival from 50 to 82%, and C. septempunctata female oviposition 1.6-fold. Model simulations revealed a greater benefit of pollen supplements when relying on C. septempunctata for pest suppression than on H. variegata. For these two predators, the tested pollen serves as an essential supplement to a diet of prey. However, the benefit of a mixed prey-pollen diet was not always sufficient to overcome the individual decrease in pest consumption. Taken together, our study highlights the importance of addressing both the positive and negative roles of plant-provided food supplements when considering the outcome for biological control efforts that rely on omnivorous predators. © 2016 Society of Chemical Industry.

  20. Reliability of digital reactor protection system based on extenics.

    Science.gov (United States)

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to evaluate the reliability of a digital RPS accurately. Methods based on probability estimation carry uncertainties, and can neither reflect the reliability status of the RPS dynamically nor support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital (safety-critical) RPS, by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of digital RPS systems.

  1. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
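The core idea of logistic quantile regression for bounded counts is to map the bounded response onto the real line with a logit transform, fit there, and back-transform predictions. A minimal sketch of that transform pair follows; the bounds (offset by 0.5 so the transform stays finite at the endpoints) and the data are illustrative, not the paper's.

```python
import numpy as np

def logit_bounded(y, lower, upper):
    """Map a response bounded in (lower, upper) onto the real line."""
    y = np.asarray(y, dtype=float)
    return np.log((y - lower) / (upper - y))

def inv_logit_bounded(z, lower, upper):
    """Back-transform model predictions to the original bounded scale."""
    z = np.asarray(z, dtype=float)
    return (lower + upper * np.exp(z)) / (1 + np.exp(z))

# Fledgling counts bounded below by 0 and above by a maximum of 4;
# half-unit offsets on the bounds keep the transform finite at the edges.
y = np.array([0, 1, 2, 3, 4])
z = logit_bounded(y, lower=-0.5, upper=4.5)
y_back = inv_logit_bounded(z, lower=-0.5, upper=4.5)
```

Quantile regression would then be fit to `z` (e.g., with `statsmodels` `QuantReg`), and fitted quantiles back-transformed with `inv_logit_bounded`.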

  2. Skinfold Prediction Equations Fail to Provide an Accurate Estimate of Body Composition in Elite Rugby Union Athletes of Caucasian and Polynesian Ethnicity.

    Science.gov (United States)

    Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J

    2018-01-01

    Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be utilized to provide estimates of absolute body composition using regression equations. This study aims to assess the ability of available skinfold equations to estimate body composition in elite rugby union athletes who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percent (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport and ethnicity-sensitive prediction equations were developed using forward selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity, whilst overestimating BF% at higher levels of adiposity, regardless of ethnicity. The novel equations created explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the created equations, cannot be supported to estimate absolute body composition. Until a population-specific equation is established that can be validated to precisely estimate body composition, it is advocated to use a proven method, such as DXA, when absolute measures of lean and fat mass are desired, and raw anthropometry data routinely to derive an estimate of body composition change.
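A much-simplified version of deriving a population-specific prediction equation, ordinary least squares of DXA body-fat percentage on a skinfold sum, can be sketched with synthetic data. The coefficients and the single-predictor form here are invented for illustration; the authors' actual equations came from forward-selection multiple regression on measured athletes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic athletes: sum of skinfolds (mm) and a made-up linear BF% relation
skinfold_sum = rng.uniform(40, 120, size=43)
bf_dxa = 5.0 + 0.12 * skinfold_sum + rng.normal(0, 1.0, size=43)

# Fit BF% = b0 + b1 * skinfold_sum by least squares
X = np.column_stack([np.ones_like(skinfold_sum), skinfold_sum])
coef, *_ = np.linalg.lstsq(X, bf_dxa, rcond=None)
b0, b1 = coef

# Explained variance of the fitted equation
pred = X @ coef
r2 = 1 - ((bf_dxa - pred) ** 2).sum() / ((bf_dxa - bf_dxa.mean()) ** 2).sum()
```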

  3. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  4. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Science.gov (United States)

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
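The G-theory reliability estimate for the simplest single-facet case (persons crossed with trials, relative decisions) reduces to a ratio of variance components from a two-way ANOVA. The sketch below is a minimal stand-in for what the ERA Toolbox computes, with illustrative ERP amplitudes; it does not reproduce the toolbox's multifaceted designs.

```python
import numpy as np

def g_coefficient(scores):
    """Generalizability coefficient for a persons x trials design
    (single facet, relative decisions): var_p / (var_p + var_res / n_t)."""
    y = np.asarray(scores, dtype=float)
    n_p, n_t = y.shape
    grand = y.mean()
    ss_p = n_t * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_t = n_p * ((y.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((y - grand) ** 2).sum() - ss_p - ss_t
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_t - 1))
    var_p = max((ms_p - ms_res) / n_t, 0.0)  # person variance component
    return var_p / (var_p + ms_res / n_t)

# Illustrative ERP amplitudes (microvolts): 4 participants x 3 trials
scores = [[5.0, 5.2, 4.9],
          [2.1, 2.0, 2.3],
          [7.4, 7.1, 7.6],
          [3.3, 3.5, 3.2]]
g = g_coefficient(scores)
```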

  5. Bayesian system reliability assessment under fuzzy environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, H.-C

    2004-03-01

    The Bayesian system reliability assessment under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed as fuzzy random variables with fuzzy prior distributions. The (conventional) Bayes estimation method will be used to create the fuzzy Bayes point estimator of system reliability by invoking the well-known theorem called 'Resolution Identity' in fuzzy sets theory. On the other hand, we also provide the computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. In order to achieve this purpose, we transform the original problem into a nonlinear programming problem. This nonlinear programming problem is then divided into four subproblems for the purpose of simplifying computation. Finally, the subproblems can be solved by using any commercial optimizers, e.g. GAMS or LINGO.

  6. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  7. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  8. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    National Research Council Canada - National Science Library

    Dlubac, Katherine; Knight, Rosemary; Song, Yi‐Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Key points: NMR logging provides reliable estimates of hydraulic conductivity in aquifers; equations are provided to predict hydraulic conductivity in unconsolidated materials; problems acquiring advanced geophysical...

  9. SU-E-T-598: Parametric Equation for Quick and Reliable Estimate of Stray Neutron Doses in Proton Therapy and Application for Intracranial Tumor Treatments

    Energy Technology Data Exchange (ETDEWEB)

    Bonfrate, A; Farah, J; Sayah, R; Clairand, I [Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-roses (France); De Marzi, L; Delacroix, S [Institut Curie Centre de Protontherapie d Orsay (CPO), Orsay (France); Herault, J [Centre Antoine Lacassagne (CAL) Cyclotron biomedical, Nice (France); Lee, C [National Cancer Institute, Rockville, MD (United States); Bolch, W [Univ Florida, Gainesville, FL (United States)

    2015-06-15

    Purpose: Development of a parametric equation suitable for daily use in routine clinical practice to provide estimates of stray neutron doses in proton therapy. Methods: Monte Carlo (MC) calculations using the UF-NCI 1-year-old phantom were exercised to determine the variation of stray neutron doses as a function of irradiation parameters while performing intracranial treatments. This was done by individually changing the proton beam energy, modulation width, collimator aperture and thickness, compensator thickness, and air gap size; their impacts on neutron doses were combined into a single equation. The variation of neutron doses with distance from the target volume was also included. A first step consisted of establishing the fitting coefficients using 221 learning data points (neutron absorbed doses obtained with MC simulations), while a second step consisted of validating the final equation. Results: The variations of stray neutron doses with irradiation parameters were fitted with linear, polynomial, and other models, while a power-law model was used to fit the variation of stray neutron doses with distance from the target volume. The parametric equation fitted the MC simulations well when establishing the fitting coefficients, as the discrepancies in the estimated neutron absorbed doses were within 10%. The discrepancy can reach ∼25% for the bladder, the farthest organ from the target volume. Finally, the validation showed results in compliance with MC calculations, since the discrepancies were also within 10% for head-and-neck and thoracic organs, while they can reach ∼25%, again for pelvic organs. Conclusion: The parametric equation presents promising results and will be validated for other target sites as well as other facilities, to move toward a universal method.
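The power-law distance dependence described above can be fitted by linear regression in log-log space. The coefficients and dose values below are synthetic stand-ins, not the paper's fitted parameters.

```python
import numpy as np

# Synthetic stray-neutron doses following D(d) = a * d**(-b)
a_true, b_true = 2.0e-2, 1.5                     # hypothetical coefficients
distance = np.array([5., 10., 20., 40., 80.])    # cm from target volume
dose = a_true * distance ** (-b_true)

# Power-law fit: log D = log a - b log d  ->  linear in log-log space
slope, intercept = np.polyfit(np.log(distance), np.log(dose), 1)
a_fit, b_fit = np.exp(intercept), -slope
```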

  10. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  11. Robust Cognitive-GN BER Estimator for Dynamic WDM Networks

    DEFF Research Database (Denmark)

    Borkowski, Robert; Caballero Jambrina, Antonio; Arlunno, Valeria

    2014-01-01

    We introduce and experimentally demonstrate a simple yet reliable and fast tool for estimating BER of lightpaths over uncompensated links. The model provides accurate estimates for capacity upgrade scenarios when modulation format order is increased.

  12. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in Information and Communication Technology context. In particular, in the first Section, the definition of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, puts in evidence the reliability concept from the experimental point of view. In ICT context, the failure rate for a given system can be

  13. Measurement and Reliability of Response Inhibition

    Directory of Open Access Journals (Sweden)

    Eliza eCongdon

    2012-02-01

    Full Text Available Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a motor response that has already been initiated (reflected as stop-signal reaction time, SSRT) is a quantitative and heritable measure of interindividual variation in brain function. In order to examine the reliability of this measure, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using intra-class correlation). Our results suggest that an approach that uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings support the reliability of SSRT as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype.
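The article compares several SSRT calculation methods; one standard variant is the integration method, sketched below with toy numbers (function name and data are illustrative).

```python
import numpy as np

def ssrt_integration(go_rts, ssds, responded_on_stop):
    """Integration method: the sorted go-RT distribution is read at the
    quantile p(respond | stop signal), and the mean SSD is subtracted."""
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    p_respond = np.mean(responded_on_stop)
    idx = int(np.ceil(p_respond * len(go_rts))) - 1
    nth_rt = go_rts[max(idx, 0)]
    return nth_rt - np.mean(ssds)

go_rts = [380, 400, 410, 420, 430, 450, 460, 470, 480, 500]  # ms, go trials
ssds = [180, 200, 220, 200]                                  # ms, stop-signal delays
responded = [1, 0, 1, 0, 1, 0, 0, 1]                         # stop-trial outcomes
ssrt = ssrt_integration(go_rts, ssds, responded)
```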

  14. Stochastic Modeling of Long-Term and Extreme Value Estimation of Wind and Sea Conditions for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    for the significant wave height, the mean zero-crossing wave period and the wind speed for long-term and extreme estimations. The long-term estimation focuses on annual statistical distributions, the inter-annual variation of distribution parameters and the statistical uncertainty due to limited amount of data...

  15. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, B; Brady, S; Kaufman, R [St Jude Children' s Research Hospital, Memphis, TN (United States); Mirro, A [Washington University, St. Louis, MO (United States)

    2014-06-15

    Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 mon to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1−0.4) for both the chest and abdominopelvic regions, respectively. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
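The methodology above reduces to organ dose ≈ correlation factor × patient SSDE. A sketch follows; the dictionary values are hypothetical stand-ins (the paper tabulates factors per organ and weight subcategory, near unity for organs fully in the scan volume and ~0.3 for organs extending beyond it).

```python
# Hypothetical chest-region correlation factors (organ dose / SSDE)
CHEST_FACTORS = {
    "lungs": 1.1,
    "heart": 1.2,
    "thyroid": 0.9,
    "bone_marrow": 0.3,  # extends beyond scan volume: poor correlation
}

def organ_dose_from_ssde(ssde_mgy, organ, factors=CHEST_FACTORS):
    """Estimate organ dose (mGy) as correlation factor * patient SSDE."""
    return factors[organ] * ssde_mgy

# Example: chest exam with a patient SSDE of 5 mGy
dose = organ_dose_from_ssde(5.0, "lungs")
```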

  16. Skin temperature over the carotid artery provides an accurate noninvasive estimation of core temperature in infants and young children during general anesthesia.

    Science.gov (United States)

    Jay, Ollie; Molgat-Seon, Yannick; Chou, Shirley; Murto, Kimmo

    2013-12-01

    The accurate measurement of core temperature is an essential aspect of intraoperative management in children. Invasive measurement sites are accurate but carry some health risks and cannot be used in certain patients. An accurate form of noninvasive thermometry is therefore needed. Our aim was to develop, and subsequently validate, separate models for estimating core temperature using different skin temperatures with an individualized correction factor. Forty-eight pediatric patients (0-36 months) undergoing elective surgery were separated into a modeling group (MG, n = 28) and validation group (VG, n = 20). Skin temperature was measured over the carotid artery (Tsk_carotid), upper abdomen (Tsk_abd), and axilla (Tsk_axilla), while nasopharyngeal temperature (Tnaso) was measured as a reference. In the MG, derived models for estimating Tnaso were: Tsk_carotid + 0.52; Tsk_abd + (0.076[body mass] + 0.02); and Tsk_axilla + (0.081[body mass] - 0.66). After adjusting raw Tsk_carotid, Tsk_abd, and Tsk_axilla values in the independent VG using these models, the mean bias (Predicted Tnaso - Actual Tnaso [with 95% confidence intervals]) was +0.03[+0.53, -0.50]°C, -0.05[+1.02, -1.07]°C, and -0.06[+1.21, -1.28]°C, respectively. The percentage of values within ±0.5°C of Tnaso was 93.2%, 75.4%, and 66.1% for Tsk_carotid, Tsk_abd, and Tsk_axilla, respectively. Sensitivity and specificity for detecting hypothermia were also assessed. Skin temperature over the carotid artery, with a simple correction factor of +0.52°C, provides a viable noninvasive estimate of Tnaso in young children during elective surgery with a general anesthetic. © 2013 John Wiley & Sons Ltd.
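The three correction models quoted in the abstract translate directly into code (body mass in kg, temperatures in °C, coefficients exactly as given above; function names are ours).

```python
def core_from_carotid(t_sk):
    """Estimated nasopharyngeal temperature from carotid skin temperature."""
    return t_sk + 0.52

def core_from_abdomen(t_sk, body_mass_kg):
    """Estimated Tnaso from upper-abdomen skin temperature and body mass."""
    return t_sk + (0.076 * body_mass_kg + 0.02)

def core_from_axilla(t_sk, body_mass_kg):
    """Estimated Tnaso from axillary skin temperature and body mass."""
    return t_sk + (0.081 * body_mass_kg - 0.66)

# Example: carotid skin temperature of 36.1 degC
t_est = core_from_carotid(36.1)
```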

  17. Parameter Estimation of a Reliability Model of Demand-Caused and Standby-Related Failures of Safety Components Exposed to Degradation by Demand Stress and Ageing That Undergo Imperfect Maintenance

    Directory of Open Access Journals (Sweden)

    S. Martorell

    2017-01-01

    Full Text Available One can find many reliability, availability, and maintainability (RAM) models proposed in the literature. However, such models become more complex day after day, as there is an attempt to capture equipment performance in a more realistic way, such as explicitly addressing the effect of component ageing and degradation, surveillance activities, and corrective and preventive maintenance policies. Then, there is a need to fit the best model to real data by estimating the model parameters using an appropriate tool. This problem is not easy to solve in some cases, since the number of parameters is large and the available data are scarce. This paper considers two main failure models commonly adopted to represent the probability of failure on demand (PFD) of safety equipment: (1) demand-caused and (2) standby-related failures. It proposes a maximum likelihood estimation (MLE) approach for parameter estimation of a reliability model of demand-caused and standby-related failures of safety components exposed to degradation by demand stress and ageing that undergo imperfect maintenance. The case study considers real failure, test, and maintenance data for a typical motor-operated valve in a nuclear power plant. The results of the parameter estimation and the adoption of the best model are discussed.
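A much-simplified version of the MLE step can be sketched with scipy: here the PFD is modeled as a constant per-demand term plus a linear standby-ageing term, and the two parameters are fitted to simulated demand outcomes. The model form, data, and parameter values are illustrative, far simpler than the paper's imperfect-maintenance model.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: standby time (h) before each demand, and failure indicator
t = np.array([100., 500., 1000., 2000., 4000., 8000.] * 200)
rng = np.random.default_rng(1)
rho_true, lam_true = 0.01, 5e-6          # hypothetical "true" parameters
y = rng.random(t.size) < (rho_true + lam_true * t)

def nll(params):
    """Negative log-likelihood of Bernoulli demand outcomes under
    PFD(t) = rho + lambda * t."""
    rho, lam = params
    p = np.clip(rho + lam * t, 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (~y) * np.log(1 - p))

res = minimize(nll, x0=[0.05, 1e-5],
               bounds=[(1e-9, 0.5), (0.0, 1e-3)], method="L-BFGS-B")
rho_hat, lam_hat = res.x
```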

  18. Will circumcision provide even more protection from HIV to women and men? New estimates of the population impact of circumcision interventions.

    Science.gov (United States)

    Hallett, Timothy B; Alsallaq, Ramzi A; Baeten, Jared M; Weiss, Helen; Celum, Connie; Gray, Ron; Abu-Raddad, Laith

    2011-03-01

    Mathematical modelling has indicated that expansion of male circumcision services in high HIV prevalence settings can substantially reduce population-level HIV transmission. However, these projections need revision to incorporate new data on the effect of male circumcision on the risk of acquiring and transmitting HIV. Recent data on the effect of male circumcision during wound healing and on the risk of HIV transmission to women were synthesised based on four trials of circumcision among adults and new observational data of HIV transmission rates in stable partnerships from men circumcised at younger ages. New estimates were generated for the impact of circumcision interventions in two mathematical models, representing the HIV epidemics in Zimbabwe and Kisumu, Kenya. The models did not capture the interaction between circumcision, HIV, and other sexually transmitted infections. An increase in the risk of HIV acquisition and transmission during wound healing is unlikely to have a major impact on circumcision interventions. However, it was estimated that circumcision confers a 46% reduction in the rate of male-to-female HIV transmission. If this reduction begins 2 years after the procedure, the impact of circumcision is substantially enhanced and accelerated compared with previous projections that assumed no such effect: the infections averted by the intervention overall increase by 40%, and the number of infections averted among women doubles. Communities, and especially women, may benefit much more from circumcision interventions than had previously been predicted, and these results provide an even greater imperative to increase scale-up of safe male circumcision services.

  19. Reliability in automotive ethernet networks

    DEFF Research Database (Denmark)

    Soares, Fabio L.; Campelo, Divanilson R.; Yan, Ying

    2015-01-01

    This paper provides an overview of in-vehicle communication networks and addresses the challenges of providing reliability in automotive Ethernet in particular.

  20. On the reliability of a simple method for scoring phenotypes to estimate heritability: A case study with pupal color in Heliconius erato phyllis, Fabricius 1775 (Lepidoptera, Nymphalidae)

    Directory of Open Access Journals (Sweden)

    Adriano Andrejew Ferreira

    2009-01-01

    Full Text Available In this paper, two methods for assessing the degree of melanization of pupal exuviae from the butterfly Heliconius erato phyllis, Fabricius 1775 (Lepidoptera, Nymphalidae, Heliconiini) are compared. In the first method, which was qualitative, the exuviae were classified by scoring the degree of melanization, whereas in the second method, which was quantitative, the exuviae were classified by optical density followed by analysis with appropriate software. The heritability (h²) of the degree of melanization was estimated by regression and analysis of variance. The estimates of h² were similar with both methods, indicating that the qualitative method could be particularly suitable for field work. The low estimates obtained for heritability may have resulted from the small sample size (n = 7-18 broods, including the parents) or from the allocation-priority hypothesis, in which pupal color would be a lower-priority trait compared to morphological traits and adequate larval development.

  1. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector–emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high and low-voltage side of a half-bridge IGBT separately in every fundamental...

  2. Fatigue damage estimation in non-linear systems using a combination of Monte Carlo simulation and the First Order Reliability Method

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2015-01-01

    For non-linear systems the estimation of fatigue damage under stochastic loadings can be rather time-consuming. Usually Monte Carlo simulation (MCS) is applied, but the coefficient-of-variation (COV) can be large if only a small set of simulations can be done due to otherwise excessive CPU time...
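    The COV issue described above can be illustrated with a minimal crude Monte Carlo sketch (toy limit state and thresholds of my choosing): for an estimated failure probability p from n samples, COV ≈ sqrt((1 - p)/(n·p)), so rare failure events require very many simulations for a tight estimate.

```python
import math
import random

# Crude Monte Carlo estimate of a small failure probability (toy limit
# state: a standard normal "load" exceeding a threshold of 2.5), plus the
# coefficient of variation of the estimator, COV = sqrt((1 - p) / (n * p)).
random.seed(1)

n = 200_000
failures = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > 2.5)
p_hat = failures / n
cov = math.sqrt((1.0 - p_hat) / (n * p_hat))

print(f"p_hat = {p_hat:.2e}, COV of estimate = {cov:.1%}")
```

Halving the COV requires roughly four times as many samples, which is why variance-reduction methods such as FORM-based approaches are attractive for rare events.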

  3. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    components. Thus, models of reliability should be developed and applied in order to quantify the residual life of the components. Damage models based on physics of failure combined with stochastic models describing the uncertain parameters are imperative for development of cost-optimal decision tools for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied. Further, reliability modeling of load sharing systems is considered and a theoretical model is proposed based on sequential order statistics and structural systems reliability methods. Procedures for reliability estimation are detailed and presented in a collection of research papers.

  4. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...

  5. Does multi-slice CT provide reliable attenuation values if measured with low slice thickness and low tube current? Results of a phantom study; Sind mit der Mehrschicht-Computertomografie Dichtemessungen auch bei geringer Schichtdicke und niedrigem Roehrenstrom verlaesslich?

    Energy Technology Data Exchange (ETDEWEB)

    Kirchhof, K.; Bohndorf, K. [Klinik fuer Diagnostische Radiologie und Neuroradiologie, Klinikum Augsburg (Germany); Welzel, T. [Abt. klinische Radiologie, Radiologische Universitaetsklinik Heidelberg (Germany); Wagner, T. [Pathologisches Inst., Klinikum Augsburg (Germany); Behr, W. [Inst. fuer Laboratoriumsmedizin, Mikrobiologie und Umwelthygiene, Klinikum Augsburg (Germany)

    2008-01-15

    Purpose: to test whether CT with low slice thickness and low tube current provides reliable attenuation measurements. Materials and methods: using multi-slice CT and a phantom, we measured the attenuation values of thrombi with different proportions of erythrocytes, using a slice thickness of 1.25 mm, 2.5 mm, and 5 mm with tube currents of 200 mA, 300 mA, and 400 mA and a slice thickness of 0.625 mm with tube currents of 150 mA, 175 mA, and 200 mA. Differences in attenuation values and pixel noise between the three thrombi for tube current and slice thickness were statistically analyzed. Results: the attenuation values of all thrombi increased (p < 0.05) when the slice thickness decreased using a tube current of 200 mA or when the tube current decreased using a slice thickness of 1.25 mm. With higher tube currents and thicker slices, the CT values depended on the type of thrombus and the slice thickness. In slices with a thickness of 0.625 mm, the CT values decreased with the tube current in the mixed thrombus with a low proportion of erythrocytes and in the red thrombus (p < 0.05). The maximal difference in mean attenuation values was 4.3 HU with a slice thickness of 0.625 mm and 2.2 HU with a slice thickness of 1.25 mm. The pixel noise increased as the slice thickness decreased (p < 0.05) with the exception of the red thrombus, if reduced to 0.625 mm. The pixel noise also increased as the tube current decreased (p < 0.05) except in mixed thrombi measured with 0.625 mm. The maximal difference in mean standard deviation was 1.8 HU with a slice thickness of 1.25 mm. (orig.)

  6. Reliability of buildings with internal network

    Directory of Open Access Journals (Sweden)

    Dolganov Andrey

    2016-01-01

    The reliability of a building with an internal network located in the Western Administrative District of Moscow was examined and estimated over the course of its service. Here, reliability means the probability of no failure during the initial period.

  7. Different reliability assessment approaches for wave energy converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kramer, Morten Mejlhede; Sørensen, John Dalsgaard

    2015-01-01

    Reliability assessments are of importance for wave energy converters (WECs) due to the fact that accessibility might be limited in case of failure and maintenance. These failure rates can be adapted by reliability considerations. There are two different approaches to how reliability can be estimated: the so-called classical reliability theory and the probabilistic reliability theory. The classical reliability theory is often used for failure rate estimations of mechanical and electrical components, whereas the probabilistic reliability theory is …
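    The two approaches named in the abstract can be contrasted with a minimal sketch (illustrative parameters, not the WEC models of the paper): a classical constant-failure-rate survival estimate versus a probabilistic limit-state estimate computed by Monte Carlo.

```python
import math
import random

# Two ways to estimate reliability (illustrative parameters only):
# 1) classical theory: constant failure rate, survival R(t) = exp(-lambda*t)
# 2) probabilistic (structural) theory: P_f = P(resistance - load < 0),
#    here estimated by Monte Carlo with normally distributed R and S.
random.seed(7)

# classical: a component with an assumed 0.05 failures/year, over 10 years
lam, t = 0.05, 10.0
survival = math.exp(-lam * t)

# probabilistic: assumed normal resistance (capacity) and load (demand)
n = 100_000
fails = sum(
    1 for _ in range(n)
    if random.gauss(10.0, 1.5) - random.gauss(6.0, 1.0) < 0.0
)
p_f = fails / n

print(f"classical survival over {t:.0f} years: {survival:.3f}")
print(f"structural failure probability: {p_f:.4f}")
```

The classical route needs only a failure-rate figure, while the probabilistic route needs distributions for capacity and demand, which is why the former suits electrical components and the latter structural ones.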

  8. Technical information report: Plasma melter operation, reliability, and maintenance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, D.W. [ed.

    1995-03-14

    This document provides a technical report of operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided that minimizes maintenance and downtime and includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and a basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  9. On the Reliability of Optimization Results for Trigeneration Systems in Buildings, in the Presence of Price Uncertainties and Erroneous Load Estimation

    Directory of Open Access Journals (Sweden)

    Antonio Piacentino

    2016-12-01

    Cogeneration and trigeneration plants are widely recognized as promising technologies for increasing energy efficiency in buildings. However, their overall potential is scarcely exploited, due to the difficulties in achieving economic viability and the risk of investment related to uncertainties in future energy loads and prices. Several stochastic optimization models have been proposed in the literature to account for uncertainties, but these instruments share a common reliance on user-defined probability functions for each stochastic parameter. Since such functions are hard to predict, this paper analyzes the influence of erroneous estimation of the uncertain energy loads and prices on the optimal plant design and operation. With reference to a hotel building, a number of realistic scenarios are developed, exploring the most frequent errors occurring in the estimation of energy loads and prices. Then, profit-oriented optimizations are performed for the examined scenarios by means of a deterministic mixed integer linear programming algorithm. From a comparison of the achieved results, it emerges that: (i) the plant profitability is prevalently influenced by the average "spark spread" (i.e., the ratio between electricity and fuel prices) and, secondarily, by the shape of the daily price profiles; (ii) the "optimal sizes" of the main components are scarcely influenced by the daily load profiles, while they are more closely related to the average "power to heat" and "power to cooling" ratios of the building.

  10. Reliability Analysis of Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2008-01-01

    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states... the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according...

  11. Reliability of fat-free mass estimates derived from total-body electrical conductivity measurements as influenced by changes in extracellular fluid volume.

    Science.gov (United States)

    Cochran, W J; Fiorotto, M L; Sheng, H P; Klish, W J

    1989-01-01

    Our study addresses the concern that the relationship between total-body electrical conductivity (TOBEC) measurements and the fat-free mass (FFM) or total-body water (TBW) of an individual is altered if significant fluid and electrolyte changes occur. Body composition and TOBEC measurements were obtained from 11 healthy miniature piglets before and after an intraperitoneal injection of physiological saline. The procedure expanded the extracellular fluid (ECF) volume by 11.8-34.1%, which represented an average increase in TBW of 7.9%. The linear regressions that related the preinjection and postinjection estimates of TBW or FFM to the corresponding transformed TOBEC signals were the same. Thus, the prediction equations derived for the untreated piglets accurately predicted both TBW and FFM in the piglets whose volume was expanded. These data suggest that prediction equations derived from healthy subjects should be equally valid for subjects with altered fluid status.

  12. LiST modelling with monitoring data to estimate impact on child mortality of an ORS and zinc programme with public sector providers in Bihar, India.

    Science.gov (United States)

    Ayyanat, Jayachandran A; Harbour, Catherine; Kumar, Sanjeev; Singh, Manjula

    2018-01-05

    Many interventions have attempted to increase vulnerable and remote populations' access to ORS and zinc to reduce child mortality from diarrhoea. However, the impact of these interventions is difficult to measure. From 2010 to 2015, the Micronutrient Initiative (MI) worked with the public sector in Bihar, India to enable community health workers to treat and report uncomplicated child diarrhoea with ORS and zinc. We describe how we estimated the programme's impact on child mortality with Lives Saved Tool (LiST) modelling and data from MI's management information system (MIS). This study demonstrates that using LiST modelling and MIS data is a viable option for evaluating programmes to reduce child mortality. We used MI's programme monitoring data to estimate coverage rates and LiST modelling software to estimate programme impact on child mortality. Four scenarios estimated the effects of different rates of programme scale-up and programme coverage on estimated child mortality by measuring children's lives saved. The programme saved an estimated 806-975 children under 5 who had diarrhoea during the five-year project phase. Increasing ORS and zinc coverage rates to 19.8% and 18.3%, respectively, under public sector coverage with effective treatment would have increased the programme's impact on child mortality and could have achieved the project goal of saving 4,200 children's lives during the five-year programme. Programme monitoring data can be used with LiST modelling software to estimate coverage rates and programme impact on child mortality. This modelling approach may cost less and yield estimates sooner than directly measuring programme impact with population-based surveys. However, users must be cautious about relying on modelled estimates of impact and ensure that the programme monitoring data used are complete and precise about the programme aspects that are modelled. Otherwise, LiST may mis-estimate impact on child mortality. Further, LiST software may require modifications.
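    The core lives-saved arithmetic behind tools like LiST can be sketched in greatly simplified form (hypothetical inputs apart from the 19.8% coverage target quoted above; the real tool models causes of death, age bands, and much more):

```python
# Greatly simplified sketch of lives-saved arithmetic (hypothetical
# numbers; the real LiST models many causes, age bands, and interactions):
#   deaths averted = baseline diarrhoea deaths
#                    x treatment effectiveness
#                    x increase in effective coverage

baseline_diarrhoea_deaths = 10_000  # hypothetical annual figure
effectiveness_ors_zinc = 0.93       # assumed efficacy against diarrhoea death
coverage_before = 0.02
coverage_after = 0.198              # the 19.8% target cited in the abstract

deaths_averted = (baseline_diarrhoea_deaths
                  * effectiveness_ors_zinc
                  * (coverage_after - coverage_before))
print(f"estimated deaths averted per year: {deaths_averted:.0f}")
```

The estimate is linear in each input, which is why errors in monitoring-derived coverage rates propagate directly into the modelled impact.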

  13. Metrology automation reliability

    Science.gov (United States)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility, automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory, reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  14. Electronics reliability calculation and design

    CERN Document Server

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of measurements.

  15. InSAR techniques for reliable deformation estimation in volcanic areas and a first glance of Tandem-DEM accuracy - test site El Hierro Island

    Science.gov (United States)

    Cong, X.; Eineder, M.; Fritz, T.

    2013-12-01

    The accuracy and availability of deformation measurements using InSAR techniques are limited due to decorrelation effects, atmospheric disturbances and the SAR side-looking geometry (layover and shadowing). In this talk, we present our recent research and achievements on advanced InSAR techniques in order to retrieve reliable deformation signals from active volcanoes using high resolution TerraSAR-X (TSX) images. Another highlight of this talk is the evaluation of an experimental TanDEM-X (TDX) RawDEM with a resolution of approximately 6 m in order to compensate for the topographic phase. A volcanic test site that is currently highly active, El Hierro, has been selected to demonstrate the developed techniques: 1) PSI processing in volcanic areas using high resolution TSX images; 2) Mitigation of atmospheric delay distortions; 3) Fusion of multi-geometrical PSI clouds. In order to measure the deformation from 2011 to 2013 at El Hierro [1], two stacks of stripmap TSX Mission data have been acquired, one in ascending orbit and one in descending. Each stack has more than 25 scenes. More than 1.5 million PSs have been detected (SCR>3.0 dB). The stratified atmospheric delay for each acquisition has been integrated for the PSI reference network and, afterwards, interpolated and compensated for all PSs. A linear deformation model has been assumed for PSI processing. For the descending orbit stack, a relative deformation from -21.7 to 131.8 mm/y from Sep. 2011 to Jan. 2013 with respect to a reference point located on the northeast coast has been measured. On the one hand, the spatial variation of the deformation is in good agreement with the seismicity distribution [1]. On the other hand, the deformation magnitude agrees with in-situ GPS measurements [2]. In ascending orbit, the linear deformation rate varies from -22.8 to 90.9 mm/y. This different range of values is due to a scene acquired in Feb. 2010, which has been included in order to obtain the pre-seismic deformation.

  16. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other report, and metaperception assessments). The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  17. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    Science.gov (United States)

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.

  18. EXPLOITATION RELIABILITY OF RECUPERATORS

    OpenAIRE

    A. M. Selutin; V. A. Zharanov; A. V. Tkachenko

    2006-01-01

    The article describes investigations and practical implementation experience of designs that improve the gas-tightness of heat exchangers. The presented variants of compensators of different types make it possible to increase the operational reliability of systems that use the heat of waste flue gases from fuel furnaces.

  19. EXPLOITATION RELIABILITY OF RECUPERATORS

    Directory of Open Access Journals (Sweden)

    A. M. Selutin

    2006-01-01

    The article describes investigations and practical implementation experience of designs that improve the gas-tightness of heat exchangers. The presented variants of compensators of different types make it possible to increase the operational reliability of systems that use the heat of waste flue gases from fuel furnaces.

  20. Architecture-Based Reliability Analysis of Web Services

    Science.gov (United States)

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  1. Grid reliability

    CERN Document Server

    Saiz, P; Rocha, R; Andreeva, J

    2007-01-01

    We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO specific, therefore it would be very easy to deploy them for any other VO. Our main goal is basically to improve the reliability of the GRID. The main idea is to discover the problems that have occurred as soon as possible and inform those responsible. Since we study the jobs and transfers issued by real users, we see the same problems that users see. As a matter of fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, resubmitting a failed job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...

  2. The added value that increasing levels of diagnostic information provide in prognostic models to estimate hospital mortality for adult intensive care patients

    NARCIS (Netherlands)

    de Keizer, N. F.; Bonsel, G. J.; Goldfad, C.; Rowan, K. M.

    2000-01-01

    To investigate in a systematic, reproducible way the potential of adding increasing levels of diagnostic information to prognostic models for estimating hospital mortality. Prospective cohort study. Thirty UK intensive care units (ICUs) participating in the ICNARC Case Mix Programme. Eight thousand

  3. So You Think You Look Young? Matching Older Adults' Subjective Ages with Age Estimations Provided by Younger, Middle-Aged, and Older Adults

    Science.gov (United States)

    Kotter-Gruhn, Dana; Hess, Thomas M.

    2012-01-01

    Perceived age plays an important role in the context of age identity and social interactions. To examine how accurate individuals are in estimating how old they look and how old others are, younger, middle-aged, and older adults rated photographs of older target persons (for whom we had information about objective and subjective age) in terms of…

  4. How Reliable Is Laboratory Testing?

    Science.gov (United States)

    ... electronic reports to the healthcare provider by directly printing the report in a provider's office with a ... increase the reliability of a diagnosis and can reduce the chance of medical errors. As this diagram ...

  5. Creating a brief rating scale for the assessment of learning disabilities using reliability and true score estimates of the scale's items based on the Rasch model.

    Science.gov (United States)

    Sideridis, Georgios; Padeliadu, Susana

    2013-01-01

    The purpose of the present studies was to provide the means to create brief versions of instruments that can aid the diagnosis and classification of students with learning disabilities and comorbid disorders (e.g., attention-deficit/hyperactivity disorder). A sample of 1,108 students with and without a diagnosis of learning disabilities took part in study 1. Using information from modern theory methods (i.e., the Rasch model), a scale was created that included fewer than one third of the original battery items designed to assess reading skills. This best item synthesis was then evaluated for its predictive and criterion validity with a valid external reading battery (study 2). Using a sample of 232 students with and without learning disabilities, results indicated that the brief version of the scale was equally effective as the original scale in predicting reading achievement. Analysis of the content of the brief scale indicated that the best item synthesis involved items from cognition, motivation, strategy use, and advanced reading skills. It is suggested that multiple psychometric criteria be employed in evaluating the psychometric adequacy of scales used for the assessment and identification of learning disabilities and comorbid disorders.

  6. Reliability of the school AMPS measures.

    Science.gov (United States)

    Munkholm, Michaela; Löfgren, Britta; Fisher, Anne G

    2012-01-01

    The purpose of this study was to use a variety of methods to evaluate and cross-validate the reliability estimates of the quality of schoolwork performance measures in the School Version of the Assessment of Motor and Process Skills (School AMPS) (1). Split-half reliability was estimated based on School AMPS data from 6,194 students who had performed at least two School AMPS tasks. Two different split-half methods were used: splitting the School AMPS items vertically and splitting them horizontally. The Rasch equivalent of Cronbach's alpha for the full School AMPS evaluation and standard error of measurement (SEm) were also considered. All three methods yielded high reliability coefficient estimates (r ≥ 0.70) and overall SEm was low. From a practice perspective, the results of this study are of clinical importance as they provide evidence that occupational therapists can have confidence in the consistency of the School AMPS measures when they are used in the process of making decisions about individual students, planning interventions, and later performing follow-up evaluations to measure the outcome.
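    A split-half estimate of the kind described above can be sketched as follows (toy half-test scores, not School AMPS data), with the Spearman-Brown correction stepping the half-test correlation up to full test length:

```python
import math

# Split-half reliability with the Spearman-Brown correction
# (hypothetical half-test scores for ten students, e.g. odd vs even items).

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

half_a = [12, 15, 9, 14, 11, 16, 10, 13, 8, 15]
half_b = [11, 14, 10, 13, 12, 15, 9, 14, 9, 16]

r_halves = pearson(half_a, half_b)
# Spearman-Brown: reliability of the full-length test from the half-test r.
r_full = 2 * r_halves / (1 + r_halves)
print(f"half-test r = {r_halves:.2f}, full-test reliability = {r_full:.2f}")
```

The correction always raises the half-test correlation (for positive r), reflecting that a longer test of equivalent items is more reliable.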

  7. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    Science.gov (United States)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.

  8. Conceptions of reliability revisited and practical recommendations

    NARCIS (Netherlands)

    Sijtsma, K.; van der Ark, L.A.

    2015-01-01

    We discuss reliability definitions from the perspectives of classical test theory, factor analysis, and generalizability theory. For each method, we discuss the rationale, the estimation of reliability, and the goodness of fit of the model that defines the reliability coefficient to the data.

  9. Methodology for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.

  10. Different Reliability Assessment Approaches for Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kramer, Morten Mejlhede; Sørensen, John Dalsgaard

    2015-01-01

    There are two different approaches to how reliability can be estimated: the so-called classical reliability theory and the probabilistic reliability theory. The classical reliability theory is often used for failure rate estimations of mechanical and electrical components, whereas the probabilistic reliability theory is commonly used for structural components...

  11. Distribution Equipment Reliability Data; Tillfoerlitlighetsdata paa komponent nivaa foer eldistributionsnaet

    Energy Technology Data Exchange (ETDEWEB)

    Ying He (Vattenfall Research and Development AB, Stockholm (SE))

    2007-09-15

    In risk analysis of a power system, the risk for the system to fail power supply is calculated from the knowledge of the reliability data of individual system components. Meaningful risk analysis requires reasonable and acceptable data. The quality of the data is of fundamental importance for the analysis. However, valid data are expensive to collect, and component reliability performance statistics are not easy to obtain. This report documents the distribution equipment reliability data developed by the project 'Component Reliability Data for Risk Analysis of Distribution Systems' within the Elforsk R&D program 'Risk Analysis 06-10'. The project analyzed a large sample of distribution outages recorded by more than a hundred power utilities in Sweden during 2004-2005, and derived the equipment reliability data nationwide. Detailed summaries of these data are presented in the appendices of the report. The component reliability was also investigated at a number of power utilities including Vattenfall Eldistribution AB, Goeteborg Energi Naet AB, E.ON Elnaet Sverige AB, Fortum Distribution, and Linde Energi AB. The reliability data were derived for individual utilities. The detailed data lists and failure statistics are summarized in the appendices for each participating company. The data provided in this report are based on a large sample of field outage records and can therefore be used as generic data in system risk analysis and reliability studies. In order to provide more references and complementary data, the equipment reliability surveys conducted by IEEE were studied in the project. The most significant results obtained by the IEEE surveys are provided in the report. A summary of the reliability data surveyed by IEEE is presented in the appendix of the report. These data are suggested for use in the absence of better data. The reliability data estimates were derived for sustained failure rates

  12. Reliability analysis of an offshore structure

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Sørensen, John Dalsgaard; Rackwitz, R.

    1992-01-01

    A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as function of the stochastic...... variables are described using spline function response surfaces. A Laplace integral expansion is used to estimate the time variant reliability. Parameter studies are performed for the reliability estimates and the results of the time variant and the time invariant reliability analyses are compared. (Authors)...

  13. Reliability Analysis of an Offshore Structure

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Rackwitz, R.

    1992-01-01

    A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as function of the stochastic...... variables are described using spline function response surfaces. A Laplace integral expansion is used to estimate the time variant reliability. Parameter studies are performed for the reliability estimates and the results of the time variant and the time invariant reliability analyses are compared....

  14. Measuring Service Reliability Using Automatic Vehicle Location Data

    Directory of Open Access Journals (Sweden)

    Zhenliang Ma

    2014-01-01

    Full Text Available Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate to approximate passengers' experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely, performance disaggregation and capturing passengers' perspectives on reliability. A Gaussian mixture model based method is applied to disaggregate the performance data. Based on the fitted mixture distribution, a reliability buffer time (RBT) measure is proposed from the passengers' perspective. A set of expected reliability buffer time measures is developed for operators by combining RBTs at different spatial-temporal levels. The average and the latest trip duration measures are proposed for passengers, which can be used to choose a service mode and determine the departure time. Using empirical data from the automatic vehicle location system in Brisbane, Australia, the existence of mixture service states is verified and the advantage of the mixture distribution model in fitting travel time profiles is demonstrated. Numerical experiments validate that the proposed reliability measure is capable of quantifying service reliability consistently, while conventional measures may provide inconsistent results. Potential applications for operators and passengers are also illustrated, including reliability improvement and trip planning.
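    The performance-disaggregation idea can be illustrated with a hand-rolled two-component EM fit and a percentile-based buffer time. The travel times are synthetic, and the 95th-percentile-minus-median definition of RBT is an assumption for illustration rather than the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic travel times (minutes): a punctual state and a delayed state.
t = np.concatenate([rng.normal(20, 1.0, 700), rng.normal(28, 2.0, 300)])

# Two-component Gaussian mixture fitted by EM.
mu = np.array([t.min(), t.max()]); sd = np.array([1.0, 1.0]); w = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities of each component for each observation.
    pdf = np.exp(-0.5 * ((t[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations.
    n = r.sum(axis=0)
    w = n / len(t)
    mu = (r * t[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (t[:, None] - mu) ** 2).sum(axis=0) / n)

# Reliability buffer time: extra minutes above the median a passenger
# should budget to arrive on time 95% of the time (assumed definition).
rbt = np.percentile(t, 95) - np.percentile(t, 50)
```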

  15. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview

    Science.gov (United States)

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee

    2013-01-01

    Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One of the research objectives is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment, and in the conduct of clinical studies. Several reliability studies have been conducted in Western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine diagnoses is in the formative stage. However, reliability studies in Ayurveda are in the preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implication in practice, education, and training. An introduction to reliability estimates and different study designs and statistical analysis is given for future studies in Ayurveda. PMID:23930037

  16. Test Reliability at the Individual Level

    OpenAIRE

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coeffici...

  17. SYSTEM ORGANIZATION OF MATERIAL PROVIDING OF BUILDING

    Directory of Open Access Journals (Sweden)

    A. V. Rаdkеvich

    2014-04-01

    Full Text Available Purpose. Development of scientific and methodical bases for the design of rational management of material flows in building supply, taking into account intersystem connections with the enterprises of the building industry. Methodology. An analysis of recent years of the building industry in Ukraine reveals a number of problems that negatively influence the steady development of construction as a component of the state economic system. Research into existing methods of organizing the supply of building sites with material resources is therefore necessary. In this connection, the article justifies the use of the method of hierarchy analysis (Saati method) for finding the optimal solution to the task of assigning enterprises of the building industry to building sites. Findings. The results give the management of a building organization an opportunity to estimate and choose advantageous suppliers (enterprises of the building industry) and to rank them taking into account basic characteristics such as quality, price, reliability of deliveries, specialization, and financial status. Originality. On the basis of the Saati method, methodologies for the organization, planning and management of a reliable system for providing building sites with the necessary material resources, meeting the technological requirements of building and installation works, are improved. Practical value. The work contributes to the solution of many complex organizational problems that accompany the development of construction, by organizing a reliable system for the purchase of material resources.
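    The supplier-ranking step built on the method of hierarchy analysis can be sketched with a standard AHP computation (the pairwise comparison values below are invented for illustration; they are not the article's data): criteria are compared pairwise on Saaty's 1-9 scale, the priority vector is the principal eigenvector, and a consistency ratio checks the judgments.

```python
import numpy as np

# Pairwise comparison matrix over criteria (quality, price, delivery
# reliability) on Saaty's 1-9 scale; values are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Principal eigenvector by power iteration -> priority weights.
v = np.ones(len(A))
for _ in range(100):
    v = A @ v
    v /= v.sum()
priorities = v

# Consistency ratio (CR) using Saaty's random index for n = 3.
lam = (A @ priorities / priorities).mean()  # principal eigenvalue estimate
ci = (lam - len(A)) / (len(A) - 1)          # consistency index
cr = ci / 0.58                              # RI = 0.58 for n = 3
```

A CR below 0.10 is conventionally taken to mean the pairwise judgments are acceptably consistent, after which the priority weights can be used to score candidate suppliers.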

  18. Reliability-Based Optimization in Structural Engineering

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1994-01-01

    In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: Reliability-based optimal design of structural systems with component or systems reliability constraints, reliability......-based optimal inspection planning and reliability-based experiment planning. It is explained how these optimization problems can be solved by application of similar techniques. The reliability estimation is limited to first order reliability methods (FORM) for both component and systems reliability evaluation....... The solution strategies applying first order non-linear optimization algorithms are described in detail with special attention to sensitivity analysis and stability of the optimization process. Furthermore, several practical aspects are treated as: Development of the reliability-based optimization model...

  19. The Need to Provide for Security in Old Age in Hierarchy of Needs-An Estimation of Its Ranking within the Polish Population

    Science.gov (United States)

    Roszkiewicz, Malgorzata

    2004-01-01

    The results of studies conducted in the last 5 years in Poland formed the basis for the assumption that amongst many needs an individual or a Polish household seeks to satisfy, the need to provide for security in old age takes a prominent position. Determining the position of this need among other needs as defined in Schrab's classification…

  20. INCREASING RELIABILITY OF INDEXING SUSTAINABLE DEVELOPMENT USING MULTIVARIANCE: CALCULATION AND MAPPING

    Directory of Open Access Journals (Sweden)

    V. S. Tikunov

    2016-01-01

    Full Text Available This article provides an overview of existing techniques used to improve the reliability of classification. In particular, it focuses on multivariance as a simple and reliable method of increasing the reliability of estimates. The authors consider the use of multivariance using the example of the Index of the Ecological Situation of Russian Regions, which has been designed as part of a complex estimation of sustainable development. The index includes three indicators: damage caused by air pollution, the amount of discharged polluted wastewater, and environmental protection expenditures.

  1. In-vivo studies of new vector velocity and adaptive spectral estimators in medical ultrasound

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov

    2010-01-01

    New ultrasound techniques for blood flow estimation have been investigated in-vivo. These are vector velocity estimators (Transverse Oscillation, Synthetic Transmit Aperture, Directional Beamforming and Plane Wave Excitation) and adaptive spectral estimators (Blood spectral Power Capon and Blood...... Amplitude and Phase Estimation). It was shown that the vector velocity estimators can provide reliable angle independent estimates and new insight to the complexity of blood flow, and that adaptive spectral estimators can produce useful spectrograms faster than the conventional spectral Doppler method....

  2. Asymptotically Distribution-Free (ADF) Interval Estimation of Coefficient Alpha

    Science.gov (United States)

    Maydeu-Olivares, Alberto; Coffman, Donna L.; Hartmann, Wolfgang M.

    2007-01-01

    The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is…
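    For concreteness, sample coefficient alpha and a simple interval around it can be computed as below; the bootstrap percentile interval is shown as a generic, easier-to-implement alternative to the ADF interval discussed in the record, and the item data are synthetic:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_persons, n_items). Classical coefficient alpha."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Synthetic data: a common factor plus item-specific noise.
n, k = 300, 6
factor = rng.normal(size=(n, 1))
scores = factor + 0.8 * rng.normal(size=(n, k))

alpha = cronbach_alpha(scores)

# Nonparametric bootstrap percentile CI for alpha (resample persons).
boot = np.array([
    cronbach_alpha(scores[rng.integers(0, n, n)]) for _ in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Reporting the interval (lo, hi) alongside the point estimate guards against the misleading impression that a single sample alpha can give.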

  3. OREDA offshore and onshore reliability data volume 1 - topside equipment

    CERN Document Server

    OREDA

    2015-01-01

    This handbook presents high quality reliability data for offshore equipment collected during phase VI to IX (project period 2000 – 2009) of the OREDA project. The intention of the handbook is to provide both quantitative and qualitative information as a basis for Performance Forecasting or RAMS (Reliability, Availability, Maintainability and Safety) analyses. Volume 1 is about Topside Equipment. Compared to earlier editions, there are only minor changes in the reliability data presentation. To obtain a reasonable population for presenting reliability data for topside equipment in the 2015 edition, some data from phases VI and VII already issued in the previous 2009 handbook (5th edition) have also been included. The 2015 topside volume is divided into two parts. Part I describes the OREDA project, different data collection phases and the estimation procedures used to generate the data tables presented in Part II of the handbook. Topside data are in general not covering the whole lifetime of equipment, but ...

  4. Neurology objective structured clinical examination reliability using generalizability theory.

    Science.gov (United States)

    Blood, Angela D; Park, Yoon Soo; Lukas, Rimas V; Brorson, James R

    2015-11-03

    This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. © 2015 American Academy of Neurology.
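    The case-projection step uses the standard generalizability-theory decision-study relation Phi(n) = var_person / (var_person + var_abs / n); the variance components below are hypothetical, chosen only so that the sketch reproduces a three-case threshold like the one reported:

```python
# Hypothetical variance components from a person x case G-study
# (not the neurology OSCE's actual estimates).
var_person = 0.30      # universe (true) score variance
var_abs_single = 0.38  # absolute error variance for a single case

def phi(n_cases):
    """Projected dependability (Phi) coefficient with n_cases cases."""
    return var_person / (var_person + var_abs_single / n_cases)

# Smallest number of OSCE cases that pushes Phi past the 0.70 threshold.
needed = next(n for n in range(1, 50) if phi(n) >= 0.70)
```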

  5. Is gait variability reliable in older adults and Parkinson's disease? Towards an optimal testing protocol.

    Science.gov (United States)

    Galna, Brook; Lord, Sue; Rochester, Lynn

    2013-04-01

    Despite the widespread use of gait variability in research and clinical studies, testing protocols designed to optimise its reliability have not been established. This study evaluates the impact of testing protocol and pathology on the reliability of gait variability. To (i) estimate the reliability of gait variability during continuous and intermittent walking protocols in older adults and people with Parkinson's disease (PD), (ii) determine optimal number of steps for acceptable levels of reliability of gait variability and (iii) provide sample size estimates for use in clinical trials. Gait variability was measured twice, one week apart, in 27 older adults and 25 PD participants. Participants walked at their preferred pace during: (i) a continuous 2 min walk and (ii) 3 intermittent walks over a 12 m walkway. Gait variability was calculated as the within-person standard deviation for step velocity, length and width, and step, stance and swing duration. Reliability of gait variability ranged from poor to excellent (intra class correlations .041-.860; relative limits of agreement 34-89%). Gait variability was more reliable during continuous walks. Control and PD participants demonstrated similar reliability. Increasing the number of steps improved reliability, with most improvement seen across the first 30 steps. In this study, we identified testing protocols that improve the reliability of measuring gait variability. We recommend using a continuous walking protocol and to collect no fewer than 30 steps. Early PD does not appear to impact negatively on the reliability of gait variability. Copyright © 2012 Elsevier B.V. All rights reserved.
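    Between-session reliability of a gait-variability measure is typically summarized with an intraclass correlation. The sketch below computes ICC(2,1) from a two-way decomposition on synthetic data shaped like the study's design (27 subjects, 2 sessions); the data and the specific ICC variant are assumptions for illustration:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement.
    x: (n_subjects, k_sessions) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # sessions
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
# Synthetic within-person SD of step time (ms) for 27 subjects over
# 2 sessions: a stable person-level trait plus session-to-session noise.
trait = rng.normal(25, 5, size=(27, 1))
sessions = trait + rng.normal(0, 2, size=(27, 2))
icc = icc_2_1(sessions)
```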

  6. Estimating systemic fibrosis by combining galectin-3 and ST2 provides powerful risk stratification value for patients after acute decompensated heart failure.

    Science.gov (United States)

    Wang, Chao-Hung; Yang, Ning-I; Liu, Min-Hui; Hsu, Kuang-Hung; Kuo, Li-Tang

    2016-01-01

    Two fibrosis biomarkers, galectin-3 (Gal-3) and suppression of tumorigenicity 2 (ST2), provide prognostic value additive to natriuretic peptides and traditional risk factors in patients with heart failure (HF). However, it is to be investigated whether their combined measurement before discharge provides incremental risk stratification for patients after acute HF. A total of 344 patients with acute HF were analyzed with Gal-3, and ST2 measured. Patients were prospectively followed for 3.7 ± 1.3 years for deaths, and composite events (death/HF-related re-hospitalizations). The levels of Gal-3 and ST2 were only slightly related (r = 0.20, p risk factors. According to the cutoff at median values, patients were separated into four subgroups based on high and low Gal-3 (HG and LG, respectively) and ST2 levels (HS and LS, respectively). Kaplan-Meier survival curves showed that HGHS powerfully identified patients at risk of mortality (Log rank = 21.27, p risk stratification value.

  7. Using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to characterize copper, zinc and mercury along grizzly bear hair providing estimate of diet

    Energy Technology Data Exchange (ETDEWEB)

    Noël, Marie, E-mail: marie.noel@stantec.com [Stantec Consulting Ltd. 2042 Mills Road, Unit 11, Sidney BC V8L 4X2 (Canada); Christensen, Jennie R., E-mail: jennie.christensen@stantec.com [Stantec Consulting Ltd. 2042 Mills Road, Unit 11, Sidney BC V8L 4X2 (Canada); Spence, Jody, E-mail: jodys@uvic.ca [School of Earth and Ocean Sciences, Bob Wright Centre A405, University of Victoria, PO BOX 3065 STN CSC, Victoria, BC V8W 3V6 (Canada); Robbins, Charles T., E-mail: ctrobbins@wsu.edu [School of the Environment and School of Biological Sciences, Washington State University, Pullman, WA 99164-4236 (United States)

    2015-10-01

    We enhanced an existing technique, laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), to function as a non-lethal tool in the temporal characterization of trace element exposure in wild mammals. Mercury (Hg), copper (Cu), cadmium (Cd), lead (Pb), iron (Fe) and zinc (Zn) were analyzed along the hair of captive and wild grizzly bears (Ursus arctos horribilis). Laser parameters were optimized (consecutive 2000 μm line scans along the middle line of the hair at a speed of 50 μm/s; spot size = 30 μm) for consistent ablation of the hair. A pressed pellet of reference material DOLT-2 and sulfur were used as external and internal standards, respectively. Our newly adapted method passed the quality control tests with strong correlations between trace element concentrations obtained using LA-ICP-MS and those obtained with regular solution-ICP-MS (r{sup 2} = 0.92, 0.98, 0.63, 0.57, 0.99 and 0.90 for Hg, Fe, Cu, Zn, Cd and Pb, respectively). Cross-correlation analyses revealed good reproducibility between trace element patterns obtained from hair collected from the same bear. One exception was Cd for which external contamination was observed resulting in poor reproducibility. In order to validate the method, we used LA-ICP-MS on the hair of five captive grizzly bears fed known and varying amounts of cutthroat trout over a period of 33 days. Trace element patterns along the hair revealed strong Hg, Cu and Zn signals coinciding with fish consumption. Accordingly, significant correlations between Hg, Cu, and Zn in the hair and Hg, Cu, and Zn intake were evident and we were able to develop accumulation models for each of these elements. While the use of LA-ICP-MS for the monitoring of trace elements in wildlife is in its infancy, this study highlights the robustness and applicability of this newly adapted method. - Highlights: • LA-ICP-MS provides temporal trace metal exposure information for wild grizzly bears. • Cu and Zn temporal exposures provide

  8. Binding free energy predictions of farnesoid X receptor (FXR) agonists using a linear interaction energy (LIE) approach with reliability estimation: application to the D3R Grand Challenge 2

    Science.gov (United States)

    Rifai, Eko Aditya; van Dijk, Marc; Vermeulen, Nico P. E.; Geerke, Daan P.

    2017-09-01

    Computational protein binding affinity prediction can play an important role in drug research but performing efficient and accurate binding free energy calculations is still challenging. In the context of phase 2 of the Drug Design Data Resource (D3R) Grand Challenge 2 we used our automated eTOX ALLIES approach to apply the (iterative) linear interaction energy (LIE) method and we evaluated its performance in predicting binding affinities for farnesoid X receptor (FXR) agonists. Efficiency was obtained by our pre-calibrated LIE models and molecular dynamics (MD) simulations at the nanosecond scale, while predictive accuracy was obtained for a small subset of compounds. Using our recently introduced reliability estimation metrics, we could classify predictions with higher confidence by featuring an applicability domain (AD) analysis in combination with protein-ligand interaction profiling. The outcomes of and agreement between our AD and interaction-profile analyses to distinguish and rationalize the performance of our predictions highlighted the relevance of sufficiently exploring protein-ligand interactions during training and it demonstrated the possibility to quantitatively and efficiently evaluate if this is achieved by using simulation data only.

  9. Comparing Top-down and Bottom-up Estimates of Methane Emissions across Multiple U.S. Basins Provides Insights into National Oil and Gas Emissions and Mitigation Strategies

    Science.gov (United States)

    Hamburg, S.; Alvarez, R.; Lyon, D. R.; Zavala-Araiza, D.

    2016-12-01

    Several recent studies quantified regional methane emissions in U.S. oil and gas (O&G) basins using top-down approaches such as airborne mass balance measurements. These studies apportioned total methane emissions to O&G based on hydrocarbon ratios or subtracting bottom-up estimates of other sources. In most studies, top-down estimates of O&G methane emissions exceeded bottom-up emission inventories. An exception is the Barnett Shale Coordinated Campaign, which found agreement between aircraft mass balance estimates and a custom emission inventory. Reconciliation of Barnett Shale O&G emissions depended on two key features: 1) matching the spatial domains of top-down and bottom-up estimates, and 2) accounting for fat-tail sources in site-level emission factors. We construct spatially explicit custom emission inventories for domains with top-down O&G emission estimates in eight major U.S. oil and gas production basins using a variety of data sources including a spatially-allocated U.S. EPA Greenhouse Gas Inventory, the EPA Greenhouse Gas Reporting Program, state emission inventories, and recently published measurement studies. A comparison of top-down and our bottom-up estimates of O&G emissions constrains the gap between these approaches and elucidates regional variability in production-normalized loss rates. A comparison of component-level and site-level emission estimates of production sites in the Barnett Shale region - where comprehensive activity data and emissions estimates are available - indicates that abnormal process conditions contribute about 20% of regional O&G emissions. Combining these two analyses provides insights into the relative importance of different equipment, processes, and malfunctions to emissions in each basin. These data allow us to estimate the U.S. O&G supply chain loss rate, recommend mitigation strategies to reduce emissions from existing infrastructure, and discuss how a similar approach can be applied internationally.

  10. Reliability and construction control

    Directory of Open Access Journals (Sweden)

    Sherif S. AbdelSalam

    2016-06-01

    Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using the EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. Among the major outcomes, the lowest coefficient of variation is associated with Davisson's criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  11. Fatigue Reliability under Random Loads

    DEFF Research Database (Denmark)

    Talreja, R.

    1979-01-01

    We consider the problem of estimating the probability of survival (non-failure) and the probability of safe operation (strength greater than a limiting value) of structures subjected to random loads. These probabilities are formulated in terms of the probability distributions of the loads and the...... propagation stage. The consequences of this behaviour on the fatigue reliability are discussed....

  12. Estimating the Costs and Benefits of Providing Free Public Transit Passes to Students in Los Angeles County: Lessons Learned in Applying a Health Lens to Decision-Making

    Directory of Open Access Journals (Sweden)

    Lauren N. Gase

    2014-10-01

    Full Text Available In spite of an increased focus by public health on engaging and working with non-health sector partners to improve the health of the general as well as special populations, few studies have described and disseminated emerging lessons and promising practices that can be used to undertake this work. This article describes the process used to conduct a Health Impact Assessment of a proposal to provide free public transportation passes to students in Los Angeles County. This illustrative case example describes opportunities and challenges encountered in working with an array of cross-sector partners and highlights four important lessons learned: (1) the benefits and challenges associated with broad conceptualization of public issues; (2) the need for more comprehensive, longitudinal data systems and dynamic simulation models to inform decision-making; (3) the importance of having a comprehensive policy assessment strategy that considers health impacts as well as costs and feasibility; and (4) the need for additional efforts to delineate the interconnectivity between health and other agency priorities. As public health advances cross-sector work in the community, further development of these priorities will help advance meaningful collaboration among all partners.

  13. Medicare Provider Data - Hospice Providers

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...

  14. Exponentiated Weibull distribution approach based inflection S-shaped software reliability growth model

    Directory of Open Access Journals (Sweden)

    B.B. Sagar

    2016-09-01

    Full Text Available The aim of this paper is to estimate the number of defects in software and remove them successfully. The paper combines a Weibull distribution approach with the inflection S-shaped Software Reliability Growth Model (SRGM), using a two-parameter Weibull distribution. The Relative Prediction Error (RPE) is calculated as the validity criterion of the developed model. Experimental results on actual data from five data sets are compared with two other existing models, showing that the proposed software reliability growth model yields better estimates for defect removal. The paper thus presents a software reliability growth model that includes features of both the Weibull distribution and the inflection S-shaped SRGM to estimate the defects of a software system, and provides help to researchers and software industries in developing highly reliable software products.
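    The inflection S-shaped SRGM is commonly written with mean value function m(t) = a(1 - e^(-b t)) / (1 + beta e^(-b t)), where a is the total defect content, b the detection rate, and beta the inflection factor. The sketch below evaluates it together with the relative prediction error; the parameter values and the observed defect count are invented for illustration, not fitted values from the paper:

```python
import math

def m(t, a, b, beta):
    """Inflection S-shaped SRGM mean value function:
    expected cumulative defects detected by time t."""
    return a * (1 - math.exp(-b * t)) / (1 + beta * math.exp(-b * t))

def rpe(predicted, actual):
    """Relative prediction error of a defect-count prediction."""
    return (predicted - actual) / actual

# Illustrative parameters: a = total defects, b = detection rate,
# beta = inflection factor.
a, b, beta = 100.0, 0.15, 2.0

curve = [m(t, a, b, beta) for t in range(0, 61, 10)]
err = rpe(m(40, a, b, beta), actual=88.0)  # hypothetical observed count
```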

  15. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

    Science.gov (United States)

    Andersson, Björn; Xin, Tao

    2018-01-01

    In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

  16. Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method

    Science.gov (United States)

    Adam, Gh.; Adam, S.

    2001-04-01

    The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised up to the theoretical 100% rate of success, under error estimate sharpening, provided a number of natural validating conditions are imposed. The self-validating scheme of the local error estimates, which is easy to implement and adds little supplementary computing effort, considerably strengthens the correctness of the decisions within automatic adaptive quadrature.
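    The role of the local error estimate in automatic adaptive quadrature can be seen in a compact sketch. Adaptive Simpson is used here in place of Gauss-Kronrod (the paired coarse/refined evaluations play the same role as the Gauss and Kronrod estimates): the difference between the two approximations is the local error estimate that decides whether a panel is accepted or subdivided. This is a generic illustration, not the paper's Eratosthenes'-sieve validation scheme.

```python
import math

def simpson(f, a, b):
    """Simpson's rule on a single panel [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def adaptive(f, a, b, tol):
    """Subdivide until the local error estimate (difference between
    one-panel and two-panel Simpson) falls below the tolerance."""
    whole = simpson(f, a, b)
    mid = (a + b) / 2
    left, right = simpson(f, a, mid), simpson(f, mid, b)
    err = (left + right - whole) / 15  # Richardson-type error estimate
    if abs(err) < tol:
        return left + right + err
    return adaptive(f, a, mid, tol / 2) + adaptive(f, mid, b, tol / 2)

val = adaptive(math.sin, 0.0, math.pi, 1e-10)
```

An unreliable local error estimate would accept panels that should have been subdivided, which is exactly the failure mode the record's validating conditions are designed to rule out.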

  17. Direct volume estimation without segmentation

    Science.gov (United States)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac functions. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac functions and diagnosis of heart diseases. Conventional methods are dependent on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching on learning based methods without segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the accessional step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible to be used for volume estimation of either joint bi-ventricles (LV and RV) or individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enables diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.

  18. Statistical Primer for Athletic Trainers: The Essentials of Understanding Measures of Reliability and Minimal Important Change.

    Science.gov (United States)

    Riemann, Bryan L; Lininger, Monica R

    2018-01-01

    To describe the concepts of measurement reliability and minimal important change. All measurements have some magnitude of error. Because clinical practice involves measurement, clinicians need to understand measurement reliability. The reliability of an instrument is integral in determining whether a change in patient status is meaningful. Measurement reliability is the extent to which a test result is consistent and free of error. Three perspectives on reliability (relative reliability, systematic bias, and absolute reliability) are often reported. However, absolute reliability statistics, such as the minimal detectable difference, are most relevant to clinicians because they provide an expected error estimate. The minimal important difference is the smallest change in a treatment outcome that the patient would identify as important. Clinicians should use absolute reliability characteristics, preferably the minimal detectable difference, to determine the extent of error around a patient's measurement. The minimal detectable difference, coupled with an appropriately estimated minimal important difference, can assist the practitioner in identifying clinically meaningful changes in patients.
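The absolute reliability statistics named above have simple closed forms. The sketch below computes the standard error of measurement (SEM) and the minimal detectable change at 95% confidence from a reliability coefficient (ICC) and a between-subjects SD; the numeric values are hypothetical.

```python
import math

def sem(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimal detectable change, 95% confidence, for a change between two measurements."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

change_threshold = mdc95(sd=5.0, icc=0.90)  # hypothetical scale: SD of 5.0 units
```

With these illustrative numbers the threshold is about 4.4 units: observed changes smaller than that fall within measurement error, and a minimal important difference below this value cannot be distinguished from noise.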

  19. Software Architecture Reliability Analysis using Failure Scenarios

    NARCIS (Netherlands)

    Tekinerdogan, B.; Sözer, Hasan; Aksit, Mehmet

    2005-01-01

    We propose a Software Architecture Reliability Analysis (SARA) approach that benefits from both reliability engineering and scenario-based software architecture analysis to provide an early reliability analysis of the software architecture. SARA makes use of failure scenarios that are prioritized…

  20. Modeling Reliability Growth in Accelerated Stress Testing

    Science.gov (United States)

    2013-12-01

    and E. Elsayed, "A general accelerated life model for step stress testing," IIE Transactions , vol. 37, no. 11, pp. 1059-1069, 2005. [57] D. Nicholls...W. Nelson, "Accelerated Life Testing - step-stress models and data analyses," IEEE Transactions on Reliability, vol. 29, pp. 103-108, 1980. [19] G...34 IEEE Transactions on Reliability, Vols. R-26, no. 5, pp. 348-351, 1977. [35] D. E. Olsen, "Estimating reliability growth," IEEE Transactions on

  1. Scale Reliability Evaluation with Heterogeneous Populations

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…

  2. Lifetime Reliability Assessment of Concrete Slab Bridges

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    A procedure for lifetime assessment of the reliability of short concrete slab bridges is presented in the paper. Corrosion of the reinforcement is the deterioration mechanism used for estimating the reliability profiles for such bridges. The importance of using sensitivity measures is stressed...

  3. Study on segmented distribution for reliability evaluation

    Directory of Open Access Journals (Sweden)

    Huaiyuan Li

    2017-02-01

    Full Text Available In practice, the failure rate of most equipment exhibits different tendencies at different stages, and its failure rate curve may even follow a multimodal trace over the life cycle. As a result, traditionally evaluating the reliability of equipment with a single model may lead to severe errors. However, if the lifetime is divided into several intervals according to the characteristics of the failure rate, piecewise fitting can approximate the failure rate of equipment more accurately. Therefore, in this paper, the failure rate is regarded as a piecewise function, and two kinds of segmented distribution are put forward to evaluate reliability. In order to estimate the parameters of the segmented reliability function, Bayesian estimation and maximum likelihood estimation (MLE) of the segmented distribution are discussed. Since traditional information criteria are not suitable for the segmented distribution, an improved information criterion is proposed to test and evaluate the segmented reliability model. After extensive testing and verification, the segmented reliability model and its estimation methods presented in this paper prove more efficient and accurate than the traditional non-segmented single model, especially when the change of the failure rate is time-phased or multimodal. The significant performance of the segmented reliability model in evaluating the reliability of proximity sensors of the leading-edge flap in civil aircraft indicates that the segmented distribution and its estimation method could be useful and accurate.
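For the piecewise-constant special case, the MLE of the failure rate on each interval has a closed form: observed failures divided by total time at risk in that interval. The sketch below assumes complete (uncensored) failure-time data; the breakpoints and times are hypothetical, and the record's segmented distributions and improved information criterion are not implemented here.

```python
def piecewise_rates(failure_times, breakpoints):
    """MLE of a piecewise-constant failure rate over intervals [b0,b1), [b1,b2), ..."""
    rates = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        failures = sum(1 for t in failure_times if lo <= t < hi)
        # each unit is at risk in this interval until it fails
        exposure = sum(min(t, hi) - lo for t in failure_times if t > lo)
        rates.append(failures / exposure if exposure > 0 else 0.0)
    return rates

# Toy data: a burst of early failures, then a slower random-failure phase.
rates = piecewise_rates([0.5, 1.5, 2.5, 8.0, 9.0, 10.0], [0.0, 2.0, 12.0])
```

Here the early interval shows a higher estimated rate than the later one, the kind of shape a single-parameter model would smooth over.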

  4. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20 – 25 years of wind turbines useful life, Operation & Maintenance costs are typically estimated to be a quarter...... to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and result in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when...... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...

  5. Performance and Risk Assessment of Soil-Structure Interaction Systems Based on Finite Element Reliability Methods

    Directory of Open Access Journals (Sweden)

    Quan Gu

    2014-01-01

    Full Text Available In the context of performance-based earthquake engineering, reliability methods have been of significant importance in the performance and risk assessment of structures and soil-structure interaction (SSI) systems. The finite element (FE) reliability method combines FE analysis with state-of-the-art methods in reliability analysis and has been employed increasingly to estimate the probability of occurrence of failure events corresponding to various hazard levels (e.g., earthquakes of various intensities). In this paper, crucial components for FE reliability analysis are reviewed and summarized. Furthermore, recent advances in both time-invariant and time-variant reliability analysis methods for realistic nonlinear SSI systems are presented and applied to a two-dimensional two-story building on layered soil. Various time-invariant reliability analysis methods are applied, including the first-order reliability method (FORM), the importance sampling method, and the orthogonal plane sampling (OPS) method. For time-variant reliability analysis, an upper bound of the failure probability is obtained from numerical integration of the mean outcrossing rate (MOCR). The MOCR is computed by using FORM analysis and OPS analysis. Results by different FE reliability methods are compared in terms of accuracy and computational cost. This paper provides valuable insights for reliability-based probabilistic performance and risk assessment of SSI systems.
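For a linear limit state with independent normal variables, FORM reduces to a closed form, which makes a compact sketch possible. Below, the limit state is g = R - S with capacity R and demand S; the means and standard deviations are hypothetical, not from the paper.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(mu_r, sig_r, mu_s, sig_s):
    """Reliability index and failure probability for g = R - S, R and S independent normals."""
    beta = (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)
    return beta, norm_cdf(-beta)

beta, pf = form_linear(mu_r=10.0, sig_r=1.5, mu_s=6.0, sig_s=1.0)
```

For nonlinear limit states and non-normal variables, FORM instead searches iteratively for the design point in standard normal space; sampling methods such as importance sampling or OPS then refine the estimate.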

  6. Transit Reliability Information Program : Reliability Verification Demonstration Plan for Rapid Rail Vehicles

    Science.gov (United States)

    1981-08-01

    The Transit Reliability Information Program (TRIP) is a government-initiated program to assist the transit industry in satisfying its need for transit reliability information. TRIP provides this assistance through the operation of a national Data Ban...

  7. Reliability analysis in intelligent machines

    Science.gov (United States)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed-torque control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  8. Nonspecialist Raters Can Provide Reliable Assessments of Procedural Skills

    DEFF Research Database (Denmark)

    Mahmood, Oria; Dagnæs, Julia; Bube, Sarah

    2018-01-01

    skills which is difficult owing to the limited availability of faculty time. The aim of this study was to explore the validity of assessments of video recorded procedures performed by nonspecialist raters. METHODS: This study was a blinded observational trial. Twenty-three novices (senior medical...

  9. Assessment of Advanced Life Support competence when combining different test methods--reliability and validity

    DEFF Research Database (Denmark)

    Ringsted, C; Lippert, F; Hesselfeldt, R

    2007-01-01

    Robust assessment of Advanced Life Support (ALS) competence is paramount to the credibility of ALS-provider certification and for estimating the learning outcome and retention of ALS competence following the courses. The European Resuscitation Council (ERC) provides two sets of MCQs and four...... Cardiac Arrest Simulation Test (CASTest) scenarios for the assessments according to guidelines 2005. AIMS: To analyse the reliability and validity of the individual sub-tests provided by ERC and to find a combination of MCQ and CASTest that provides a reliable and valid single effect measure of ALS...

  10. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  11. Improving Software Reliability Forecasting

    NARCIS (Netherlands)

    Burtsy, Bernard; Albeanu, Grigore; Boros, Dragos N.; Popentiu, Florin; Nicola, V.F.

    1996-01-01

    This work investigates some methods for software reliability forecasting. A supermodel is presented as a suitable tool for predicting reliability during software project development. Time series forecasting of cumulative interfailure times is also proposed and illustrated.

  12. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  13. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated......, and as an example the reliability profile and a sensitivity analyses for a corroded reinforced concrete bridge is shown....

  14. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...... (indicators) that can be used to assess the reliability of innovations and new technologies....

  15. Power system reliability

    Energy Technology Data Exchange (ETDEWEB)

    Allan, R.; Billinton, Roy (Manchester Univ. (United Kingdom). Inst. of Science and Technology Saskatchewan Univ., Saskatoon, SK (Canada))

    1994-01-01

    The function of an electric power system is to satisfy the system load as economically as possible and with a reasonable assurance of continuity or reliability. The application of quantitative reliability techniques in planning and operation has increased considerably in the past few years. Reliability evaluation is now becoming an integral part of the economic comparison of alternatives (6 figures, 17 references) (Author)
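A standard quantitative index of generation reliability is the loss-of-load probability (LOLP), which can be sketched by enumerating unit outage states. The unit capacities, forced-outage rates, and load below are hypothetical.

```python
from itertools import product

# (capacity in MW, forced-outage rate) for each generating unit -- toy values
units = [(50, 0.05), (50, 0.05), (100, 0.08)]
load = 120  # MW

lolp = 0.0
for state in product([0, 1], repeat=len(units)):  # 0 = unit down, 1 = unit up
    capacity = sum(cap for (cap, _), up in zip(units, state) if up)
    prob = 1.0
    for (_, q), up in zip(units, state):
        prob *= (1.0 - q) if up else q
    if capacity < load:
        lolp += prob  # probability mass of states that cannot serve the load
```

Enumeration is exponential in the number of units; practical studies build a capacity-outage probability table instead, but the index computed is the same.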

  16. Mass and Reliability System (MaRS)

    Science.gov (United States)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight Program(s). For space missions, payload is a critical concept; balancing what hardware can be replaced at the component level versus by Orbital Replacement Units (ORU) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System or MaRS. The U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a space flight mission. MaRS uses a combination of systems: the International Space Station PART for failure data, the Vehicle Master Database (VMDB) for ORUs & components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic Application. Once populated, the Excel spreadsheet comprises information on ISS components including…

  17. Energy based reliable multicast routing protocol for packet forwarding in MANET

    Directory of Open Access Journals (Sweden)

    S. Gopinath

    2015-06-01

    Full Text Available Mobile Ad hoc Networks consist of mobile nodes without any supporting infrastructure. Node mobility causes network partitions, which lead to heavy overhead and a lower packet forwarding ratio. In this research work, a Residual Energy based Reliable Multicast Routing Protocol (RERMR) is proposed to attain longer network lifetime and increased packet delivery and forwarding rates. A multicast backbone is constructed to achieve more stability based on node familiarity and a trustable loop. A reliable path criterion is estimated to choose the best reliable path among all available paths. Data packets are forwarded once the reliable path is chosen. We have also demonstrated that the residual energy of paths helps to provide maximum network lifetime. Based on the simulation results, the proposed work achieves better performance than previous protocols in terms of packet reliability rate, network stability rate, end-to-end delay, end-to-end transmission and communication overhead.
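A reliable-path criterion of the general kind described can be sketched as a path score that combines link reliability with residual node energy; the scoring rule, link values, and energies below are entirely hypothetical and are not RERMR's actual criterion.

```python
def path_score(link_reliabilities, node_energies):
    """Hypothetical criterion: product of link reliabilities scaled by the
    weakest node's residual energy along the path."""
    reliability = 1.0
    for r in link_reliabilities:
        reliability *= r
    return reliability * min(node_energies)

# Two candidate multicast paths with (link reliabilities, node residual energies).
paths = {
    "A-B-D": ([0.90, 0.95], [0.8, 0.7, 0.9]),
    "A-C-D": ([0.99, 0.80], [0.8, 0.5, 0.9]),
}
best = max(paths, key=lambda p: path_score(*paths[p]))
```

Favoring paths whose weakest node still has energy is one way such a criterion extends network lifetime: it avoids draining bottleneck nodes.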

  18. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability....... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability based optimization problem sequentially using quasi-analytical derivatives. Finally...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the systems reliability satisfies a given requirement...

  19. Fatigue Reliability Analysis of a Mono-Tower Platform

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    1991-01-01

    In this paper, a fatigue reliability analysis of a Mono-tower platform is presented. The failure mode, fatigue failure in the butt welds, is investigated with two different models: one with the fatigue strength expressed through SN relations, the other with the fatigue strength expressed...... through linear-elastic fracture mechanics (LEFM). In determining the cumulative fatigue damage, Palmgren-Miner's rule is applied. Element reliability, as well as systems reliability, is estimated using first-order reliability methods (FORM). The sensitivity of the systems reliability to various parameters...... is investigated. The systems reliability index, estimated by using the fatigue elements with the fatigue strength expressed through SN relations, is found to be smaller than the systems reliability index estimated by using LEFM. It is shown that the systems reliability index is very sensitive to variations...
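The Palmgren-Miner rule mentioned above sums damage fractions over stress-range blocks, with cycles-to-failure taken from an SN relation N = C / S^m. The SN constants and stress spectrum below are hypothetical, not the platform's.

```python
def miner_damage(spectrum, C=1e12, m=3.0):
    """Cumulative damage D = sum(n_i / N_i) with SN curve N(S) = C / S**m.

    spectrum: list of (stress_range, applied_cycles); failure is predicted at D >= 1.
    """
    return sum(n / (C / s**m) for s, n in spectrum)

damage = miner_damage([(50.0, 2e6), (80.0, 5e5), (120.0, 1e5)])
```

In a reliability setting, C, m, and the stress ranges become random variables, and FORM gives the probability that the damage sum exceeds 1 within the design life.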

  20. System and Software Reliability (C103)

    Science.gov (United States)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g. OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool, with better models, would greatly add value in assessing GSFC projects.

  1. Differential reliability : probabilistic engineering applied to wood members in bending-tension

    Science.gov (United States)

    Stanley K. Suddarth; Frank E. Woeste; William L. Galligan

    1978-01-01

    Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...

  2. Transferring model uncertainty estimates from gauged to ungauged catchments

    Science.gov (United States)

    Bourgin, F.; Andréassian, V.; Perrin, C.; Oudin, L.

    2014-07-01

    Predicting streamflow hydrographs in ungauged catchments is a challenging issue, and accompanying the estimates with realistic uncertainty bounds is an even more complex task. In this paper, we present a method to transfer model uncertainty estimates from gauged to ungauged catchments and we test it over a set of 907 catchments located in France. We evaluate the quality of the uncertainty estimates against three expected qualities: reliability, sharpness, and overall skill. Our results show that the method holds promise, providing in most cases reliable and sharp uncertainty bounds at ungauged locations.
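The first two qualities used in the evaluation can be sketched directly: reliability as the coverage ratio of the uncertainty bounds, sharpness as their mean width. The observations and bounds below are hypothetical.

```python
def coverage_and_sharpness(obs, lower, upper):
    """Reliability (fraction of observations inside the bounds) and
    sharpness (mean bound width) of a set of prediction intervals."""
    hits = sum(1 for o, l, u in zip(obs, lower, upper) if l <= o <= u)
    coverage = hits / len(obs)
    sharpness = sum(u - l for l, u in zip(lower, upper)) / len(obs)
    return coverage, sharpness

obs = [1.0, 2.0, 3.0, 4.0]
lo  = [0.5, 1.8, 2.0, 4.2]
hi  = [1.5, 2.5, 3.5, 5.0]
cov, sharp = coverage_and_sharpness(obs, lo, hi)
```

Overall skill scores, such as the interval score, combine the two qualities by penalizing both wide bounds and missed observations.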

  3. Analysis of the Reliability of the "Alternator- Alternator Belt" System

    Directory of Open Access Journals (Sweden)

    Ivan Mavrin

    2012-10-01

    Full Text Available Before starting and also during the exploitation of va1ioussystems, it is vety imp011ant to know how the system and itsparts will behave during operation regarding breakdowns, i.e.failures. It is possible to predict the service behaviour of a systemby determining the functions of reliability, as well as frequencyand intensity of failures.The paper considers the theoretical basics of the functionsof reliability, frequency and intensity of failures for the twomain approaches. One includes 6 equal intetvals and the other13 unequal intetvals for the concrete case taken from practice.The reliability of the "alternator- alternator belt" system installedin the buses, has been analysed, according to the empiricaldata on failures.The empitical data on failures provide empirical functionsof reliability and frequency and intensity of failures, that arepresented in tables and graphically. The first analysis perfO!med by dividing the mean time between failures into 6 equaltime intervals has given the forms of empirical functions of fa ilurefrequency and intensity that approximately cotTespond totypical functions. By dividing the failure phase into 13 unequalintetvals with two failures in each interval, these functions indicateexplicit transitions from early failure inte1val into the randomfailure interval, i.e. into the ageing intetval. Functions thusobtained are more accurate and represent a better solution forthe given case.In order to estimate reliability of these systems with greateraccuracy, a greater number of failures needs to be analysed.

  4. Myers-Briggs Type Indicator Score Reliability across Studies: A Meta-Analytic Reliability.

    Science.gov (United States)

    Capraro, Robert M.; Capraro, Mary Margaret

    2002-01-01

    Submitted the Myers-Briggs Type Indicator (MBTI) to a descriptive reliability generalization analysis to characterize the variability of measurement error in MBTI scores across administrations. In general the MBTI and its scales yielded scores with strong internal consistency and test-retest reliability estimates. (SLD)
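The internal-consistency estimates pooled in such a reliability generalization study are typically Cronbach's alpha values, which can be computed from item-level scores as below; the tiny data set is hypothetical.

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists (same respondents)."""
    k, n = len(items), len(items[0])
    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items scored by four respondents.
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]])
```

A reliability generalization analysis then treats such coefficients from many administrations as data, characterizing how measurement error varies across samples.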

  5. Reliability in the design phase

    Energy Technology Data Exchange (ETDEWEB)

    Siahpush, A.S.; Hills, S.W.; Pham, H. (EG and G Idaho, Inc., Idaho Falls, ID (United States)); Majumdar, D. (USDOE Idaho Field Office, Idaho Falls, ID (United States))

    1991-12-01

    A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

  7. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823A and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws must be reliably detected using these NDE methods. A reliably detectable crack size is required for the safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
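The binomial logic behind the 29-flaw set can be sketched in a few lines. With zero misses allowed, the demonstration is passed only if all 29 flaws are detected, so the probability of passing is POD^29; the code below just restates that arithmetic.

```python
def pass_probability(true_pod, n=29, misses_allowed=0):
    """Probability of passing a binomial POD demonstration (here: zero misses)."""
    assert misses_allowed == 0  # the classic 29-of-29 point-estimate design
    return true_pod ** n

# A marginal system (POD exactly 0.90) passes less than 5% of the time,
# which is why 29 of 29 detections demonstrate 90% POD at 95% confidence.
consumer_risk = pass_probability(0.90)
```

Optimizing the demonstration then trades this consumer's risk against the producer's risk, 1 - pass_probability(p) for a genuinely capable system (say a hypothetical p = 0.98), while keeping the flaw sizes small.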

  8. Reliability of surface facial electromyography.

    Science.gov (United States)

    Hess, Ursula; Arslan, Ruben; Mauersberger, Heidi; Blaison, Christophe; Dufner, Michael; Denissen, Jaap J A; Ziegler, Matthias

    2017-01-01

    Data from two studies were used to estimate the reliability of facial EMG when used to index facial mimicry (Study 1) or affective reactions to pictorial stimuli (Study 2). Results for individual muscle sites varied between muscles and depending on data treatment. For difference scores, acceptable internal consistencies were found only for corrugator supercilii, and test-retest reliabilities were low. For contrast measures describing patterns of reactions to stimuli, such as high zygomaticus major combined with low corrugator supercilii, acceptable internal consistencies were found for facial reactions to smiling faces and positive affective reactions to affiliative images (Study 2). Facial reactions to negative emotions (Study 1) and facial reactions to power and somewhat less to achievement imagery (Study 2) showed unsatisfactory internal consistencies. For contrast measures, good temporal stability over 24 months (Study 1) and 15 months (Study 2), respectively, was obtained. In Study 1, the effect of method factors such as mode of presentation was more reliable than the emotion effect. Overall, people's facial reactions to affective stimuli seem to be influenced by a variety of factors other than the emotion-eliciting element per se, which resulted in biased internal consistency estimates. However, the influence of these factors in turn seemed to be stable over time. © 2016 Society for Psychophysiological Research.

  9. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  10. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...... of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...

  11. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Grabaskas, D.; Brunett, A.; Passerini, S.; Grelle, A.; Bucknor, M.

    2017-06-26

    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  12. Reliable computation from contextual correlations

    Science.gov (United States)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.

  13. Measurement System Reliability Assessment

    Directory of Open Access Journals (Sweden)

    Kłos Ryszard

    2015-06-01

Full Text Available Decision-making in problem situations is based on up-to-date and reliable information. A great deal of information is subject to rapid change; hence it may be outdated or manipulated and enforce erroneous decisions. It is crucial to be able to assess the obtained information. In order to ensure its reliability, it is best to obtain it through one's own measurement process. In such a case, conducting an assessment of measurement system reliability seems crucial. The article describes a general approach to assessing the reliability of measurement systems.

  14. Reliability assurance for regulation of advanced reactors

    Energy Technology Data Exchange (ETDEWEB)

    Fullwood, R.; Lofaro, R.; Samanta, P.

    1991-12-31

Advanced nuclear power plants must achieve higher levels of safety than the first generation of plants. Demonstrating that this is indeed the case poses new challenges to reliability and risk assessment methods in the analysis of designs employing passive and semi-passive protection. Reliability assurance of advanced reactor systems is important for determining both the safety of the design and plant operability. Safety is the primary concern, but operability is considered indicative of good and safe operation. This paper discusses several concerns for reliability assurance of advanced designs, encompassing reliability determination, the level of detail required in advanced reactor submittals, data for reliability assurance, systems interactions and common-cause effects, passive component reliability, PRA-based configuration control systems, and inspection, training, maintenance and test requirements. Suggested approaches are provided for addressing each of these topics.

  15. The Reliability of Difference Scores in Populations and Samples

    Science.gov (United States)

    Zimmerman, Donald W.

    2009-01-01

    This study was an investigation of the relation between the reliability of difference scores, considered as a parameter characterizing a population of examinees, and the reliability estimates obtained from random samples from the population. The parameters in familiar equations for the reliability of difference scores were redefined in such a way…

  16. Evaluation of reliability worth in an electric power system

    Energy Technology Data Exchange (ETDEWEB)

    Billinton, Roy [Saskatchewan Univ., Saskatoon, SK (Canada). Power System Research Group

    1994-12-31

This paper illustrates the application of basic power system reliability evaluation techniques to the quantification of reliability worth. The approach presented links customer interruption cost estimates with predictable indices of power system reliability. The technique is illustrated by application in the areas of generation, composite generation and transmission, and distribution system assessment using a hypothetical test system. (author)
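The linkage described above, customer interruption cost estimates combined with predicted reliability indices, can be sketched as a simple expected-cost index. The function and feeder data below are illustrative assumptions, not the paper's test system, and real ECOST/IEAR formulations also weight outage duration:

```python
def expected_interruption_cost(load_points):
    """Sum, over load points, of failure rate x interrupted load x unit
    interruption cost -- a simplified reliability-worth index."""
    return sum(rate * load_kw * cost_per_kw
               for rate, load_kw, cost_per_kw in load_points)

# Two hypothetical load points: (failures/year, average load in kW, $/kW interrupted)
ecost = expected_interruption_cost([(0.5, 100.0, 4.0), (1.2, 50.0, 6.0)])
print(round(ecost, 2))  # 560.0
```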

  17. Reliability analysis of the combined district heating systems

    Science.gov (United States)

    Sharapov, V. I.; Orlov, M. E.; Kunin, M. V.

    2015-12-01

    Technologies that improve the reliability and efficiency of the combined district heating systems in urban areas are considered. The calculation method of reliability of the CHP combined district heating systems is proposed. The comparative estimation of the reliability of traditional and combined district heating systems is performed.

  18. MOV reliability evaluation and periodic verification scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Bunte, B.D.

    1996-12-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long term reliability of gate or globe motor operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOVs design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new information of a generic nature which impacts safety related MOVs.
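The margin-versus-uncertainty comparison described above can be illustrated as a tail probability, under the simplifying assumption that the design margin is normally distributed (the paper's exact statistical treatment is not reproduced here; the function name and numbers are hypothetical):

```python
import math

def mov_reliability(nominal_margin, margin_sigma):
    """P(actual margin > 0) for margin ~ N(nominal_margin, margin_sigma^2),
    computed with the standard normal CDF via the error function."""
    z = nominal_margin / margin_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A valve whose nominal thrust margin sits two standard deviations above zero
print(round(mov_reliability(20.0, 10.0), 4))  # 0.9772
```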

  19. A Measure for the Reliability of a Rating Scale Based on Longitudinal Clinical Trial Data

    Science.gov (United States)

    Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert

    2007-01-01

    A new measure for reliability of a rating scale is introduced, based on the classical definition of reliability, as the ratio of the true score variance and the total variance. Clinical trial data can be employed to estimate the reliability of the scale in use, whenever repeated measurements are taken. The reliability is estimated from the…
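The classical definition the measure starts from, true-score variance over total variance, is a one-liner (variable names are assumptions for illustration):

```python
def scale_reliability(true_score_var, error_var):
    """Classical reliability: ratio of true-score variance to total variance."""
    return true_score_var / (true_score_var + error_var)

# A scale whose error variance is a quarter of its true-score variance
print(scale_reliability(8.0, 2.0))  # 0.8
```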

  20. Applying reliability models to the maintenance of Space Shuttle software

    Science.gov (United States)

    Schneidewind, Norman F.

    1992-01-01

    Software reliability models provide the software manager with a powerful tool for predicting, controlling, and assessing the reliability of software during maintenance. We show how a reliability model can be effectively employed for reliability prediction and the development of maintenance strategies using the Space Shuttle Primary Avionics Software Subsystem as an example.
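As a hedged illustration of the kind of model involved, here is the mean-value function of a generic exponential (Goel-Okumoto type) software reliability growth model; Schneidewind's model for the Shuttle differs in its details, so treat this purely as a sketch of the approach:

```python
import math

def expected_failures(a, b, t):
    """Mean-value function m(t) = a * (1 - exp(-b * t)): expected cumulative
    failures after t units of test time, with a total faults and b detection rate."""
    return a * (1.0 - math.exp(-b * t))

def mission_reliability(a, b, t, mission_time):
    """Probability of zero failures during the next mission_time, given testing
    up to time t (nonhomogeneous Poisson process assumption)."""
    return math.exp(-(expected_failures(a, b, t + mission_time)
                      - expected_failures(a, b, t)))

print(round(expected_failures(100.0, 0.05, 30.0), 2))  # 77.69
```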

  1. Reliability Considerations for the Operation of Large Accelerator User Facilities

    CERN Document Server

    Willeke, F.J.

    2016-01-01

    The lecture provides an overview of considerations relevant for achieving highly reliable operation of accelerator based user facilities. The article starts with an overview of statistical reliability formalism which is followed by high reliability design considerations with examples. The article closes with operational aspects of high reliability such as preventive maintenance and spares inventory.
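The statistical reliability formalism such an overview typically starts from can be condensed into a few identities (a minimal sketch with illustrative numbers, not material from the lecture itself):

```python
def series_reliability(component_rels):
    """All components must work: product of component reliabilities."""
    product = 1.0
    for r in component_rels:
        product *= r
    return product

def parallel_reliability(component_rels):
    """Any one component suffices: one minus the product of failure probabilities."""
    prob_all_fail = 1.0
    for r in component_rels:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(round(series_reliability([0.99, 0.99, 0.99]), 4))  # 0.9703
print(round(parallel_reliability([0.9, 0.9]), 2))        # 0.99
print(round(availability(1000.0, 8.0), 4))               # 0.9921
```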

  2. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...

  3. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  4. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  5. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  6. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has remained challenging despite the huge effort spent developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of software would provide a better ground for reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of the I and C systems in NPPs introduces new challenges and uncertainty into the reliability analysis of digital systems and components, because software failure mechanisms are still unclear.

  7. What the Customer Wants. Maintenance-Free and Failure-Free Operating Periods to Improve Overall System Availability and Reliability

    Science.gov (United States)

    2000-09-01

Defense Technical Information Center Compilation Part Notice ADP010429. What the Customer Wants: Maintenance-Free and Failure-Free Operating Periods to Improve Overall System Availability and Reliability (Squadron Leader P. Mitchell). Maintenance-free and failure-free operating periods form part of a proposal from an equipment supplier, whose vendors may then typically estimate product reliability.

  8. RELIABILITY ANALYSIS OF POWER DISTRIBUTION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Popescu V.S.

    2012-04-01

Full Text Available Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development, requiring special attention. Operation of distribution systems is accompanied by a number of random factors that produce a large number of unplanned interruptions. Research has shown that the predominant factors with a significant influence on the reliability of distribution systems are: weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of this random behavior and presents estimations of the reliability of predominantly rural electrical distribution systems.
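The abstract reports causes of interruptions rather than reliability indices; as a generic illustration of how distribution system reliability is usually quantified, here is the standard SAIFI index (the data are hypothetical, not from the article):

```python
def saifi(customer_interruptions, customers_served):
    """System Average Interruption Frequency Index:
    total customer interruptions / total customers served."""
    return sum(customer_interruptions) / customers_served

# Interruption counts attributed to three outage causes on a hypothetical feeder
print(saifi([1200, 300, 450], 10000))  # 0.195
```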

  9. Reliability of Test Scores in Nonparametric Item Response Theory.

    Science.gov (United States)

    Sijtsma, Klaas; Molenaar, Ivo W.

    1987-01-01

    Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four "classical" lower bounds to reliability. (Author/JAZ)

  10. Test–Retest and Between-Site Reliability in a Multicenter fMRI Study

    Science.gov (United States)

    Friedman, Lee; Stern, Hal; Brown, Gregory G.; Mathalon, Daniel H.; Turner, Jessica; Glover, Gary H.; Gollub, Randy L.; Lauriello, John; Lim, Kelvin O.; Cannon, Tyrone; Greve, Douglas N.; Bockholt, Henry Jeremy; Belger, Aysenil; Mueller, Bryon; Doty, Michael J.; He, Jianchun; Wells, William; Smyth, Padhraic; Pieper, Steve; Kim, Seyoung; Kubicki, Marek; Vangel, Mark; Potkin, Steven G.

    2013-01-01

    In the present report, estimates of test–retest and between-site reliability of fMRI assessments were produced in the context of a multicenter fMRI reliability study (FBIRN Phase 1, www.nbirn.net). Five subjects were scanned on 10 MRI scanners on two occasions. The fMRI task was a simple block design sensorimotor task. The impulse response functions to the stimulation block were derived using an FIR-deconvolution analysis with FMRISTAT. Six functionally-derived ROIs covering the visual, auditory and motor cortices, created from a prior analysis, were used. Two dependent variables were compared: percent signal change and contrast-to-noise-ratio. Reliability was assessed with intraclass correlation coefficients derived from a variance components analysis. Test–retest reliability was high, but initially, between-site reliability was low, indicating a strong contribution from site and site-by-subject variance. However, a number of factors that can markedly improve between-site reliability were uncovered, including increasing the size of the ROIs, adjusting for smoothness differences, and inclusion of additional runs. By employing multiple steps, between-site reliability for 3T scanners was increased by 123%. Dropping one site at a time and assessing reliability can be a useful method of assessing the sensitivity of the results to particular sites. These findings should provide guidance to others on the best practices for future multicenter studies. PMID:17636563
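The intraclass correlation coefficients in the study come from a variance-components analysis with site and site-by-subject terms; a simplified one-way random-effects version conveys the idea (hypothetical data, not the FBIRN model):

```python
def icc_oneway(measurements):
    """One-way random-effects ICC(1).  measurements: one list per subject,
    each holding k repeated values (e.g., sessions or sites)."""
    n = len(measurements)
    k = len(measurements[0])
    grand_mean = sum(sum(row) for row in measurements) / (n * k)
    subject_means = [sum(row) / k for row in measurements]
    ms_between = k * sum((m - grand_mean) ** 2 for m in subject_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(measurements, subject_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Three subjects, two sessions each; within-subject spread lowers the ICC
print(icc_oneway([[1, 2], [2, 3], [3, 4]]))  # 0.6
```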

  11. Test-retest and between-site reliability in a multicenter fMRI study.

    Science.gov (United States)

    Friedman, Lee; Stern, Hal; Brown, Gregory G; Mathalon, Daniel H; Turner, Jessica; Glover, Gary H; Gollub, Randy L; Lauriello, John; Lim, Kelvin O; Cannon, Tyrone; Greve, Douglas N; Bockholt, Henry Jeremy; Belger, Aysenil; Mueller, Bryon; Doty, Michael J; He, Jianchun; Wells, William; Smyth, Padhraic; Pieper, Steve; Kim, Seyoung; Kubicki, Marek; Vangel, Mark; Potkin, Steven G

    2008-08-01

In the present report, estimates of test-retest and between-site reliability of fMRI assessments were produced in the context of a multicenter fMRI reliability study (FBIRN Phase 1, www.nbirn.net). Five subjects were scanned on 10 MRI scanners on two occasions. The fMRI task was a simple block design sensorimotor task. The impulse response functions to the stimulation block were derived using an FIR-deconvolution analysis with FMRISTAT. Six functionally-derived ROIs covering the visual, auditory and motor cortices, created from a prior analysis, were used. Two dependent variables were compared: percent signal change and contrast-to-noise-ratio. Reliability was assessed with intraclass correlation coefficients derived from a variance components analysis. Test-retest reliability was high, but initially, between-site reliability was low, indicating a strong contribution from site and site-by-subject variance. However, a number of factors that can markedly improve between-site reliability were uncovered, including increasing the size of the ROIs, adjusting for smoothness differences, and inclusion of additional runs. By employing multiple steps, between-site reliability for 3T scanners was increased by 123%. Dropping one site at a time and assessing reliability can be a useful method of assessing the sensitivity of the results to particular sites. These findings should provide guidance to others on the best practices for future multicenter studies.

  12. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

Muhammad Zahid Rashid

    2011-04-01

Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
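A minimal Monte Carlo comparison in the spirit of the paper, restricted to the moment and maximum likelihood estimators (the sample size, parameter values, and seed are arbitrary assumptions, and the paper's full set of seven methods is not reproduced):

```python
import random

def mle_estimates(xs):
    """MLE for the two-parameter exponential: location = sample minimum,
    scale = sample mean minus the minimum."""
    loc = min(xs)
    return loc, sum(xs) / len(xs) - loc

def moment_estimates(xs):
    """Moment estimators: scale from the standard deviation (since the
    exponential's sd equals its scale), location from mean minus scale."""
    n = len(xs)
    mean = sum(xs) / n
    scale = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return mean - scale, scale

random.seed(1)
true_loc, true_scale, n, reps = 2.0, 1.5, 50, 2000
sq_err = {"MLE": 0.0, "ME": 0.0}
for _ in range(reps):
    xs = [true_loc + random.expovariate(1.0 / true_scale) for _ in range(n)]
    for name, estimator in (("MLE", mle_estimates), ("ME", moment_estimates)):
        loc_hat, scale_hat = estimator(xs)
        sq_err[name] += (loc_hat - true_loc) ** 2 + (scale_hat - true_scale) ** 2
mse = {name: total / reps for name, total in sq_err.items()}
print(mse["MLE"] < mse["ME"])  # True: MLE is more efficient in this setting
```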

  13. Reliability and Availability of Cloud Computing

    CERN Document Server

    Bauer, Eric

    2012-01-01

    A holistic approach to service reliability and availability of cloud computing Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le

  14. Reliability of large and complex systems

    CERN Document Server

    Kolowrocki, Krzysztof

    2014-01-01

    Reliability of Large and Complex Systems, previously titled Reliability of Large Systems, is an innovative guide to the current state and reliability of large and complex systems. In addition to revised and updated content on the complexity and safety of large and complex mechanisms, this new edition looks at the reliability of nanosystems, a key research topic in nanotechnology science. The author discusses the importance of safety investigation of critical infrastructures that have aged or have been exposed to varying operational conditions. This reference provides an asympt

  15. Magnetic Resonance Imaging Provides Added Value to the Prostate Cancer Prevention Trial Risk Calculator for Patients With Estimated Risk of High-grade Prostate Cancer Less Than or Equal to 10.

    Science.gov (United States)

    Kim, Eric H; Weaver, John K; Shetty, Anup S; Vetter, Joel M; Andriole, Gerald L; Strope, Seth A

    2017-04-01

    To determine the added value of prostate magnetic resonance imaging (MRI) to the Prostate Cancer Prevention Trial risk calculator. Between January 2012 and December 2015, 339 patients underwent prostate MRI prior to biopsy at our institution. MRI was considered positive if there was at least 1 Prostate Imaging Reporting and Data System 4 or 5 MRI suspicious region. Logistic regression was used to develop 2 models: biopsy outcome as a function of the (1) Prostate Cancer Prevention Trial risk calculator alone and (2) combined with MRI findings. When including all patients, the Prostate Cancer Prevention Trial with and without MRI models performed similarly (area under the curve [AUC] = 0.74 and 0.78, P = .06). When restricting the cohort to patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%, the model with MRI outperformed the Prostate Cancer Prevention Trial alone model (AUC = 0.69 and 0.60, P = .01). Within this cohort of patients, there was no significant difference in discrimination between models for those with previous negative biopsy (AUC = 0.61 vs 0.63, P = .76), whereas there was a significant improvement in discrimination with the MRI model for biopsy-naïve patients (AUC = 0.72 vs 0.60, P = .01). The use of prostate MRI in addition to the Prostate Cancer Prevention Trial risk calculator provides a significant improvement in clinical risk discrimination for patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%. Prebiopsy prostate MRI should be strongly considered for these patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Solid State Lighting Reliability Components to Systems

    CERN Document Server

    Fan, XJ

    2013-01-01

    Solid State Lighting Reliability: Components to Systems begins with an explanation of the major benefits of solid state lighting (SSL) when compared to conventional lighting systems including but not limited to long useful lifetimes of 50,000 (or more) hours and high efficacy. When designing effective devices that take advantage of SSL capabilities the reliability of internal components (optics, drive electronics, controls, thermal design) take on critical importance. As such a detailed discussion of reliability from performance at the device level to sub components is included as well as the integrated systems of SSL modules, lamps and luminaires including various failure modes, reliability testing and reliability performance. This book also: Covers the essential reliability theories and practices for current and future development of Solid State Lighting components and systems Provides a systematic overview for not only the state-of-the-art, but also future roadmap and perspectives of Solid State Lighting r...

  17. Development of reliability-corrected cost model for Small Earth Observation satellites

    Science.gov (United States)

    Chang, Young K.; Kim, Hongrae; Kang, Jin S.

    2013-07-01

In this study, a novel reliability-corrected cost model for estimating the development cost of highly agile small EO (Earth Observation) satellites is presented. In order to develop the cost model, a database has been constructed consisting of 49 satellites carrying electro-optical payloads and having a launch mass between 100 kg and 1,000 kg that have either been developed or are being developed within a time frame from 1991 to 2011. A top-down cost model was developed by analyzing the database statistically. The reliability-corrected CERs (Cost Estimating Relationships) developed in this study implement multiple parameter-based complexity indexes. In addition, the Cost Correction Factor (CCF) and Low Cost Small Satellite (LCSS) adjustment factor were newly introduced as additional parameters for cost estimation. The reliability-corrected CERs for 26 EO satellites were used for verification of the cost model developed in this study. They showed an absolute average error of approximately 7%. It is concluded that this cost model can provide cost estimates with a higher accuracy, as compared to conventional cost models such as USCM and SSCM. Finally, this paper also describes the results of cost estimation obtained by applying the developed cost model to highly agile small electro-optical satellites having specific performance requirements.
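The generic CER form underlying such models is a power law fitted in log space; the paper's complexity indexes and correction factors (CCF, LCSS) are not reproduced in this sketch, and the data are made up:

```python
import math

def fit_power_law_cer(masses_kg, costs):
    """Fit cost = A * mass^b by ordinary least squares on the logs."""
    xs = [math.log(m) for m in masses_kg]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - b * mean_x)
    return a, b

# Made-up data in which cost scales exactly linearly with mass
a, b = fit_power_law_cer([100.0, 200.0, 400.0, 800.0], [10.0, 20.0, 40.0, 80.0])
print(round(a, 3), round(b, 3))  # 0.1 1.0
```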

  18. System Reliability Evaluation Based on Convex Combination Considering Operation and Maintenance Strategy

    Directory of Open Access Journals (Sweden)

    Lijie Li

    2015-01-01

Full Text Available Approaches to system reliability evaluation are proposed for the cases where the components are independent and where the components have interactive relationships within the system. Starting from the higher requirements on system operational safety and economy, reliability-focused optimization models of multiobjective maintenance strategies were built. For safety-critical systems, pessimistic maintenance strategies are usually taken, and in these cases the system reliability evaluation also has to be treated pessimistically. For safety-uncritical systems, optimistic maintenance strategies are usually taken, and in these circumstances the system reliability evaluation also has to be treated optimistically. Reasonable maintenance strategies and their corresponding reliability evaluations can then be obtained through a convex combination of the above two cases. With a high-speed train system as the example background, the proposed method is verified by combining actual failure data with maintenance data. Results demonstrate that the proposed study can provide a new system reliability calculation method and a solution for selecting and optimizing multiobjective operational strategies with consideration of system safety and economic requirements. A theoretical basis is also provided for scientifically estimating the reliability of a high-speed train system and formulating reasonable maintenance strategies.
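The convex-combination step itself is simple to state; the weight and reliability values below are illustrative assumptions, not figures from the high-speed train case study:

```python
def combined_reliability(r_pessimistic, r_optimistic, weight):
    """Convex combination of the two bounding evaluations; weight = 1
    recovers the safety-critical (pessimistic) case, weight = 0 the
    safety-uncritical (optimistic) one."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return weight * r_pessimistic + (1.0 - weight) * r_optimistic

# Lean 75% toward the pessimistic evaluation
print(round(combined_reliability(0.90, 0.98, 0.75), 2))  # 0.92
```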

  19. Reliability of procedures used for scaling loudness

    DEFF Research Database (Denmark)

    Jesteadt, Walt; Joshi, Suyash Narendra

    2013-01-01

(ME, MP, CLS; MP, ME, CLS; CLS, ME, MP; CLS, MP, ME), and the order was reversed on the second visit. This design made it possible to compare the reliability of estimates of the slope of the loudness function across procedures in the same listeners. The ME data were well fitted by an inflected exponential (INEX) function, but a modified power law was used to obtain slope estimates for both ME and MP. ME and CLS were more reliable than MP. CLS results were consistent across groups, but ME and MP results differed across groups in a way that suggested influence of experience with CLS. Although CLS

  20. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  1. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

    Full Text Available Software companies are now keen to provide secure software with respect to accuracy and reliability of their products especially related to the software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper attempts to propose a hybrid estimator algorithm and model which incorporates quality metrics, reliability factor, and the security factor with a fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added with the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.

  2. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation.

    Science.gov (United States)

    Murugesan, Senthil Kumar; Balasubramanian, Chidhambara Rajan

    2015-01-01

    Software companies are now keen to provide secure software with respect to accuracy and reliability of their products especially related to the software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper attempts to propose a hybrid estimator algorithm and model which incorporates quality metrics, reliability factor, and the security factor with a fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added with the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.

  3. System for estimating fatigue damage

    Science.gov (United States)

    LeMonds, Jeffrey; Guzzo, Judith Ann; Liu, Shaopeng; Dani, Uttara Ashwin

    2017-03-14

    In one aspect, a system for estimating fatigue damage in a riser string is provided. The system includes a plurality of accelerometers which can be deployed along a riser string and a communications link to transmit accelerometer data from the plurality of accelerometers to one or more data processors in real time. With data from a limited number of accelerometers located at sensor locations, the system estimates an optimized current profile along the entire length of the riser including riser locations where no accelerometer is present. The optimized current profile is then used to estimate damage rates to individual riser components and to update a total accumulated damage to individual riser components. The number of sensor locations is small relative to the length of a deepwater riser string, and a riser string several miles long can be reliably monitored along its entire length by fewer than twenty sensor locations.
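    The per-component damage update described above can be illustrated with Miner's rule, a standard way to accumulate fatigue damage from cycle counts. This is a hedged sketch only; the record does not disclose the system's actual damage model.

    ```python
    def update_accumulated_damage(accumulated, cycle_counts, cycles_to_failure):
        """Miner's rule: add n_i / N_i for each stress bin, where n_i is the
        number of cycles experienced and N_i the cycles to failure at that
        stress level."""
        for n, N in zip(cycle_counts, cycles_to_failure):
            accumulated += n / N
        return accumulated

    # 1000 cycles in a bin with N = 1e6 and 50 cycles in a bin with N = 2e4:
    damage = update_accumulated_damage(0.0, [1000, 50], [1e6, 2e4])
    # Failure is conventionally predicted when accumulated damage reaches ~1.0.
    ```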

  4. Estimation and valuation in accounting

    Directory of Open Access Journals (Sweden)

    Cicilia Ionescu

    2014-03-01

    Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. The International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a process used continually in order to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in contracts with third parties, in supporting documents, etc. However, the uncertainties under which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items composing the financial statements must be determined by the use of estimates.

  5. Reliable design of electronic equipment an engineering guide

    CERN Document Server

    Natarajan, Dhanasekharan

    2014-01-01

    This book explains reliability techniques with examples from electronics design for the benefit of engineers. It presents the application of de-rating, FMEA, overstress analyses and reliability improvement tests for designing reliable electronic equipment. Adequate information is provided for designing a computerized reliability database system to support the application of the techniques by designers. Pedantic terms and the associated mathematics of the reliability engineering discipline are excluded for the sake of comprehension and practical application. This book offers excellent support

  6. Models of network reliability analysis, combinatorics, and Monte Carlo

    CERN Document Server

    Gertsbakh, Ilya B

    2009-01-01

    Unique in its approach, Models of Network Reliability: Analysis, Combinatorics, and Monte Carlo provides a brief introduction to Monte Carlo methods along with a concise exposition of reliability theory ideas. From there, the text investigates a collection of principal network reliability models, such as terminal connectivity for networks with unreliable edges and/or nodes, network lifetime distribution in the process of its destruction, network stationary behavior for renewable components, importance measures of network elements, reliability gradient, and network optimal reliability synthesis
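    Terminal connectivity for a network with unreliable edges, one of the principal models listed, can be estimated by crude Monte Carlo along these lines. This is an illustrative sketch, not code from the book; the example network and edge probability are invented.

    ```python
    import random

    def connected(n_nodes, edges, up, s, t):
        """Depth-first search over the edges that are up; True if t is
        reachable from s."""
        adj = {i: [] for i in range(n_nodes)}
        for (u, v), alive in zip(edges, up):
            if alive:
                adj[u].append(v)
                adj[v].append(u)
        stack, seen = [s], {s}
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    def mc_terminal_reliability(n_nodes, edges, p_up, s, t,
                                n_trials=20000, seed=1):
        """Fraction of sampled edge states in which s and t stay connected."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            up = [rng.random() < p_up for _ in edges]
            hits += connected(n_nodes, edges, up, s, t)
        return hits / n_trials

    # Two parallel two-edge paths between nodes 0 and 3, each edge up w.p. 0.9.
    edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
    r_hat = mc_terminal_reliability(4, edges, 0.9, 0, 3)
    # Exact series-parallel answer for comparison: 1 - (1 - 0.9**2)**2 = 0.9639
    ```

    The same sampling loop extends directly to unreliable nodes or to the other indices mentioned (lifetime distribution, importance measures) by changing what is recorded per trial.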

  7. Reliability of optical fibres and components final report of COST 246

    CERN Document Server

    Griffioen, Willem; Gadonna, Michel; Limberger, Hans; Heens, Bernard; Knuuttila, Hanna; Kurkjian, Charles; Mirza, Shehzad; Opacic, Aleksandar; Regio, Paola; Semjonov, Sergei

    1999-01-01

    Reliability of Optical Fibres and Components reports the findings of COST 246 (1993-1998) - European research initiative in the field of optical telecommunications. Experts in the materials and reliability field of optical fibres and components have contributed to this unique study programme. The results, conclusions and achievements of their work have been obtained through joint experimentation and discussion with representatives from manufacturing and research groups. Topics covered include: Lifetime estimation; Failure mechanisms; Ageing test methods; Field data and service environments for components. For the first time the reader can explore the reliability of products and examine the results and conclusions in published form. This comprehensive volume is intended to provide a deeper understanding of the reliability of optical fibres and components. The book will be extremely useful to all scientists and practitioners involved in the industry.

  8. On Reliability and Validity

    OpenAIRE

    MARTINS, Gilberto de Andrade

    2007-01-01

    In order to assess, evaluate or quantify financial, equity, auditing and controllership oriented data related both to the private and public sectors, the practitioner or the researcher has to pay close attention to the significance and accuracy criteria of the research tools he is about to employ: validity and reliability. The validity criterion refers to the instrument's capacity to assess what it intends to assess; reliability deals with the constancy of results when the same individual or o...

  9. Reliability and Availability Studies in the RIA Linac Driver

    CERN Document Server

    Schnirman-Lessner, Eliane

    2005-01-01

    The RIA facility will include various complex systems and must provide radioactive beams to many users simultaneously. The availability of radioactive beams for most experiments at the fully-commissioned facility should be as high as possible within design cost limitations. To make a realistic estimate of the achievable reliability a detailed analysis is required. The RIA driver linac is a complex machine containing a large number of SC resonators and capable of accelerating multiple-charge-state beams. At the pre-CDR stage of the design it is essential to identify critical facility subsystem failures that can prevent the driver linac from operating. The reliability and availability of the driver linac are studied using expert information and data from operating machines such as ATLAS, APS, JLab, and LANL. Availability studies are performed with a Monte-Carlo simulation code previously applied to availability assessments of the NLC facility [http://www.slac.stanford.edu/xorg/accelops/Full/LCoptsfull] and the ...

  10. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    Science.gov (United States)

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.
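    Test-retest reliability of the kind reported here is commonly quantified with an intraclass correlation coefficient. Below is a minimal sketch of ICC(2,1) for an n-subjects-by-k-sessions table; it is an illustration of the statistic, not the authors' pipeline, and the example numbers are invented.

    ```python
    def icc_2_1(data):
        """ICC(2,1): two-way random effects, absolute agreement, single
        measurement, for an n-subjects x k-sessions table of scores."""
        n, k = len(data), len(data[0])
        grand = sum(x for row in data for x in row) / (n * k)
        row_means = [sum(row) / k for row in data]
        col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
        msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
        msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # sessions
        ss_total = sum((x - grand) ** 2 for row in data for x in row)
        mse = (ss_total - (n - 1) * msr - (k - 1) * msc) / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Three subjects, each measured in two sessions:
    icc = icc_2_1([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0]])
    # -> 0.6
    ```

    Applied per connection (or per voxel pair) across scan sessions, this yields exactly the kind of reliability map the study compares across preprocessing choices.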

  11. Comparison of maintenance scheme effects on power transformer reliability performance

    Directory of Open Access Journals (Sweden)

    I. C. Durán

    2015-11-01

    Full Text Available Power transformer failures have a significant impact on electrical system reliability, as they compromise electricity delivery. Failures also affect the operational cost of electric utilities, since they jeopardize the fulfilment of power commitments. The power transformer failure rate can be modeled from its health condition, which is determined from different condition parameters. This paper aims to compare different maintenance schemes that allow the asset to be available for the longest possible time. The power transformer operation is represented considering actual electric load profiles and temperature. Electrical and thermal transformer parameters are taken from manufacturer tests and technical standards. An electrical-thermal model is employed to estimate the life span reduction and temperature changes. From life span and loading, the health condition is determined, as are the effects of different maintenance schemes. The failure rates are calculated for unmaintained conditions and for different maintenance schemes, which makes it possible to determine their effects on transformer reliability. Optimal maintenance schemes are determined, and the expected benefits in failure rate reduction, asset reliability and life span extension are presented. As a result, it was found that certain time-based maintenance schemes provide better transformer reliability performance than condition-based schemes, reducing interruptions and extending the life of the transformer.

  12. Comparison of the effectiveness of phalanges vs. humeri and femurs to estimate lizard age with skeletochronology

    Energy Technology Data Exchange (ETDEWEB)

    Comas, M.; Reguera, S.; Zamora-Camacho, F.J.; Salvado, H.; Moreno-Rueda, G.

    2016-07-01

    Skeletochronology allows estimation of lizard age with a single capture (from a bone), making long-term monitoring unnecessary. Nevertheless, this method often involves the death of the animal to obtain the bone. We tested the reliability of skeletochronology of phalanges (which may be obtained without killing) by comparing the estimated age from femurs and humeri with the age estimated from phalanges. Our results show that skeletochronology of phalanges is a reliable method to estimate age in lizards, as cross-section readings from all bones studied presented high correlation and repeatability regardless of the bone chosen. This approach provides an alternative to the killing of lizards for skeletochronology studies. (Author)

  13. Toward Improved Reliability of Seasonal Hydrologic Forecast: Accounting for Initial Condition and State-Parameter Uncertainties

    Science.gov (United States)

    DeChant, C. M.; Moradkhani, H.

    2012-12-01

    Providing reliable estimates of seasonal water supply is a primary goal in operational hydro-meteorological prediction. In order to achieve this goal, it is accepted that hydrologists must accurately estimate forecast initial conditions (land surface states prior to forecast) and the future climate conditions, and quantify the uncertainty in these two forecast stages to provide a full estimation of the uncertainty in a given forecast. Recent work has highlighted the benefits of such a framework through advancing both land surface state estimation techniques and future climate estimation/modeling, within the operational Ensemble Streamflow Prediction (ESP) methodology. Often overlooked in this framework, the uncertainty in land surface state estimates play a key role in providing reliable seasonal forecasts. In order to quantify and reduce this uncertainty, land surface state-parameter estimation, through ensemble data assimilation, is performed with observations of snow and streamflow in a mountainous basin. Through incorporation of both snow and streamflow data for estimation of land surface states and parameters, the quantity of water stored at the land surface can be estimated, and parameter uncertainty can be estimated for seasonal simulations. With the inclusion of parameter uncertainty in the hydrologic forecasting framework, more robust quantification of hydrologic uncertainty is possible, leading to more useful forecasts for end users. This study seeks to examine the role of combined state-parameter estimation for characterization of initial conditions with the potential to be formally adopted in operational ESP framework, and validates results with probabilistic verification of both ESP and ESP with state-parameter estimation.
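    The combined state-parameter estimation step can be illustrated with a minimal ensemble Kalman analysis over an augmented state vector. This is a hedged sketch with invented variables (a snow-storage state and a melt parameter), not the study's actual assimilation code, and it handles only a single scalar observation.

    ```python
    import random

    def enkf_update(ensemble, obs, obs_sigma, h, rng):
        """One stochastic EnKF analysis step over an augmented vector of
        states and parameters. Each member is a list [state..., parameter...];
        h(member) returns its predicted observation."""
        m, dim = len(ensemble), len(ensemble[0])
        preds = [h(x) for x in ensemble]
        pred_mean = sum(preds) / m
        means = [sum(x[i] for x in ensemble) / m for i in range(dim)]
        var_y = sum((p - pred_mean) ** 2 for p in preds) / (m - 1)
        # Cross-covariance of each augmented component with the prediction:
        cov = [sum((x[i] - means[i]) * (p - pred_mean)
                   for x, p in zip(ensemble, preds)) / (m - 1)
               for i in range(dim)]
        gains = [c / (var_y + obs_sigma ** 2) for c in cov]  # Kalman gains
        analysis = []
        for x, p in zip(ensemble, preds):
            y_pert = obs + rng.gauss(0.0, obs_sigma)  # perturbed observation
            analysis.append([xi + g * (y_pert - p) for xi, g in zip(x, gains)])
        return analysis

    # Illustrative prior: snow storage ~N(50, 10) plus a melt parameter,
    # assimilating a snow observation of 60 (error sigma = 2):
    rng = random.Random(0)
    prior = [[rng.gauss(50.0, 10.0), rng.gauss(0.1, 0.02)] for _ in range(200)]
    posterior = enkf_update(prior, 60.0, 2.0, lambda x: x[0], rng)
    ```

    Because parameters sit in the same augmented vector as states, any parameter correlated with the predicted observation is adjusted by the same gain computation, which is the mechanism behind joint state-parameter estimation; the posterior spread then carries parameter uncertainty into the seasonal forecast.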

  14. Composite system reliability evaluation using sequential Monte Carlo simulation

    Science.gov (United States)

    Jonnavithula, Annapoorani

    Monte Carlo simulation methods can be effectively used to assess the adequacy of composite power system networks. The sequential simulation approach is the most fundamental technique available and can be used to provide a wide range of indices. It can also be used to provide estimates which can serve as benchmarks against which other approximate techniques can be compared. The focus of this research work is on the reliability evaluation of composite generation and transmission systems with special reference to frequency and duration related indices and estimated power interruption costs at each load bus. One of the main objectives is to use the sequential simulation method to create a comprehensive technique for composite system adequacy evaluation. This thesis recognizes the need for an accurate representation of the load model at the load buses which depends on the mix of customer sectors at each bus. Chronological hourly load curves are developed in this thesis, recognizing the individual load profiles of the customers at each load bus. Reliability worth considerations are playing an ever increasing role in power system planning and operation. Different methods for bus outage cost evaluation are proposed in this thesis. It may not be computationally feasible to use the sequential simulation method with time varying loads at each bus in large electric power system networks. Time varying load data may also not be available at each bus. This research work uses the sequential methodology as a fundamental technique to calibrate other non sequential methods such as the state sampling and state transition sampling techniques. Variance reduction techniques that improve the efficiency of the sequential simulation procedure are investigated as a part of this research work. Pertinent features that influence reliability worth assessment are also incorporated. All the proposed methods in this thesis are illustrated by application to two reliability test systems. In addition ...
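    The sequential simulation approach referred to above advances every component's state hour by hour against a chronological load curve. A simplified generation-adequacy sketch of that loop is shown below; the unit data and load are invented for illustration and are not from the thesis.

    ```python
    import random

    def sequential_lole(unit_caps, fail_rate, repair_rate, hourly_load,
                        years=300, seed=7):
        """Sequential Monte Carlo estimate of loss-of-load expectation
        (hours per simulated year). Each unit is a two-state Markov chain
        with per-hour failure and repair probabilities, stepped against a
        chronological hourly load curve."""
        rng = random.Random(seed)
        lol_hours = 0
        for _ in range(years):
            up = [True] * len(unit_caps)
            for load in hourly_load:
                for i in range(len(up)):  # hourly state transitions
                    if up[i]:
                        up[i] = rng.random() >= fail_rate
                    else:
                        up[i] = rng.random() < repair_rate
                available = sum(c for c, u in zip(unit_caps, up) if u)
                if available < load:
                    lol_hours += 1
        return lol_hours / years

    # Three 50 MW units (fail 0.001/h, repair 0.02/h) vs a flat 120 MW load
    # over 1000 chronological hours; any single outage causes a deficit.
    lole = sequential_lole([50, 50, 50], 0.001, 0.02, [120] * 1000)
    ```

    Because the chain is stepped chronologically, frequency and duration indices fall out of the same loop by recording when deficits begin and how long they last, which is precisely why the sequential method can serve as the benchmark for state-sampling approximations.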

  15. Gearbox Reliability Collaborative (GRC) Description and Loading

    Energy Technology Data Exchange (ETDEWEB)

    Oyague, F.

    2011-11-01

    This document describes simulated turbine load cases in accordance with the IEC 61400-1 Ed. 3 standard, which is representative of the typical wind turbine design process. The information presented herein is intended to provide a broad understanding of the Gearbox Reliability Collaborative 750kW drivetrain and turbine configuration. In addition, fatigue and ultimate strength drivetrain loads resulting from simulations are presented. This information provides the basis for the analytical work of the Gearbox Reliability Collaborative effort.

  16. Telecommunications system reliability engineering theory and practice

    CERN Document Server

    Ayers, Mark L

    2012-01-01

    "Increasing system complexity requires new, more sophisticated tools for system modeling and metric calculation. Bringing the field up to date, this book provides telecommunications engineers with practical tools for analyzing, calculating, and reporting availability, reliability, and maintainability metrics. It gives the background in system reliability theory and covers in-depth applications in fiber optic networks, microwave networks, satellite networks, power systems, and facilities management. Computer programming tools for simulating the approaches presented, using the Matlab software suite, are also provided"

  17. Reliability Correction for Functional Connectivity: Theory and Implementation

    Science.gov (United States)

    Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng

    2016-01-01

    Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
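    The attenuation correction described here follows Spearman's classical formula, dividing the observed correlation by the geometric mean of the two regional reliabilities. A minimal sketch with illustrative numbers (not values from the study):

    ```python
    import math

    def disattenuate(r_observed, rel_x, rel_y):
        """Spearman's correction for attenuation: r_true ~ r_obs /
        sqrt(rel_x * rel_y). Clipped to [-1, 1], since noisy reliability
        estimates can push the corrected value past the bounds."""
        r = r_observed / math.sqrt(rel_x * rel_y)
        return max(-1.0, min(1.0, r))

    # An observed correlation of 0.30 between two regions with test-retest
    # reliabilities of 0.45 and 0.80 disattenuates to 0.50:
    r = disattenuate(0.30, 0.45, 0.80)
    ```

    This is why a low-reliability region such as the medial temporal lobe can appear only weakly coupled to the default network before correction: the attenuation scales with the square root of the reliability product.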

  18. Photovoltaic module reliability workshop

    Science.gov (United States)

    Mrig, L.

    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the U.S., PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  19. Photovoltaic module reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Mrig, L. (ed.)

    1990-01-01

    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  20. Using small area estimation and Lidar-derived variables for multivariate prediction of forest attributes

    Science.gov (United States)

    F. Mauro; Vicente Monleon; H. Temesgen

    2015-01-01

    Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...

  1. Critically re-evaluating a common technique: Accuracy, reliability, and confirmation bias of EMG.

    Science.gov (United States)

    Narayanaswami, Pushpa; Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B

    2016-01-19

    (1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%-90%); specificity was 71%, CI 56%-85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41-0.81); interrater reliability was lower (κ 0.53, CI 0.35-0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI -13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. © 2015 American Academy of Neurology.
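    Agreement statistics like the kappa values reported above can be computed as follows. This is an illustrative Cohen's-kappa sketch with invented ratings mimicking the study design (20 studies read twice), not the study's data.

    ```python
    def cohens_kappa(ratings_a, ratings_b):
        """Cohen's kappa for two sets of paired categorical ratings:
        (p_observed - p_expected) / (1 - p_expected)."""
        n = len(ratings_a)
        cats = sorted(set(ratings_a) | set(ratings_b))
        p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                    for c in cats)
        return (p_obs - p_exp) / (1 - p_exp)

    # Two blinded readings of the same 20 EMG studies, 17/20 agreements:
    first = ["normal"] * 10 + ["radiculopathy"] * 10
    second = (["normal"] * 9 + ["radiculopathy"]
              + ["normal"] * 2 + ["radiculopathy"] * 8)
    kappa = cohens_kappa(first, second)
    # p_obs = 0.85, p_exp = 0.5, so kappa = 0.7
    ```

    A kappa of 0.61-0.80 is conventionally read as "good" agreement, which is how the intrarater value of 0.61 above sits at the lower edge of that band.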

  2. A Review of Expertise and Judgment Processes for Risk Estimation

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    A major challenge of risk and reliability analysis for human errors or hardware failures is the need to enlist expert opinion in areas for which adequate operational data are not available. Experts enlisted in this capacity provide probabilistic estimates of reliability, typically comprised of a measure of central tendency and uncertainty bounds. While formal guidelines for expert elicitation are readily available, they largely fail to provide a theoretical basis for expertise and judgment. This paper reviews expertise and judgment in the context of risk analysis; overviews judgment biases, the role of training, and multivariate judgments; and provides guidance on the appropriate use of atomistic and holistic judgment processes.

  3. Reliability-Based Calibration of Partial Safety Factors for Wind Turbine Blades

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Branner, Kim; Sørensen, John Dalsgaard

    2011-01-01

    The reliability of a wind turbine blade can be estimated using a response surface technique, the First Order Reliability Method (FORM) and Monte Carlo simulation. The response surface is here estimated based on nonlinear finite element analysis, by which nonlinear failure modes due to e.g. buckling ... of the unidirectional laminas. For this failure mode the reliability is estimated along the centreline of the main spar cap. The results show significant variations in the reliability along the blade length.

  4. Introduction to quality and reliability engineering

    CERN Document Server

    Jiang, Renyan

    2015-01-01

    This book presents the state-of-the-art in quality and reliability engineering from a product life cycle standpoint. Topics in reliability include reliability models, life data analysis and modeling, design for reliability and accelerated life testing, while topics in quality include design for quality, acceptance sampling and supplier selection, statistical process control, production tests such as screening and burn-in, warranty and maintenance. The book provides comprehensive insights into two closely related subjects, and includes a wealth of examples and problems to enhance reader comprehension and link theory and practice. All numerical examples can be easily solved using Microsoft Excel. The book is intended for senior undergraduate and post-graduate students in related engineering and management programs such as mechanical engineering, manufacturing engineering, industrial engineering and engineering management programs, as well as for researchers and engineers in the quality and reliability fields. D...

  5. Reliability Analysis of a Mono-Tower Platform

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard

    1990-01-01

    In this paper, a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross sections and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN ... relations. In determining the cumulative fatigue damage, Palmgren-Miner's rule is applied. Element reliability, as well as systems reliability, is estimated using first-order reliability methods (FORM). The sensitivity of the systems reliability to various parameters is investigated. It is shown...

  6. Intrarater range of motion reliability in cerebral palsy: a comparison of assessment methods.

    Science.gov (United States)

    Glanzman, Allan M; Swenson, Amy E; Kim, Heakyung

    2008-01-01

    To compare intrarater reliability of goniometry performed with and without an assistant and compare the reliability of the Staheli and Thomas tests of hip extension. Visual estimation was also evaluated as a method of range of motion assessment. Twenty-five children with cerebral palsy (50 legs) were evaluated in a blind fashion. Intraclass correlations (ICCs) ranged from 0.9701 to 0.9804 and from 0.9685 to 0.9822 for 1 and 2-person goniometry, respectively. Pearson product moment correlations of 0.8944 to 0.9553 for visual estimation were established. Staheli and Thomas test ICCs were 0.9793 and 0.9804, respectively. Goniometry with 1 and 2 assessors both produced ICCs in the excellent range, as did the Staheli and Thomas test measurements of hip extension. The use of an assistant did not provide additional benefit. Visual estimation showed excellent correlation with goniometry.

  7. Cost-estimating for commercial digital printing

    Science.gov (United States)

    Keif, Malcolm G.

    2007-01-01

    The purpose of this study is to document current cost-estimating practices used in commercial digital printing. A research study was conducted to determine the use of cost-estimating in commercial digital printing companies. This study answers the questions: 1) What methods are currently being used to estimate digital printing? 2) What is the relationship between estimating and pricing digital printing? 3) To what extent, if at all, do digital printers use full-absorption, all-inclusive hourly rates for estimating? Three different digital printing models were identified: 1) Traditional print providers, who supplement their offset presswork with digital printing for short-run color and versioned commercial print; 2) "Low-touch" print providers, who leverage the power of the Internet to streamline business transactions with digital storefronts; 3) Marketing solutions providers, who see printing less as a discrete manufacturing process and more as a component of a complete marketing campaign. Each model approaches estimating differently. Understanding and predicting costs can be extremely beneficial. Establishing a reliable system to estimate those costs can be somewhat challenging though. Unquestionably, cost-estimating digital printing will increase in relevance in the years ahead, as margins tighten and cost knowledge becomes increasingly more critical.

  8. Markov chain modelling of reliability analysis and prediction under mixed mode loading

    Science.gov (United States)

    Singh, Salvinder; Abdullah, Shahrum; Nik Mohamed, Nik Abdullah; Mohd Noorani, Mohd Salmi

    2015-03-01

    The reliability assessment of an automobile crankshaft provides an important understanding in dealing with the design life of the component, in order to eliminate or reduce the likelihood of failure and safety risks. The failure of a crankshaft is considered catastrophic, as it leads to severe failure of the engine block and its other connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading is studied using the Markov Chain Model. The Markov Chain is modelled using a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimate of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve and the mean time to failure. It is shown how the various properties of the shape parameter can be used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns in the hazard rate can be used to improve the design and increase the life cycle, based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost-effective reliability analysis in contrast to costly and lengthy experimental techniques.
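    The Weibull quantities mentioned (reliability, hazard rate, mean time to failure) follow standard closed forms. A minimal sketch of those formulas, not the paper's crankshaft-specific model:

    ```python
    import math

    def weibull_reliability(t, beta, eta):
        """R(t) = exp(-(t/eta)^beta) for shape beta and scale eta."""
        return math.exp(-((t / eta) ** beta))

    def weibull_hazard(t, beta, eta):
        """h(t) = (beta/eta) * (t/eta)^(beta-1). Shape beta < 1, = 1 and > 1
        gives the decreasing, constant and increasing legs of the bathtub
        curve, respectively."""
        return (beta / eta) * (t / eta) ** (beta - 1)

    def weibull_mttf(beta, eta):
        """Mean time to failure: eta * Gamma(1 + 1/beta)."""
        return eta * math.gamma(1.0 + 1.0 / beta)

    # beta = 1 reduces to the exponential model, for which MTTF = eta.
    ```

    Fitting the shape parameter from bending/torsion cycle data and reading these three functions off the fit is what connects the estimated Weibull parameters to the bathtub curve and MTTF discussed above.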

  9. Reliability and safety program plan outline for the operational phase of a waste isolation facility

    Energy Technology Data Exchange (ETDEWEB)

    Ammer, H.G.; Wood, D.E.

    1977-03-28

    A Reliability and Safety Program plan outline has been prepared for the operational phase of a Waste Isolation Facility. The program includes major functions of risk assessment, technical support activities, quality assurance, operational safety, configuration monitoring, reliability analysis and support and coordination meetings. Detailed activity or task descriptions are included for each function. Activities are time-phased and presented in the PERT format for scheduling and interactions. Task descriptions include manloading, travel, and computer time estimates to provide data for future costing. The program outlined here will be used to provide guidance from a reliability and safety standpoint to design, procurement, construction, and operation of repositories for nuclear waste. These repositories are to be constructed under the National Waste Terminal Storage program under the direction of the Office of Waste Isolation, Union Carbide Corp. Nuclear Division.

  10. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...

  11. Reliability based structural design

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2013-01-01

    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A

  12. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...

  13. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  14. Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    1989-01-01

    In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields....

  15. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  16. Parametric Mass Reliability Study

    Science.gov (United States)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  17. Reliability of semiology description.

    Science.gov (United States)

    Heo, Jae-Hyeok; Kim, Dong Wook; Lee, Seo-Young; Cho, Jinwhan; Lee, Sang-Kun; Nam, Hyunwoo

    2008-01-01

    Seizure semiology is important for classifying patients' epilepsy. Physicians usually get most of the seizure information from observers, though there have been few reports on the reliability of the observers' descriptions. This study aims at determining the reliability of observers' descriptions of the semiology. We included 92 patients who had their habitual seizures recorded during video-EEG monitoring. We compared the semiology described by the observers with that recorded on the videotape, and reviewed which characteristics of the observers affected the reliability of their reported data. The classification of seizures and the individual components of the semiology based only on the observer description was somewhat discordant compared with the findings from the videotape (correct classification, 85%). The descriptions of some ictal behaviors such as oroalimentary automatism, tonic/dystonic limb posturing, and head version were relatively accurate, but those of motionless staring and hand automatism were less accurate. The directions specified by the observers were relatively correct. The accuracy of the description was related to the educational level of the observers. Much of the information described by well-educated observers is reliable. However, every physician should keep in mind the limitations of this information and use it cautiously.

  18. Reliability measurement without limits

    NARCIS (Netherlands)

    Reidsma, Dennis; Carletta, J.

    In computational linguistics, a reliability measurement of 0.8 on some statistic such as $\kappa$ is widely thought to guarantee that hand-coded data is fit for purpose, with lower values suspect. We demonstrate that the main use of such data, machine learning, can tolerate data with a low
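    The κ statistic referred to here is chance-corrected inter-annotator agreement. A minimal sketch of Cohen's kappa for two annotators (the formula, not the paper's tolerance analysis): κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected by chance from each annotator's label distribution.

    ```python
    def cohens_kappa(a, b):
        """Cohen's kappa for two annotators' label sequences of equal length."""
        assert len(a) == len(b) and len(a) > 0
        n = len(a)
        labels = set(a) | set(b)
        # Observed agreement: fraction of items labelled identically.
        p_o = sum(x == y for x, y in zip(a, b)) / n
        # Chance agreement: product of each annotator's marginal label rates.
        p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
        return (p_o - p_e) / (1 - p_e)
    ```

    With agreement no better than chance the statistic is 0; with perfect agreement on a non-uniform label distribution it is 1, which is why raw percent agreement alone overstates reliability.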

  19. High reliability organizations

    NARCIS (Netherlands)

    Gallis, R.; Zwetsloot, G.I.J.M.

    2014-01-01

    High Reliability Organizations (HRO’s) are organizations that constantly face serious and complex (safety) risks yet succeed in realising an excellent safety performance. In such situations acceptable levels of safety cannot be achieved by traditional safety management only. HRO’s manage safety

  20. Postcranial sex estimation of individuals considered Hispanic.

    Science.gov (United States)

    Tise, Meredith L; Spradley, M Katherine; Anderson, Bruce E

    2013-01-01

    When forensic anthropologists estimate the sex of Hispanic skeletal remains using non-population-specific metric methods, males frequently misclassify as female. To help improve these methods, this research uses postcranial measurements from United States-Mexico border migrant fatalities at the Pima County Office of the Medical Examiner in Tucson, Arizona, as well as Hispanic individuals from the Forensic Anthropology Data Bank. Using a total of 114 males and 28 females, sectioning points and discriminant functions provide classification rates as high as 89.43% for Hispanic individuals. A test sample assessed the reliability of these techniques, resulting in accuracy up to 99.65%. The clavicle maximum length measurement provides the best univariate estimate of sex, while the radius provides the best multivariate estimate of sex. The results of this research highlight the need for population-specific data in the creation of a biological profile, especially when working with individuals considered Hispanic. © 2012 American Academy of Forensic Sciences.
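    A univariate sectioning point of the kind described works by thresholding a single measurement at the midpoint between the male and female sample means. A minimal sketch of the idea; the means and the test value below are hypothetical placeholders, not the paper's published values for the clavicle:

    ```python
    def sectioning_point(male_mean, female_mean):
        """Midpoint between the male and female sample means."""
        return (male_mean + female_mean) / 2.0

    def classify_sex(measurement, point, male_larger=True):
        """Classify a single measurement against a sectioning point.

        male_larger indicates which group has the larger mean for this
        measurement (true for most postcranial long-bone lengths).
        """
        if male_larger:
            return "male" if measurement > point else "female"
        return "female" if measurement > point else "male"
    ```

    Population specificity matters precisely because these midpoints shift between populations: a sectioning point derived from one reference sample systematically misclassifies individuals from another.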