Sample records for reliable observational records

  1. [Reliability of Primary Care computerised medication records].

    García-Molina Sáez, Celia; Urbieta Sanz, Elena; Madrigal de Torres, Manuel; Piñera Salmerón, Pascual; Pérez Cárceles, María D


To quantify and evaluate the reliability of Primary Care (PC) computerised medication records as an information source on patients' chronic medications, and to identify factors associated with the presence of discrepancies. A descriptive cross-sectional study. General Referral Hospital in Murcia. Patients admitted to the cardiology-chest diseases unit from February to April 2013, on home treatment, who agreed to participate in the study. The reliability of Primary Care computerised medication records was evaluated by analysing the concordance, through the identification of discrepancies, between the active medication in these records and that recorded in a pharmacist interview with the patient/caregiver. Factors associated with the presence of discrepancies were identified using multivariate logistic regression. The study included a total of 308 patients with a mean age of 70.9 years (SD 13.0). The concordance of active ingredients was 83.7%, and this decreased to 34.7% when the dosage was taken into account. Discrepancies were found in 97.1% of patients. The most frequent discrepancies were omission of frequency (35.6%), commission (drug added unjustifiably) (14.6%), and drug omission (12.7%). Age over 65 years (OR 1.98 [1.08 to 3.64]), multiple chronic diseases (OR 1.89 [1.04 to 3.42]), and having a narcotic or psychotropic drug prescribed (OR 2.22 [1.16 to 4.24]) were the factors associated with the presence of discrepancies. Primary Care computerised medication records, although of undoubted interest, are not reliable enough to be used as the sole source of information on patients' chronic medications when admitted to hospital. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
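
    The factor analysis summarised above (adjusted odds ratios with 95% confidence intervals from a multivariate logistic regression) can be sketched as follows. This is a minimal, hypothetical illustration: the column names (age_over_65, multimorbidity, narcotic_or_psychotropic, discrepancy) are invented for the example and the code is not the authors' analysis.

```python
# Minimal sketch of a multivariate logistic regression yielding odds ratios
# with 95% CIs for the presence of discrepancies. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def discrepancy_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a logistic model for presence of discrepancies and return ORs with CIs."""
    predictors = ["age_over_65", "multimorbidity", "narcotic_or_psychotropic"]
    X = sm.add_constant(df[predictors])          # intercept + binary predictors
    model = sm.Logit(df["discrepancy"], X).fit(disp=False)
    or_table = pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(model.conf_int()[0]),   # 95% CI bounds on the OR scale
        "CI_high": np.exp(model.conf_int()[1]),
    })
    return or_table.drop(index="const")
```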

  2. Software for recording observational files.

    Mendo, A H; Argilaga, M T; Rivera, M A


We present Codex, a new program written in Visual Basic 3.0, intended as a tool for observational methodology. Its fundamental objective is to record motor and verbal behavior using the data types proposed by Bakeman and Quera (1995, 1996), together with the field formats proposed by Hall (1963), Weick (1968), Hutt and Hutt (1974), and Anguera (1979). It is designed to allow data interchange between specific programs used in observational methodology (SDIS-GSEQ, The Observer, and Theme) and other general-purpose programs (spreadsheets, statistics applications, word processors, sound cards, etc.).

  3. Updating piping reliability with field performance observations

    Schweckendiek, T.; Vrouwenvelder, A.C.W.M.; Calle, E.O.F.


    Flood defenses are crucial elements in flood risk mitigation in developed countries, especially in deltaic areas. In the Netherlands, the VNK2 project is currently analyzing the reliability of all primary flood defenses as part of a nationwide flood risk analysis. In this project, as in most other r

  4. Reliability of recordings of subgingival calculus detected using an ultrasonic device.

    Corraini, Priscila; López, Rodrigo


To assess the intra-examiner reliability of recordings of subgingival calculus detected using an ultrasonic device, and to investigate the influence of subject-, tooth- and site-level factors on the reliability of these subgingival calculus recordings. On two occasions, within a 1-week interval, 147 adult periodontitis patients received a full-mouth clinical periodontal examination by a single trained examiner. Duplicate subgingival calculus recordings, in six sites per tooth, were obtained using an ultrasonic device for calculus detection and removal. Agreement was observed in 65% of the 22,584 duplicate subgingival calculus recordings, ranging from 45% to 83% across subjects. Using hierarchical modeling, disagreements in the duplicate subgingival calculus recordings were more likely at all sites other than the mid-buccal site, and at sites harboring supragingival calculus. Disagreements were less likely in sites with PD ≥ 4 mm and with furcation involvement ≥ degree 2. Bleeding on probing or suppuration did not influence the reliability of subgingival calculus recordings. At the subject level, disagreements were less likely in patients presenting with the highest and lowest extent categories of the covariate subgingival calculus. The reliability of subgingival calculus recordings using the ultrasound technology is reasonable. The results of the present study suggest that the reliability of subgingival calculus recordings is not influenced by the presence of inflammation. Moreover, subgingival calculus can be more reliably detected using the ultrasound device at sites with a higher need for periodontal therapy, i.e., sites presenting with deep pockets and premolars and molars with furcation involvement.

  5. A network communication and recording system for digital seismic observation

    WANG Hong-ti; ZHUANG Can-tao; XUE Bing; LI Jiang; CHEN Yang; ZHU Xiao-yi; LOU Wen-yu; LIU Ming-hui


A network communication and recording system based on a China-made ARCA SoC and an embedded Linux operating system is introduced in this paper. It supports the TCP/IP network communication protocol and mass storage media. Its strong points include self-monitoring, low power consumption, high timing accuracy and high operational reliability. It can serve real-time waveform data to up to 20 centers simultaneously. It meets not only the requirements of physical network observation, but also those of virtual network observation based on the Internet, in which real-time data transmission is required. Its field-recording capability also meets the requirements of portable seismic observation, strong-motion observation, seismic exploration observation, etc.

  6. A simple method to evaluate the reliability of OWAS observations.

    de Bruijn, I; Engels, J A; van der Gulden, J W


Slides showing nurses in different working postures were used to determine the reliability of OWAS observations. Each slide could be looked at for 3 seconds, while a new slide was shown every 30 seconds to resemble the normal practice of observation. Two observers twice scored a series of slides, some of them being identical at both viewings. To reduce recall effects, there was an interval of 4 weeks or more between the two viewings, and the slides were shown in a different order the second time. Different series were used to evaluate inter- and intra-observer reliability. The OWAS scores of corresponding slides were compared. In almost all comparisons, percentages of agreement over 85% and kappa values over 0.6 were found, which is considered good agreement. The procedure described seems to be a useful and simple technique to determine such reliability.
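
    As an illustration of the agreement statistics quoted above, the sketch below computes percentage agreement and Cohen's kappa for two observers assigning categorical codes to the same slides. It is a generic, assumed example rather than the procedure's actual software.

```python
# Generic percentage-agreement and Cohen's kappa computation for two observers
# assigning categorical codes (e.g. OWAS posture categories) to the same slides.
from collections import Counter

def agreement_and_kappa(codes_a, codes_b):
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # chance agreement: product of the two observers' marginal proportions per category
    p_chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return p_observed, kappa

# e.g. agreement > 85% and kappa > 0.6 would be read as good agreement here
pa, k = agreement_and_kappa([1, 2, 2, 3, 1, 4], [1, 2, 2, 3, 2, 4])
```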

  7. Inter-observer reliability of DSM-5 substance use disorders.

    Denis, Cécile M; Gelernter, Joel; Hart, Amy B; Kranzler, Henry R


    Although studies have examined the impact of changes made in DSM-5 on the estimated prevalence of substance use disorder (SUD) diagnoses, there is limited evidence concerning the reliability of DSM-5 SUDs. We evaluated the inter-observer reliability of four DSM-5 SUDs in a sample in which we had previously evaluated the reliability of DSM-IV diagnoses, allowing us to compare the two systems. Two different interviewers each assessed 173 subjects over a 2-week period using the Semi-Structured Assessment for Drug Dependence and Alcoholism (SSADDA). Using the percent agreement and kappa (κ) coefficient, we examined the reliability of DSM-5 lifetime alcohol, opioid, cocaine, and cannabis use disorders, which we compared to that of SSADDA-derived DSM-IV SUD diagnoses. We also assessed the effect of additional lifetime SUD and lifetime mood or anxiety disorder diagnoses on the reliability of the DSM-5 SUD diagnoses. Reliability was good to excellent for the four disorders, with κ values ranging from 0.65 to 0.94. Agreement was consistently lower for SUDs of mild severity than for moderate or severe disorders. DSM-5 SUD diagnoses showed greater reliability than DSM-IV diagnoses of abuse or dependence or dependence only. Co-occurring SUD and lifetime mood or anxiety disorders exerted a modest effect on the reliability of the DSM-5 SUD diagnoses. For alcohol, opioid, cocaine and cannabis use disorders, DSM-5 criteria and diagnoses are at least as reliable as those of DSM-IV. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Inter-Observer Reliability of DSM-5 Substance Use Disorders*

    Denis, Cécile M.; Gelernter, Joel; Hart, Amy B.; Kranzler, Henry R.


Aims: Although studies have examined the impact of changes made in DSM-5 on the estimated prevalence of substance use disorder (SUD) diagnoses, there is limited evidence of the reliability of DSM-5 SUDs. We evaluated the inter-observer reliability of four DSM-5 SUDs in a sample in which we had previously evaluated the reliability of DSM-IV diagnoses, allowing us to compare the two systems. Methods: Two different interviewers each assessed 173 subjects over a 2-week period using the Semi-Structured Assessment for Drug Dependence and Alcoholism (SSADDA). Using the percent agreement and kappa (κ) coefficient, we examined the reliability of DSM-5 lifetime alcohol, opioid, cocaine, and cannabis use disorders, which we compared to that of SSADDA-derived DSM-IV SUD diagnoses. We also assessed the effect of additional lifetime SUD and lifetime mood or anxiety disorder diagnoses on the reliability of the DSM-5 SUD diagnoses. Results: Reliability was good to excellent for the four disorders, with κ values ranging from 0.65 to 0.94. Agreement was consistently lower for SUDs of mild severity than for moderate or severe disorders. DSM-5 SUD diagnoses showed greater reliability than DSM-IV diagnoses of abuse or dependence or dependence only. Co-occurring SUD and lifetime mood or anxiety disorders exerted a modest effect on the reliability of the DSM-5 SUD diagnoses. Conclusions: For alcohol, opioid, cocaine and cannabis use disorders, DSM-5 criteria and diagnoses are at least as reliable as those of DSM-IV. PMID:26048641

  9. Photovoltaic Reliability Group activities in USA and Brazil (Presentation Recording)

    Dhere, Neelkanth G.; Cruz, Leila R. O.


Recently, prices of photovoltaic (PV) systems have fallen considerably and may continue to fall, making them increasingly attractive. If these systems provide electricity over the stipulated warranty period, it would be possible to attain socket parity within the next few years. Current photovoltaic module qualification tests help in minimizing infant mortality but do not guarantee a useful lifetime over the warranty period. The PV Module Quality Assurance Task Force (PVQAT) is trying to formulate accelerated tests that will be useful towards achieving the ultimate goal of assuring useful lifetime over the warranty period, as well as assuring manufacturing quality. Unfortunately, assuring manufacturing quality may require a 24/7 presence. Alternatively, collecting data on the performance of fielded systems would assist in assuring manufacturing quality. Here, PV systems installed by home-owners and small businesses constitute an important untapped source of data. The volunteer group PV - Reliable, Safe and Sustainable Quality! (PVRessQ!) is providing a valuable service to small PV system owners. The Photovoltaic Reliability Group (PVRG) is initiating activities in the USA and Brazil to assist home owners and small businesses in monitoring photovoltaic (PV) module performance and enforcing warranties. It will work in collaboration with small PV system owners and consumer protection agencies. Brazil is endowed with excellent solar irradiance, making it attractive for the installation of PV systems. Participating owners of small PV systems would instruct inverter manufacturers to copy their daily e-mails to the PVRG and, as necessary, would authorize the PVRG to carry out reviews of their PV systems. The presentation will cover the overall activities of the PVRG in the USA and Brazil.

  10. Assessing physical activity during youth sport: the Observational System for Recording Activity in Children: Youth Sports.

    Cohen, Alysia; McDonald, Samantha; McIver, Kerry; Pate, Russell; Trost, Stewart


The purpose of this study was to evaluate the validity and interrater reliability of the Observational System for Recording Activity in Children: Youth Sports (OSRAC:YS). Children (N = 29) participating in a parks and recreation soccer program were observed during regularly scheduled practices. Physical activity (PA) intensity and contextual factors were recorded by momentary time-sampling procedures (10-second observe, 20-second record). Two observers simultaneously observed and recorded children's PA intensity, practice context, social context, coach behavior, and coach proximity. Interrater reliability was based on agreement (kappa) between the observers' coding for each category, and on the intraclass correlation coefficient (ICC) for the percent of time spent in MVPA. Validity was assessed by calculating the correlation between OSRAC:YS-estimated and objectively measured MVPA. Kappa statistics for each category demonstrated substantial to almost perfect interobserver agreement (kappa = 0.67-0.93). The ICC for percent time in MVPA was 0.76 (95% C.I. = 0.49-0.90). A significant correlation (r = .73) was observed between MVPA recorded by observation and MVPA measured via accelerometry. The results indicate the OSRAC:YS is a reliable and valid tool for measuring children's PA and contextual factors during a youth soccer practice.

  11. Reproducibility and reliability of hypoglycaemic episodes recorded with Continuous Glucose Monitoring System (CGMS) in daily life

    Høi-Hansen, T; Pedersen-Bjergaard, U; Thorsteinsson, B


    AIM: Continuous glucose monitoring may reveal episodes of unrecognized hypoglycaemia. We evaluated reproducibility and reliability of hypoglycaemic episodes recorded in daily life by the Medtronic MiniMed Continuous Glucose Monitoring System (CGMS). METHODS: Twenty-nine adult patients with Type 1...

  12. Reliability of videotaped observational gait analysis in patients with orthopedic impairments

    Brunnekreef, J.J.; Uden, C. van; Moorsel, S. van; Kooloos, J.G.M.


    BACKGROUND: In clinical practice, visual gait observation is often used to determine gait disorders and to evaluate treatment. Several reliability studies on observational gait analysis have been described in the literature and generally showed moderate reliability. However, patients with orthopedic

  13. Problems of reliability in earthquake parameters determination from historical records

    G. Monachesi


Determining earthquake parameters from macroseismic data is a procedure whose reliability can be impaired by many problems related to the quality, number and distribution of the data. Such problems are common with ancient, sketchily documented events, but can affect even comparatively recent earthquakes. This paper presents some cases of Central Italy earthquakes for which the determination of epicentral parameters involved problems of reliability. Not all problems can ever be completely solved. It is therefore necessary to devise ways of putting the uncertainty of the resulting parameters on record, so that future users can be aware of it.

  14. Interrater reliability: completing the methods description in medical records review studies.

    Yawn, Barbara P; Wollan, Peter


    In medical records review studies, information on the interrater reliability (IRR) of the data is seldom reported. This study assesses the IRR of data collected for a complex medical records review study. Elements selected for determining IRR included "demographic" data that require copying explicit information (e.g., gender, birth date), "free-text" data that require identifying and copying (e.g., chief complaints and diagnoses), and data that require abstractor judgment in determining what to record (e.g., whether heart disease was considered). Rates of agreement were assessed by the greatest number of answers (one to all n) that were the same. The IRR scores improved over time. At 1 month, the reliability for demographic data elements was very good, for free-text data elements was good, but for data elements requiring abstractor judgment was unacceptable (only 3.4 of six answers agreed, on average). All assessments after 6 months showed very good to excellent IRR. This study demonstrates that IRR can be evaluated and summarized, providing important information to the study investigators and to the consumer for assessing the reliability of the data and therefore the validity of the study results and conclusions. IRR information should be required for all large medical records studies.

  15. Probing the effect of OSCE checklist length on inter-observer reliability and observer accuracy

    Katrina F. Hurley


Purpose: The Objective Structured Clinical Examination (OSCE) is a widely employed tool for measuring clinical competence. In the drive toward comprehensive assessment, OSCE stations and checklists may become increasingly complex. The objective of this study was to probe inter-observer reliability and observer accuracy as a function of OSCE checklist length. Method: Study participants included emergency physicians and senior residents in Emergency Medicine at Dalhousie University. Participants watched an identical series of four scripted, standardized videos enacting 10-min OSCE stations and completed corresponding assessment checklists. Each participating observer was provided with a random combination of two 40-item and two 20-item checklists. A panel of physicians scored the scenarios through repeated video review to determine the ‘gold standard’ checklist scores. Results: Fifty-seven observers completed 228 assessment checklists. Mean observer accuracy ranged from 73 to 93% (14.6–18.7/20), with an overall accuracy of 86% (17.2/20) and an inter-rater reliability range of 58–78%. After controlling for station and individual variation, no effect of the number of checklist items on overall accuracy was observed (p=0.2305). Consistency in ratings was calculated using the intraclass correlation coefficient and demonstrated no significant difference in consistency between the 20- and 40-item checklists (range 0.432 to 0.781, p-values from 0.56 to 0.73). Conclusions: The addition of 20 checklist items to a core list of 20 items in an OSCE assessment checklist does not appear to impact observer accuracy or inter-rater reliability.

  16. Records of solar eclipse observations in ancient China


Like ancient people in other parts of the world, the ancient Chinese lived in awe of the Sun. As they regarded solar eclipses as extremely significant events, they closely observed their occurrence. Ancient astronomers also realized very early that solar eclipses were one of the important astronomical phenomena that could be used to revise and improve the ancient calendar. Interestingly, ancient emperors regarded solar eclipses as warnings from heaven that might affect the stability of their throne. Consequently, observing and recording solar eclipses became an official activity, dating far back in ancient China, and numerous relevant descriptions were recorded in historical books. These records contribute substantially to China's standing as an ancient civilization, as well as to research on the long-term variation of the Earth's rotation rate during the >2000 years before the 17th century. This paper briefly reviews the perception, observation and recording of solar eclipses by ancient Chinese astronomers.

  17. Records of solar eclipse observations in ancient China

    HAN YanBen; QIAO QiYuan


Like ancient people in other parts of the world, the ancient Chinese lived in awe of the Sun. As they regarded solar eclipses as extremely significant events, they closely observed their occurrence. Ancient astronomers also realized very early that solar eclipses were one of the important astronomical phenomena that could be used to revise and improve the ancient calendar. Interestingly, ancient emperors regarded solar eclipses as warnings from heaven that might affect the stability of their throne. Consequently, observing and recording solar eclipses became an official activity, dating far back in ancient China, and numerous relevant descriptions were recorded in historical books. These records contribute substantially to China's standing as an ancient civilization, as well as to research on the long-term variation of the Earth's rotation rate during the >2000 years before the 17th century. This paper briefly reviews the perception, observation and recording of solar eclipses by ancient Chinese astronomers.

  18. A digital video system for observing and recording occultations

    Barry, M A; Pavlov, Hristo; Hanna, William; McEwan, Alistair; Filipovic, Miroslav


Stellar occultations by asteroids and outer solar system bodies can offer ground-based observers with modest telescopes and camera equipment the opportunity to probe the shape, size, atmosphere and attendant moons or rings of these distant objects. The essential requirements of the camera and recording equipment are: good quantum efficiency and low noise, minimal dead time between images, good horological faithfulness of the image time stamps, robustness of the recording to unexpected failure, and low cost. We describe the Astronomical Digital Video occultation observing and recording System (ADVS), which attempts to fulfil these requirements, and compare the system with other reported camera and recorder systems. Five systems have been built, deployed and tested over the past three years, and we report on three representative occultation observations: one being a 9 ± 1.5 second occultation of the trans-Neptunian object 28978 Ixion (mv=15.2) at 3 seconds per frame, one being a 1.51 ± 0.017 second occultation ...

  19. Once is not enough : Establishing reliability criteria for teacher evaluation based on classroom observations

    van der Lans, Rikkert; van de Grift, Wim; van Veen, Klaas


    Classroom observation is the most implemented method to evaluate teaching. To ensure reliability, researchers often train observers extensively. However, schools have limited resources to train observers and often lesson observation is performed by limitedly trained or untrained colleagues. In this

  20. Inter- and intra-observer reliability of experienced and inexperienced observers for the Qualitative Behaviour Assessment in dairy cattle

    Bokkers, E.A.M.; Vries, de M.; Antonissen, I.C.M.A.; Boer, de I.J.M.


    Qualitative Behaviour Assessment (QBA) is part of the Welfare Quality® protocol for dairy cattle, although its inter- and intra-observer reliability have not been reported. This study evaluated inter- and intra-observer reliability of the QBA for dairy cattle in experienced and inexperienced observe

  1. Establishing the reliability of rhesus macaque social network assessment from video observations.

    Feczko, Eric; Mitchell, Thomas A J; Walum, Hasse; Brooks, Jenna M; Heitz, Thomas R; Young, Larry J; Parr, Lisa A


    Understanding the properties of a social environment is important for understanding the dynamics of social relationships. Understanding such dynamics is relevant for multiple fields, ranging from animal behaviour to social and cognitive neuroscience. To quantify social environment properties, recent studies have incorporated social network analysis. Social network analysis quantifies both the global and local properties of a social environment, such as social network efficiency and the roles played by specific individuals, respectively. Despite the plethora of studies incorporating social network analysis, methods to determine the amount of data necessary to derive reliable social networks are still being developed. Determining the amount of data necessary for a reliable network is critical for measuring changes in the social environment, for example following an experimental manipulation, and therefore may be critical for using social network analysis to statistically assess social behaviour. In this paper, we extend methods for measuring error in acquired data and for determining the amount of data necessary to generate reliable social networks. We derived social networks from a group of 10 male rhesus macaques, Macaca mulatta, for three behaviours: spatial proximity, grooming and mounting. Behaviours were coded using a video observation technique, where video cameras recorded the compound where the 10 macaques resided. We collected, coded and used 10 h of video data to construct these networks. Using the methods described here, we found in our data that 1 h of spatial proximity observations produced reliable social networks. However, this may not be true for other studies due to differences in data acquisition. Our results have broad implications for measuring and predicting the amount of error in any social network, regardless of species.
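
    One way to picture the kind of data-sufficiency analysis described above is to rebuild the proximity network from increasing amounts of observation time and correlate each partial network with the full-data network. The sketch below is a hypothetical illustration of that idea (the event format and checkpoints are invented), not the authors' published method.

```python
# Hypothetical sketch: how much observation time yields a stable proximity network?
# Build weighted adjacency matrices from increasing subsets of coded events and
# correlate each with the network built from all of the data.
import numpy as np

def adjacency_from_events(events, n_animals):
    """events: list of (time_minutes, animal_i, animal_j) proximity records."""
    A = np.zeros((n_animals, n_animals))
    for _, i, j in events:
        A[i, j] += 1
        A[j, i] += 1
    return A

def stability_curve(events, n_animals, checkpoints):
    full = adjacency_from_events(events, n_animals)
    iu = np.triu_indices(n_animals, k=1)          # unique dyads only
    curve = []
    for t in checkpoints:                         # e.g. [60, 120, ..., 600] minutes
        subset = [e for e in events if e[0] <= t]
        A = adjacency_from_events(subset, n_animals)
        r = np.corrcoef(A[iu], full[iu])[0, 1]    # dyadic correlation with full network
        curve.append((t, r))
    return curve
```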

  2. Fast and reliable identification of axons, axon initial segments and dendrites with local field potential recording

    Anders Victor Petersen


The axon initial segment (AIS) is an essential neuronal compartment. It is usually where action potentials are initiated. Recent studies have demonstrated that the AIS is a plastic structure that can be regulated by neuronal activity and by the activation of metabotropic receptors. Studying the AIS in live tissue can be difficult because its identification is not always reliable. Here we provide a new technique allowing fast and reliable identification of the AIS in live brain slice preparations. By simultaneously recording extracellular local field potentials and whole-cell patch-clamp signals from neurons, we can detect sinks caused by inward currents flowing across the membrane. We determine the location of the AIS by comparing the timing of these events with the action potential. We demonstrate that this method allows the unequivocal identification of the AIS of different types of neurons in the brain.

  3. Interval Estimation of Stress-Strength Reliability Based on Lower Record Values from Inverse Rayleigh Distribution

    Bahman Tarvirdizade


We consider the estimation of stress-strength reliability based on lower record values when X and Y are independently but not identically distributed inverse Rayleigh random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
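
    For orientation, under the common one-parameter inverse Rayleigh model with cdf F(x; λ) = exp(−λ/x²) for x > 0 (an assumed parametrisation; the paper's estimators are based on lower record values rather than complete samples), the stress-strength reliability R = P(Y < X) has a simple closed form:

```latex
% Stress-strength reliability for strength X ~ IR(\lambda_1) and stress Y ~ IR(\lambda_2),
% assuming F(x;\lambda) = e^{-\lambda/x^2}; substitute u = 1/x^2 to evaluate the integral.
R = P(Y < X)
  = \int_0^\infty F_Y(x)\, f_X(x)\, \mathrm{d}x
  = \int_0^\infty e^{-\lambda_2/x^2}\, \frac{2\lambda_1}{x^3}\, e^{-\lambda_1/x^2}\, \mathrm{d}x
  = \frac{\lambda_1}{\lambda_1 + \lambda_2},
\qquad
\hat{R} = \frac{\hat{\lambda}_1}{\hat{\lambda}_1 + \hat{\lambda}_2}
\ \text{(plug-in MLE, by invariance).}
```

    The closed form explains why estimating R reduces to estimating the two scale parameters; the paper's contribution lies in obtaining and comparing such estimators when only lower record values of X and Y are observed.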

  4. Ad hoc procedure for optimising agreement between observational records

    Javier Arana

Full Text Available Observational studies in the field of sport are complicated by the added difficulty of having to analyse multiple, complex events or behaviours that may last just a fraction of a second. In this study, we analyse three aspects related to the reliability of data collected in such a study. The first aim was to analyse and compare the reliability of data sets assessed quantitatively (calculation of the kappa statistic) and qualitatively (consensus agreement method). The second aim was to describe how, by ensuring the alignment of events, we calculated the kappa statistic for the order parameter using SDIS-GSEQ software (version 5.1) for data sets containing different numbers of sequences. The third objective was to describe a new consultative procedure designed to remove the confusion generated by discordant data sets and improve the reliability of the data. The procedure is called "consultative" because it involves the participation of a new observer who is responsible for consulting the existing observations and deciding on the definitive result.

  5. Early Parent-infant Interactions; Are Health Visitors' Observations Reliable?

    Kristensen, Ingeborg Hedegaard; Simonsen, Marianne; Trillingsgaard, Tea


visitors working in the area. The study population consisted of 121 health visitors; 36 had a standardized parenting program education (certified Marte Meo therapists) and 85 had no standardized parenting program education. Measures: A self-reported questionnaire assessing intention, self...... to the Infant CARE-Index. Health visitors individually reviewed each video twice in August 2013. Data were analyzed in STATA, estimating frequencies and associations and comparing answers from the two groups of health visitors. Both groups had high intentions and self-efficacy according to working with parent...... with improved outcomes for parental and infant health. Keywords: Parent-infant interaction, health visitor, observation skills, Infant CARE-Index, Marte Meo method....

  6. Reliability of surface electromyographic recordings during walking in individuals with knee osteoarthritis.

    Hubley-Kozey, Cheryl L; Robbins, Shawn M; Rutherford, Derek J; Stanish, William D


To determine the test-retest reliability of a surface electromyographic protocol designed to measure knee joint muscle activation during walking in individuals with knee osteoarthritis (OA). Twenty-one individuals with moderate medial compartment knee OA completed two gait data collections separated by approximately 1 month. Using a standardized protocol, surface electromyograms from the rectus femoris plus lateral and medial sites for the gastrocnemii, vastii and hamstring muscles were recorded during walking. After full-wave rectification and low-pass filtering, time- and amplitude-normalized (percent of maximum) waveforms were calculated. Principal component analysis scores (PP-scores) and co-contraction indices (CCIs) were calculated from the waveforms. Intraclass correlation coefficients (ICC2,k) were calculated for PP-scores and CCIs. No differences in walking speed, knee muscle strength or symptoms were found between visits (p>0.05). The majority of PP-scores (17 of 21) and two of four CCIs demonstrated ICC2,k values greater than 0.81. The remaining PP-scores and CCIs had ICC2,k values between 0.61 and 0.80. The results support that reliable EMG characteristics can be captured from a moderate knee OA patient population using a standardized protocol.
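
    For reference, the ICC2,k reported above is the two-way random-effects, average-measures intraclass correlation (Shrout and Fleiss ICC(2,k)). A minimal numpy sketch computing it from an n-subjects-by-k-sessions score matrix is shown below; this is a generic formula illustration under assumed input shapes, not the authors' analysis pipeline.

```python
# Generic ICC(2,k): two-way random effects, average of k measurements,
# computed from ANOVA mean squares on an (n subjects x k sessions) array.
import numpy as np

def icc_2k(scores: np.ndarray) -> float:
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-subject means
    col_means = scores.mean(axis=0)          # per-session means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)     # subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)     # sessions mean square
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))           # residual mean square
    return (msr - mse) / (msr + (msc - mse) / n)
```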

  7. Reliability and criterion validity of an observation protocol for working technique assessments in cash register work.

    Palm, Peter; Josephson, Malin; Mathiassen, Svend Erik; Kjellberg, Katarina


    We evaluated the intra- and inter-observer reliability and criterion validity of an observation protocol, developed in an iterative process involving practicing ergonomists, for assessment of working technique during cash register work for the purpose of preventing upper extremity symptoms. Two ergonomists independently assessed 17 15-min videos of cash register work on two occasions each, as a basis for examining reliability. Criterion validity was assessed by comparing these assessments with meticulous video-based analyses by researchers. Intra-observer reliability was acceptable (i.e. proportional agreement >0.7 and kappa >0.4) for 10/10 questions. Inter-observer reliability was acceptable for only 3/10 questions. An acceptable inter-observer reliability combined with an acceptable criterion validity was obtained only for one working technique aspect, 'Quality of movements'. Thus, major elements of the cashiers' working technique could not be assessed with an acceptable accuracy from short periods of observations by one observer, such as often desired by practitioners. Practitioner Summary: We examined an observation protocol for assessing working technique in cash register work. It was feasible in use, but inter-observer reliability and criterion validity were generally not acceptable when working technique aspects were assessed from short periods of work. We recommend the protocol to be used for educational purposes only.

  8. Reliability of the recording of schizophrenia and depressive disorder in the Saskatchewan health care datafiles.

    Rawson, N S; Malcolm, E; D'Arcy, C


Administrative data have long been used in psychiatric epidemiology and outcomes evaluation. This article examines the reliability of the recording of schizophrenia and depressive disorder in three Saskatchewan administrative health care utilization datafiles. Due to their comprehensive nature, these datafiles have been used in a wide range of epidemiologic studies. Close agreement was found between hospital computer data and patients' charts for personal and demographic factors (≥ 94.7%). Diagnostic concordance between computerized hospital data and medical charts was very good for schizophrenia (94%) but poor for depressive disorder (58%). Appropriate physician services were identified for 60% and 72% of hospital discharges for schizophrenia and depressive disorder, respectively, and exact diagnostic agreement between hospital and physician datafiles was 62% for schizophrenia and 66% for depressive disorder. Appropriate provincial mental health branch services were found for 83% and 38% of hospital discharges for schizophrenia and depressive disorder, respectively; exact diagnostic concordance between these datafiles was 75% for schizophrenia and 0% for depressive disorder. A significant number of patients with major or neurotic depression appeared to be wrongly coded as having depressive disorder in the hospital file. The differences in diagnostic agreement may also be partly a function of how the two conditions are differentially treated in the health system. These findings suggest that more specific and severe psychiatric diagnoses are likely to be recorded accurately and consistently in the Saskatchewan datafiles. However, disorders with multiple manifestations or those for which there are several possible codes should be examined with caution and ways sought to validate them. Attention should also be paid to which service sectors are involved in the treatment of specific disorders.

  9. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation.

    Chen, Qing; Zhang, Jinxiu; Hu, Ze


This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel stability metric for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are incorporated into the link cost metric, which provides a selection criterion for choosing the most reliable data routing paths. A cooperative work model with reliability is also proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCN scenario is tested through numerical simulations of topology stability in terms of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves data transmission performance and prolongs the average topology lifetime.

  10. Evaluating the Reliability of Reanalysis as a Substitute for Observational Data in Large-scale Agricultural Assessments

    Glotter, M.; Ruane, A. C.; Moyer, E. J.; Elliott, J. W.


Future projections of food security require historical agricultural assessments to validate, improve, and understand the limitations of yield estimates. Poor observational climate networks often force historical assessments to rely on reanalysis data (climate model output nudged by observations) as inputs to crop models. However, agricultural yields are sensitive to changes in precipitation, and since reanalysis products generally use little or no observational precipitation in the data assimilation process, their use may compromise the validation exercise. Previous studies do not systematically assess whether reanalysis data are sufficient or whether direct measurements are required. We test the reliability of reanalysis data for agricultural analyses with simulations of maize yields in the U.S., where observational data are extensive. We drive the widely used Decision Support System for Agrotechnology Transfer (DSSAT) crop model with climate inputs from a combination of data sources: bias-corrected and uncorrected reanalyses, and observation-based precipitation and solar radiation. We find that driving DSSAT with reanalysis precipitation produces unreliable yield estimates, but driving it with reanalysis bias-corrected with monthly observations is more robust. Bias corrections do require observational data, but gathering reliable monthly data may be easier than gathering daily data. The approach is therefore promising for data-poor regions where observational precipitation is less available and existing data are unreliable. The priority for climate monitoring networks may therefore not be daily records but lower-cost observational systems that estimate data at coarser temporal resolutions.
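
    The monthly bias correction mentioned above can be illustrated with a simple multiplicative scaling that forces each month's reanalysis precipitation total to match the observed monthly total. The sketch below is an assumed, minimal variant for illustration (function and variable names are invented), not necessarily the correction applied in the study.

```python
# Minimal multiplicative monthly bias correction of daily reanalysis precipitation:
# scale each month's daily values so the monthly total matches observations.
import pandas as pd

def bias_correct_monthly(daily_reanalysis: pd.Series,
                         monthly_observed: pd.Series) -> pd.Series:
    """daily_reanalysis: daily totals with a DatetimeIndex;
    monthly_observed: observed monthly totals indexed by month start ('MS')."""
    monthly_model = daily_reanalysis.resample("MS").sum()
    ratio = (monthly_observed / monthly_model).replace([float("inf")], 1.0).fillna(1.0)
    # broadcast each month's correction factor back onto its days
    daily_ratio = ratio.reindex(daily_reanalysis.index, method="ffill")
    return daily_reanalysis * daily_ratio
```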

  11. Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials

    Kushary, Debashis; Kulkarni, Pandurang M.


We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effects, repeated measurements, and the simultaneous observation of many characteristics. We develop the maximum likelihood estimator (MLE) and the uniformly minimum variance unbiased estimator (UMVUE) of the reliability, which in clinical trial studies can be interpreted as the chance of increased side effects due to one procedure compared with another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situation we develop simple-to-use lower confidence bounds for the reliability. Further, we consider the case where both stress and strength constitute time-dependent processes. We define the future reliability and obtain methods for constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and to compare the MLE and the UMVUE.

  12. Observational Assessment of Preschool Disruptive Behavior, Part I: reliability of the Disruptive Behavior Diagnostic Observation Schedule (DB-DOS).

    Wakschlag, Lauren S; Hill, Carri; Carter, Alice S; Danis, Barbara; Egger, Helen L; Keenan, Kate; Leventhal, Bennett L; Cicchetti, Domenic; Maskowitz, Katie; Burns, James; Briggs-Gowan, Margaret J


    To examine the reliability of the Disruptive Behavior Diagnostic Observation Schedule (DB-DOS), a new observational method for assessing preschool disruptive behavior. The DB-DOS is a structured clinic-based assessment designed to elicit clinically salient behaviors relevant to the diagnosis of disruptive behavior in preschoolers. Child behavior is assessed in three interactional contexts that vary by partner (parent versus examiner) and level of support provided. Twenty-one disruptive behaviors are coded within two domains: problems in Behavioral Regulation and problems in Anger Modulation. A total of 364 referred and nonreferred preschoolers participated: interrater reliability and internal consistency were assessed on a primary sample (n = 335) and test-retest reliability was assessed in a separate sample (n = 29). The DB-DOS demonstrated good interrater and test-retest reliability. Confirmatory factor analysis demonstrated an excellent fit of the DB-DOS multidomain model of disruptive behavior. The DB-DOS is a reliable observational tool for clinic-based assessment of preschool disruptive behavior. This standardized assessment method holds promise for advancing developmentally sensitive characterization of preschool psychopathology.

  13. Inter- and intra-observer reliability of clinical movement-control tests for marines

    Monnier Andreas


Background: Musculoskeletal disorders, particularly in the back and lower extremities, are common among marines. Here, movement-control tests are considered clinically useful for screening and follow-up evaluation. However, few studies have addressed the reliability of clinical tests, and no such published data exist for marines. The present aim was therefore to determine the inter- and intra-observer reliability of clinically convenient tests emphasizing movement control of the back and hip among marines. A secondary aim was to investigate the sensitivity and specificity of these clinical tests for discriminating musculoskeletal pain disorders in this group of military personnel. Methods: This inter- and intra-observer reliability study used a test-retest approach with six standardized clinical tests focusing on movement control of the back and hip. Thirty-three marines (age 28.7 yrs, SD 5.9) on active duty volunteered and were recruited. They followed an in-vivo observation test procedure that covered both low- and high-load (threshold) tasks relevant for marines on operational duty. Two independent observers simultaneously rated performance as “correct” or “incorrect” following a standardized assessment protocol. Re-testing followed 7–10 days thereafter. Reliability was analysed using kappa (κ) coefficients, while the discriminative power of the best-fitting tests for back and lower-extremity pain was assessed using a multiple-variable regression model. Results: Inter-observer reliability for the six tests was moderate to almost perfect, with κ-coefficients ranging from 0.56 to 0.95. Three tests reached almost perfect inter-observer reliability with mean κ-coefficients > 0.81. However, intra-observer reliability was fair to moderate, with mean κ-coefficients between 0.22 and 0.58. Three tests achieved moderate intra-observer reliability with κ-coefficients > 0.41. Combinations of one low- and one high-threshold test best discriminated

  14. Assessment of disabilities in stroke patients with apraxia: internal consistency and inter-observer reliability.

    Heugten, C.M. van; Dekker, J.; Deelman, B.G.; Stehmann-Saris, J.C.; Kinebanian, A.


    In this paper the internal consistency and inter-observer reliability of the assessment of disabilities in stroke patients with apraxia is presented. Disabilities were assessed by means of observation of activities of daily living (ADL). The study was conducted at occupational therapy departments in

  15. Assessment of disabilities in stroke patients with apraxia : Internal consistency and inter-observer reliability

    van Heugten, CM; Dekker, J; Deelman, BG; Stehmann-Saris, JC; Kinebanian, A


    In this paper the internal consistency and inter-observer reliability of the assessment of disabilities in stroke patients with apraxia is presented. Disabilities were assessed by means of observation of activities of daily living (ADL). The study was conducted at occupational therapy departments in

  17. Inter-rater reliability of two depression rating scales, MADRS and DRRS, based on videotape records of structured interviews.

    Corruble, E; Purper, D; Payan, C; Guelfi, J


    The inter-rater reliability of the French versions of the MADRS and the DRRS was studied on the basis of 58 videotape records of structured standardised interviews of depressed inpatients under antidepressant treatment. Each patient was assessed by two trained raters, from the same videotape recording. The inter-rater reliability of total scores was high with both scales (intra-class correlation coefficients: 0.86 for MADRS and 0.77 for DRRS). However, the inter-rater reliability for individual items was higher and more homogeneous for the MADRS than for the DRRS. Finally, the structured interview in French appears to be relevant for the MADRS, but it should be improved for the DRRS.

  18. Half a degree difference in the observational record

    Schleussner, Carl-Friedrich; Fischer, Erich; Pfleiderer, Peter


Assessing the impacts of climate change at different levels of warming is a key requirement for informing debates on climate policy in a post-Paris world. In particular, the forthcoming IPCC special report on 1.5°C is tasked with assessing warming at 1.5°C compared with other levels such as 2°C or present-day warming of around 1°C. Assessments of such differences are hampered by uncertainties in model projections, in particular those related to impact-relevant quantities such as extreme weather events, which may mask existing differences in projections. Evidence from the observational record can provide useful information to inform the debate about differentiable climate impacts in the light of uncertainty. Here we assess the difference in extreme weather indicators from observational datasets for 0.5°C of warming between the second half of the 20th century and the recent past. We report discernible differences in the global occurrence of heat extremes and extreme precipitation. Limitations of this approach related to non-greenhouse-gas forcings are also discussed.

  19. Uncertainty information in climate data records from Earth observation

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang


    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the

  20. Reliability and utility of the Acute Care Index of Function in intensive care patients: An observational study.

    Bissett, Bernie; Green, Margot; Marzano, Vince; Byrne, Susannah; Leditschke, I Anne; Neeman, Teresa; Boots, Robert; Paratz, Jennifer


To establish the inter-rater reliability of the Acute Care Index of Function (ACIF) in intensive care unit (ICU) patients and determine whether ACIF scores have predictive utility beyond ICU discharge. Accurate and reliable measures of physical function are required to describe the recovery trajectory of ICU survivors. The clinimetric properties of the ACIF are yet to be established in ICU patients. Prospective observational study in a single tertiary ICU. ACIF scores were recorded independently by two physiotherapists across a convenience sample of 100 physiotherapy assessments, and at ICU discharge. Inter-rater reliability of total ACIF scores was very strong (ICC = 0.94). The ACIF score at ICU discharge predicted hospital discharge to a destination other than home (area under the ROC curve = 0.79, 95% CI 0.64-0.89; sensitivity 0.78). The ACIF has excellent inter-rater reliability in ICU patients, and scores at ICU discharge predict the likelihood of discharge home. ACTRN12614001008617 (September 18 2014). Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Implications in adjusting a gravity network with observations medium or independent: analysis of precision and reliability

    Pedro L. Faggion


Adjustment strategies associated with the methodology used for the establishment of a high-precision gravity network in Paraná are presented. A network was implemented with stations at 21 places in the State of Paraná and one in the State of São Paulo. To reduce the risk of losing points of the gravity network, they were established at points of the GPS High Precision Network of Paraná, which have a relatively homogeneous geographical distribution. For each of the gravity lines belonging to the loops implemented for the network, it was possible to obtain three or six observations. In the first adjustment strategy investigated, the mean value of the observations obtained for each gravity line was used as the observation. In the second strategy, the observations were treated as independent. The comparison of these strategies revealed that the precision criterion alone is not enough to identify the best solution for a gravity network. An additional criterion is needed to analyse the adjusted solution of the network, besides precision. The reliability criterion for geodetic networks, which is divided into internal and external reliability, was used. Internal reliability was used to verify how rigorously the network reacts in detecting and quantifying gross errors in the observations, and external reliability to quantify the influence of non-detected errors on the adjusted parameters. The aspects that differentiate the solutions obtained when precision and reliability criteria are combined in the analysis of the quality of a gravity network are presented.
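
    The internal reliability referred to above is commonly quantified through the redundancy numbers of the least-squares adjustment: observations with small redundancy numbers are poorly controlled, so gross errors in them are hard to detect. The sketch below is a generic textbook illustration for an assumed Gauss-Markov model (design matrix A, weight matrix P), not the adjustment software used in the paper.

```python
# Redundancy numbers r_i of a least-squares adjustment l = A x + v with weights P:
# R = I - A (A^T P A)^{-1} A^T P ;  r_i = diag(R), and sum(r_i) = n_obs - n_params.
# Small r_i means the observation is weakly checked by the rest of the network
# (weak internal reliability).
import numpy as np

def redundancy_numbers(A: np.ndarray, P: np.ndarray) -> np.ndarray:
    N = A.T @ P @ A                                   # normal-equations matrix
    R = np.eye(A.shape[0]) - A @ np.linalg.solve(N, A.T @ P)
    return np.diag(R)
```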

  2. Live versus Video Observations: Comparing the Reliability and Validity of Two Methods of Assessing Classroom Quality

    Curby, Timothy W.; Johnson, Price; Mashburn, Andrew J.; Carlis, Lydia


    When conducting classroom observations, researchers are often confronted with the decision of whether to conduct observations live or by using pre-recorded video. The present study focuses on comparing and contrasting observations of live and video administrations of the Classroom Assessment Scoring System-PreK (CLASS-PreK). Associations between…

  3. Nitrogen isotopes in bulk marine sediment: linking seafloor observations with subseafloor records

    J.-E. Tesdal


The stable isotopes of nitrogen offer a unique perspective on changes in the nitrogen cycle, past and present. However, the presence of multiple forms of nitrogen in marine sediments can complicate the interpretation of bulk nitrogen isotope measurements. Although the large-scale global patterns of seafloor δ15N have been shown to match process-based expectations, small-scale heterogeneity on the seafloor, or alterations of isotopic signals during translation into the subseafloor record, could obscure the primary signals. Here, a public database of nitrogen isotope measurements is described, including both seafloor and subseafloor sediment samples ranging in age from modern to the Pliocene, and used to assess these uncertainties. In general, good agreement is observed between neighbouring seafloor sites within a 100 km radius, with 85% showing differences of < 1‰. There is also a good correlation between the δ15N of the shallowest (< 5 ka) subseafloor sediments and neighbouring seafloor sites within a 100 km radius (R2 = 0.83), which suggests a reliable translation of sediments into the buried sediment record. Meanwhile, gradual δ15N decreases over multiple glacial–interglacial cycles appear to reflect post-depositional alteration in records from the deep sea (below 2000 m). We suggest a simple conceptual model to explain these 100-kyr-timescale changes in well-oxygenated, slowly accumulating sediments, which calls on differential loss rates for pools of organic N with different δ15N. We conclude that bulk sedimentary nitrogen isotope records are reliable monitors of past changes in the marine nitrogen cycle at most locations, and could be further improved with a better understanding of systematic post-depositional alteration. Furthermore, geochemical or environmental criteria should be developed in order to effectively identify problematic locations and to account for

  4. [A systematic social observation tool: methods and results of inter-rater reliability].

    Freitas, Eulilian Dias de; Camargos, Vitor Passos; Xavier, César Coelho; Caiaffa, Waleska Teixeira; Proietti, Fernando Augusto


Systematic social observation has been used as a health research methodology for collecting information on the neighborhood physical and social environment. The objectives of this article were to describe the operationalization of direct observation of the physical and social environment in urban areas and to evaluate the instrument's reliability. The systematic social observation instrument was designed to collect information in several domains. A total of 1,306 street segments belonging to 149 different neighborhoods in Belo Horizonte, Minas Gerais, Brazil, were observed. For the reliability study, 149 segments (1 per neighborhood) were re-audited, and Fleiss' kappa was used to assess inter-rater agreement. Mean agreement was 0.57 (SD = 0.24); 53% had substantial or almost perfect agreement, and 20.4% moderate agreement. The instrument appears to be appropriate for observing neighborhood characteristics that are not time-dependent, especially urban services, property characterization, pedestrian environment, and security.
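
    For reference, Fleiss' kappa used above extends chance-corrected agreement to more than two raters. A minimal implementation from an items-by-categories matrix of rating counts is sketched below; it is generic illustrative code under the assumption of an equal number of raters per item, not the study's analysis.

```python
# Generic Fleiss' kappa from an (items x categories) matrix of rating counts,
# where each row sums to the number of raters per item.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]                 # assumes equal raters per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)  # overall category proportions
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                               # mean per-item agreement
    p_e = np.sum(p_j ** 2)                           # expected agreement by chance
    return (p_bar - p_e) / (1 - p_e)
```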

  5. Validity and inter-observer reliability of subjective hand-arm vibration assessments.

    Coenen, Pieter; Formanoy, Margriet; Douwes, Marjolein; Bosch, Tim; de Kraker, Heleen


Exposure to mechanical vibrations at work (e.g., due to handling powered tools) is a potential occupational risk, as it may cause upper extremity complaints. However, reliable and valid methods for assessing vibration exposure at work are lacking. Measuring hand-arm vibration objectively is often difficult and expensive, while the information provided by manufacturers often lacks detail. Therefore, a subjective hand-arm vibration assessment method was tested for validity and inter-observer reliability. In an experimental protocol, sixteen tasks handling powered tools were executed by two workers. Hand-arm vibration was assessed subjectively by 16 observers according to the proposed subjective assessment method. As a gold-standard reference, hand-arm vibration was measured objectively using a vibration measurement device. Weighted κ values were calculated to assess validity, and intraclass correlation coefficients (ICCs) were calculated to assess inter-observer reliability. The inter-observer reliability of the subjective assessments, reflecting the agreement among observers, can be expressed by an ICC of 0.708 (0.511-0.873). The validity of the subjective assessments compared with the gold-standard reference can be expressed by a weighted κ of 0.535 (0.285-0.785). Moreover, the percentage of exact agreement between the subjective assessment and the objective measurement was relatively low (i.e., 52% of all tasks). This study shows that subjectively assessed hand-arm vibrations are fairly reliable among observers and moderately valid. This assessment method is a first attempt to use subjective risk assessments of hand-arm vibration. Although it would benefit from future improvement, it can be of use in future studies and in field-based ergonomic assessments.

  6. A Practical Solution to Optimizing the Reliability of Teaching Observation Measures under Budget Constraints

    Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.


    Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…
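
    The trade-off that generalizability theory formalises, reliability gained per rating purchased, can be illustrated with a toy decision study for a persons-by-raters design. The variance components, unit cost, and budget below are assumptions chosen for illustration; the generalizability coefficient is computed as var_person / (var_person + var_residual / n_raters).

```python
# Toy decision study: choose the number of raters per lesson that maximises
# the generalizability coefficient without exceeding the budget.
var_teacher = 0.40        # universe-score variance (persons), assumed
var_residual = 0.90       # rater-by-teacher interaction + error, assumed
cost_per_rating = 50.0    # currency units per observation, assumed
budget = 300.0

best = None
for n_raters in range(1, 10):
    cost = n_raters * cost_per_rating
    if cost > budget:
        break
    g_coeff = var_teacher / (var_teacher + var_residual / n_raters)
    if best is None or g_coeff > best[1]:
        best = (n_raters, g_coeff, cost)

n, g, c = best
print(f"{n} raters -> Ep^2 = {g:.2f} at cost {c:.0f}")
```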

  7. Measurement of transplanted pancreatic volume using computed tomography: reliability by intra- and inter-observer variability

    Lundqvist, Eva; Segelsjoe, Monica; Magnusson, Anders [Uppsala Univ., Dept. of Radiology, Oncology and Radiation Science, Section of Radiology, Uppsala (Sweden)]; Andersson, Anna; Biglarnia, Ali-Reza [Dept. of Surgical Sciences, Section of Transplantation Surgery, Uppsala Univ. Hospital, Uppsala (Sweden)]


    Background Unlike other solid organ transplants, pancreas allografts can undergo a substantial decrease in baseline volume after transplantation. This phenomenon has not been well characterized, as there are insufficient data on reliable and reproducible volume assessments. We hypothesized that characterization of pancreatic volume by means of computed tomography (CT) could be a useful method for clinical follow-up in pancreas transplant patients. Purpose To evaluate the feasibility and reliability of pancreatic volume assessment using CT scan in transplanted patients. Material and Methods CT examinations were performed on 21 consecutive patients undergoing pancreas transplantation. Volume measurements were carried out by two observers tracing the pancreatic contours in all slices. The observers performed the measurements twice for each patient. Differences in volume measurement were used to evaluate intra- and inter-observer variability. Results The intra-observer variability for the pancreatic volume measurements of Observers 1 and 2 was found to be in almost perfect agreement, with intraclass correlation coefficients (ICCs) of 0.90 (0.77-0.96) and 0.99 (0.98-1.0), respectively. Regarding inter-observer reliability, the ICCs for the first and second measurements were 0.90 (range, 0.77-0.96) and 0.95 (range, 0.85-0.98), respectively. Conclusion CT volumetry is a reliable and reproducible method for measurement of transplanted pancreatic volume.
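
    The single-measure, two-way intraclass correlations reported here correspond to ICC(2,1) in the Shrout and Fleiss scheme. A self-contained sketch of that computation from the ANOVA mean squares, using invented volume measurements rather than the study's data:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random, single-measure ICC(2,1) (Shrout & Fleiss).
    scores: (n_subjects, k_raters) matrix of measurements."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)      # per subject
    col_means = scores.mean(axis=0)      # per rater
    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical graft volumes (mL) traced by two observers for five patients
volumes = np.array([[82.1, 80.5], [95.3, 96.0], [70.2, 71.1], [88.4, 87.9], [77.6, 78.8]])
print(round(icc_2_1(volumes), 3))
```

    A value near 1 indicates that the two observers' tracings agree closely relative to the true volume differences between grafts.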

  8. Development and Reliability Testing of a Fast-Food Restaurant Observation Form.

    Rimkus, Leah; Ohri-Vachaspati, Punam; Powell, Lisa M; Zenk, Shannon N; Quinn, Christopher M; Barker, Dianne C; Pugach, Oksana; Resnick, Elissa A; Chaloupka, Frank J


    To develop a reliable observational data collection instrument to measure characteristics of the fast-food restaurant environment likely to influence consumer behaviors, including product availability, pricing, and promotion. The study used observational data collection. Restaurants were in the Chicago Metropolitan Statistical Area. A total of 131 chain fast-food restaurant outlets were included. Interrater reliability was measured for product availability, pricing, and promotion measures on a fast-food restaurant observational data collection instrument. Analysis was done with Cohen's κ coefficient and proportion of overall agreement for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. Interrater reliability, as measured by average κ coefficient, was .79 for menu characteristics, .84 for kids' menu characteristics, .92 for food availability and sizes, .85 for beverage availability and sizes, .78 for measures on the availability of nutrition information, .75 for characteristics of exterior advertisements, and .62 and .90 for exterior and interior characteristics measures, respectively. For continuous measures, average ICC was .88 for food pricing measures, .83 for beverage prices, and .65 for counts of exterior advertisements. Over 85% of measures demonstrated substantial or almost perfect agreement. Although some measures required revision or protocol clarification, results from this study suggest that the instrument may be used to reliably measure the fast-food restaurant environment.

  9. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    Chen, Qing; Zhang, Jinxiu; Hu, Ze


    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are incorporated into the link cost metric, which provides a selection criterion for choosing the most reliable data routing paths. A cooperative work model with reliability is also proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCN scenario is tested through numerical simulations of topology stability in terms of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID: 28241474
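
    The path-selection idea can be illustrated independently of the paper's specific cost function: if each inter-satellite link carries a predicted reliability in (0, 1] over the planning horizon, the most reliable route maximises the product of link reliabilities, which is equivalent to a shortest path over -log(reliability) weights. A hedged sketch with an invented topology (not the authors' metric or simulation setup):

```python
import math
import networkx as nx

# Hypothetical cluster: edges carry a predicted link reliability over the planning horizon
links = [("S1", "S2", 0.99), ("S2", "S3", 0.90), ("S1", "S3", 0.70),
         ("S3", "GS", 0.95), ("S2", "GS", 0.60)]

G = nx.Graph()
for u, v, rel in links:
    # Minimising the sum of -log(reliability) maximises the product of reliabilities
    G.add_edge(u, v, weight=-math.log(rel), reliability=rel)

path = nx.shortest_path(G, "S1", "GS", weight="weight")
path_rel = math.prod(G[u][v]["reliability"] for u, v in zip(path, path[1:]))
print(path, f"end-to-end reliability = {path_rel:.3f}")
```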

  10. Development of a peer review system using patient records for outcome evaluation of medical education: reliability analysis.

    Kameoka, Junichi; Okubo, Tomoya; Koguma, Emi; Takahashi, Fumie; Ishii, Seiichi; Kanatsuka, Hiroshi


    In addition to input evaluation (education delivered at school) and output evaluation (students' capability at graduation), the methods for outcome evaluation (performance after graduation) of medical education need to be established. One approach is a review of medical records, which, however, has been met with difficulties because of poor inter-rater reliability. Here, we attempted to develop a peer review system of medical records with high inter-rater reliability. We randomly selected 112 patients (and finally selected 110 after removing two ineligible patients) who visited (and were hospitalized in) one of the four general hospitals in the Tohoku region of Japan between 2008 and 2012. Four reviewers, who were well-trained general internists from outside the Tohoku region, visited the hospitals independently and evaluated outpatient medical records based on an evaluation sheet that consisted of 14 items (3-point scale) for record keeping and 15 items (5-point scale) for quality of care. The mean total score was 84.1 ± 7.7. Cronbach's alpha for these items was 0.798. Single measure and average measure intraclass correlations for the reviewers were 0.733 (95% confidence interval: 0.720-0.745) and 0.917 (95% confidence interval: 0.912-0.921), respectively. An exploratory factor analysis revealed six factors: history taking, physical examination, clinical reasoning, management and outcome, rhetoric, and patient relationship. In conclusion, we have developed a peer review system of medical records with high inter-rater reliability, which may enable us, with further validity analysis, to measure quality of patient care as an outcome evaluation of medical education in the future.
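
    Cronbach's alpha for an evaluation sheet of this kind is straightforward to compute from the item scores. A minimal sketch with fabricated ratings (rows are reviewed records, columns are items; the scale and values are illustrative only):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_records, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 3-point record-keeping scores for 6 charts on 4 items
scores = np.array([[3, 3, 2, 3], [2, 2, 2, 3], [3, 3, 3, 3],
                   [1, 2, 1, 2], [2, 3, 2, 2], [3, 2, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```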

  11. Reliability and validity of head posture assessment by observation and a four-category scale.

    Silva, Anabela G; Punt, T David; Johnson, Mark I


    Head posture (HP) is assessed as part of the clinical examination of patients with neck pain using observation and qualitative descriptors. In research, HP is characterised through the measurement of angles and distances between anatomical landmarks. This study investigated whether the assessment of HP as performed in clinical practice is reliable and valid. Ten physiotherapists assessed forward HP, head extension and side-flexion from images of 40 individuals with and without previous experience of neck pain using a four-category scale. The assessment was repeated twice with a 1-week gap. Physiotherapists' ratings were then compared with angular measurements of the same components of HP. K values for intra-rater reliability varied between 0.22 and 0.81 for forward HP, between 0.19 and 0.69 for head extension and between 0.38 and 0.67 for side-flexion. K values for inter-rater reliability were 0.02 for forward HP, 0.07 for head extension and 0.19 for side-flexion. Correlation coefficients between the ratings and the angular measurements varied between -0.16 and -0.49 for forward HP, between -0.17 and 0.68 for head extension and between -0.04 and 0.37 for side-flexion. The assessment of HP by observation and a four-category scale showed poor reliability and validity.

  12. Reliability and validity of the Pragmatics Observational Measure (POM): a new observational measure of pragmatic language for children.

    Cordier, Reinie; Munro, Natalie; Wilkes-Gillan, Sarah; Speyer, Renée; Pearce, Wendy M


    There is a need for a reliable and valid assessment of childhood pragmatic language skills during peer-peer interactions. This study aimed to evaluate the psychometric properties of a newly developed pragmatic assessment, the Pragmatic Observational Measure (POM). The psychometric properties of the POM were investigated from observational data of two studies - study 1 involved 342 children aged 5-11 years (108 children with ADHD; 108 typically developing playmates; 126 children in the control group), and study 2 involved 9 children with ADHD who attended a 7-week play-based intervention. The psychometric properties of the POM were determined based on the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) taxonomy of psychometric properties and definitions for health-related outcomes; the Pragmatic Protocol was used as the reference tool against which the POM was evaluated. The POM demonstrated sound psychometric properties in all the reliability, validity and interpretability criteria against which it was assessed. The findings showed that the POM is a reliable and valid measure of pragmatic language skills of children with ADHD between the age of 5 and 11 years and has clinical utility in identifying children with pragmatic language difficulty.

  13. Reliability of an external loop recorder for automatic recognition and transtelephonic ECG transmission of atrial fibrillation.

    Müller, Axel; Scharner, Wilfried; Borchardt, Tilo; Och, Wolfgang; Korb, Harald


    In order to test a newly developed algorithm for detecting atrial fibrillation in clinical practice, we carried out parallel recordings using a conventional 24-h electrocardiogram (ECG) monitor and telemonitoring with an external loop recorder. Recordings were made in 24 patients with persistent atrial fibrillation and in another 24 patients with sinus rhythm. Atrial fibrillation was detected immediately in 23 of 24 patients with persistent atrial fibrillation and 20 min after fitting the single-channel loop recorder in the 24th patient (sensitivity 100%). On average, 3.1 false positives (i.e. detection of an episode, including the end or beginning of atrial fibrillation) were transmitted per patient. The sensitivity of the algorithms for automatically detecting bradycardiac and tachycardiac atrial fibrillation was also high. In 12 of 24 patients with sinus rhythm, false-positive tele-ECGs were transmitted. These were caused by supraventricular or ventricular extrasystoles and by sinus arrhythmias or sinoatrial (SA) blocks. The external loop recorder was very effective at detecting paroxysmal atrial fibrillation. Possible indications for the clinical use of this recorder include, in addition to diagnosis, monitoring patients for atrial fibrillation recurrence after cardioversion or catheter ablation.
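
    The recorder's detection algorithm is proprietary and not described here, but the general principle of flagging atrial fibrillation from an irregularly irregular rhythm can be sketched with a toy RR-interval variability test. The threshold and data below are invented and are not the device's logic:

```python
import numpy as np

def flag_possible_af(rr_intervals_ms, cv_threshold=0.12):
    """Toy heuristic: flag a strip when RR-interval variability is high.
    rr_intervals_ms: successive R-R intervals in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    cv = rr.std(ddof=1) / rr.mean()          # coefficient of variation
    return cv > cv_threshold, cv

regular = [800, 810, 795, 805, 800, 790, 805]      # sinus-like rhythm
irregular = [620, 950, 710, 1040, 560, 880, 730]   # AF-like irregularity
print(flag_possible_af(regular))    # (False, ~0.01)
print(flag_possible_af(irregular))  # (True, ~0.23)
```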

  14. Designing for reliable textile neonatal ECG monitoring using multi-sensor recordings.

    Bouwstra, S; Chen, W; Oetomo, S Bambang; Feijs, L M G; Cluitmans, P J M


    When designing an ECG monitoring system embedded with textile electrodes for comfort, it is challenging to ensure reliable monitoring, because textile electrodes suffer from motion artifacts and incidental poor signal quality. For the design of a comfortable monitoring system for prematurely born babies in the Neonatal Intensive Care Unit (NICU), we propose the concepts of 'diversity measurement' and 'context awareness' to improve reliability. Clinical multi-modal sensor data was collected in the NICU with the Smart Jacket connected to a state-of-the-art amplifier. We found that the ECG signal quality varied among sensors and over time, and found correlations between ECG signal, acceleration data, and context, which supports the feasibility of the concepts. Our explorative system level approach has led to design parameters and meta-insights into the role of clinical validation in the design process.

  15. Observer Reliability of Three-Dimensional Cephalometric Landmark Identification on Cone-Beam CT

    de Oliveira, Ana Emilia F.; Cevidanes, Lucia Helena S.; Phillips, Ceib; Motta, Alexandre; Burke, Brandon; Tyndall, Donald


    Objective To evaluate reliability in 3D landmark identification using Cone-Beam CT. Study Design Twelve pre-surgery CBCTs were randomly selected from 159 orthognathic surgery patients. Three observers independently repeated the identification of 30 landmarks three times in the sagittal, coronal, and axial slices. A mixed effects ANOVA model estimated the Intraclass Correlations (ICC) and assessed systematic bias. Results The ICC was >0.9 for 86% of intra-observer assessments and 66% of inter-observer assessments. Only 1% of intra-observer and 3% of inter-observer coefficients were <0.45. The systematic difference among observers was greater in X and Z than in Y dimensions, but the maximum mean difference was quite small. Conclusion Overall, the intra- and inter-observer reliability was excellent. 3D landmark identification using CBCT can offer consistent and reproducible data, if a protocol for operator training and calibration is followed. This is particularly important for landmarks not easily specified in all three planes of space. PMID: 18718796

  16. A method for the automated, reliable retrieval of publication-citation records.

    Derek Ruths

    Full Text Available BACKGROUND: Publication records and citation indices often are used to evaluate academic performance. For this reason, obtaining or computing them accurately is important. This can be difficult, largely due to a lack of complete knowledge of an individual's publication list and/or lack of time available to manually obtain or construct the publication-citation record. While online publication search engines have somewhat addressed these problems, using raw search results can yield inaccurate estimates of publication-citation records and citation indices. METHODOLOGY: In this paper, we present a new, automated method that produces estimates of an individual's publication-citation record from an individual's name and a set of domain-specific vocabulary that may occur in the individual's publication titles. Because this vocabulary can be harvested directly from a research web page or online (partial) publication list, our method delivers an easy way to obtain estimates of a publication-citation record and the relevant citation indices. Our method works by applying a series of stringent name and content filters to the raw publication search results returned by an online publication search engine. In this paper, our method is run using Google Scholar, but the underlying filters can be easily applied to any existing publication search engine. When compared against a manually constructed data set of individuals and their publication-citation records, our method provides significant improvements over raw search results. The estimated publication-citation records returned by our method have an average sensitivity of 98% and specificity of 72% (in contrast to a raw search result specificity of less than 10%). When citation indices are computed using these records, the estimated indices are within 10% of the true value, compared to raw search results which have overestimates of, on average, 75%. CONCLUSIONS: These results confirm that our method provides
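
    The filtering idea, keeping only hits whose author field matches the target name and whose title shares the supplied domain vocabulary, can be sketched generically. The record structure, vocabulary, and h-index step below are illustrative assumptions, not the paper's implementation or any search engine's API:

```python
# Hypothetical raw search hits; in practice these would come from a publication search engine.
hits = [
    {"authors": "D Ruths; L Nakhleh", "title": "Network inference from perturbation data", "citations": 40},
    {"authors": "D Ruth",             "title": "Cooking with cast iron",                    "citations": 3},
    {"authors": "D Ruths",            "title": "Signaling pathway models of cell decisions", "citations": 12},
]

target_name = "ruths"
vocabulary = {"network", "inference", "signaling", "pathway", "cell"}

def keep(hit):
    name_ok = target_name in hit["authors"].lower()
    title_words = set(hit["title"].lower().split())
    content_ok = len(title_words & vocabulary) > 0   # at least one domain term in the title
    return name_ok and content_ok

filtered = [h for h in hits if keep(h)]

# h-index of the filtered record: largest h with at least h papers cited >= h times
cites = sorted((h["citations"] for h in filtered), reverse=True)
h_index = sum(1 for i, c in enumerate(cites, start=1) if c >= i)
print(len(filtered), "papers kept, h-index =", h_index)
```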

  17. Test-retest reliability of concurrently recorded steady-state and somatosensory evoked potentials in somatosensory sustained spatial attention.

    Pang, Cheuk Yee; Mueller, Matthias M


    We investigated the test-retest reliability of sustained spatial attention modulation of steady-state somatosensory evoked potentials (SSSEPs) and the N140 component of the somatosensory evoked potentials (SEPs). Participants attended to one or both hands to perform a target detection task while concurrent mechanical vibrations were presented for 4,500 ms to both hands in two recording sessions. Results revealed that the amplitude and the attentional modulation of SSSEPs had high test-retest reliability, while the test-retest reliability for the N140 component was low. SSSEPs for stimuli with focused and divided attention had about the same amplitude. For the N140 component only the stimuli with focused attention were significantly enhanced. We found greater habituation effects for the N140 compared to SSSEP amplitudes but attentional modulation was unaffected in both signals. Given the high test-retest reliability of SSSEP amplitude modulation with attention, SSSEPs serve as an excellent tool for studying sustained spatial attention in somatosensation.

  18. Validity and Reliability of a New Measure of Nursing Experience With Unintended Consequences of Electronic Health Records.

    Gephart, Sheila M; Bristol, Alycia A; Dye, Judy L; Finley, Brooke A; Carrington, Jane M


    Unintended consequences of electronic health records represent undesired effects on individuals or systems, which may contradict initial goals and impact patient care. The purpose of this study was to determine the extent to which a new quantitative measure called the Carrington-Gephart Unintended Consequences of Electronic Health Record Questionnaire (CG-UCE-Q) was valid and reliable. Then, it was used to describe acute care nurses' experience with unintended consequences of electronic health records and relate them to the professional practice environment. Acceptable content validity was achieved for two rounds of surveys with nursing informatics experts (n = 5). Then, acute care nurses (n = 144) were recruited locally and nationally to complete the survey and describe the frequency with which they encounter unintended consequences in daily work. Principal component analysis with oblique rotation was applied to evaluate construct validity. Correlational analysis with measures of the professional practice environment and workarounds was used to evaluate convergent validity. Test-retest reliability was measured in the local sample (N = 68). Explanation for 63% of the variance across six subscales (patient safety, system design, workload issues, workarounds, technology barriers, and sociotechnical impact) supported construct validity. Relationships were significant between subscales for electronic health record-related threats to patient safety and low autonomy/leadership (P < .01), poor communication about patients (P < .01), and low control over practice (P < .01). The most frequent sources of unintended consequences were increased workload, interruptions that shifted tasks from the computer, altered workflow, and the need to duplicate data entry. Convergent validity of the CG-UCE-Q was moderately supported with both the context and processes of workarounds with strong relationships identified for when nurses perceived a block and altered process to work around it

  19. Observer Error when Measuring Safety-Related Behavior: Momentary Time Sampling versus Whole-Interval Recording

    Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.


    Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…

  1. Recording of natural head position using stereophotogrammetry: a new technique and reliability study.

    Hsung, Tai-Chiu; Lo, John; Li, Tik-Shun; Cheung, Lim-Kwong


    The purpose of this study was to develop a technique to record physical references and orient digital mesh models to a natural head position using stereophotogrammetry (SP). The first step was to record the digital mesh model of a hanging reference board placed at the capturing position of the SP machine. The board was aligned to true vertical using a plumb bob. It also was aligned with a laser plane parallel to a hanging mirror, which was located at the center of the machine. The parameter derived from the digital mesh model of the board was used to adjust the roll, pitch, and yaw of the subsequent captures of patients' facial images. This information was valid until the next machine calibration. The board placement was repeatable, with standard deviations less than 0.1° for pitch and yaw angles and 0.15° for roll angles.

  2. Estimating Value of Congestion and of Reliability from Observation of Route Choice Behavior of Car Drivers

    Prato, Carlo Giacomo; Rasmussen, Thomas Kjær; Nielsen, Otto Anker


    …both congestion and reliability terms. Results illustrated that the value of time and the value of congestion were significantly higher in the peak period because of possible higher penalties for drivers being late and consequently possible higher time pressure. Moreover, results showed that the marginal rate of substitution between travel time reliability and total travel time did not vary across periods and traffic conditions, with the obvious caveat that the absolute values were significantly higher for the peak period. Last, results showed the immense potential of exploiting the growing availability of large amounts of data from cheap and enhanced technology to obtain estimates of the monetary value of different travel time components from the observation of actual behavior, with arguably significant potential impact on the realism of large-scale models.

  3. Data Quality and Reliability Analysis of U.S. Marine Corps Ground Vehicle Maintenance Records


    order fulfillment. GCSS-MC reduces ordered parts status updates from six days to several minutes (Stone, 2009). This thesis focuses on the data ... Corps inventory. Our study, however, focuses only on the MTVR data. The numbers of MTVR records provided by year are shown in Table 4 and Table 5. [Tabulated repair-code fragments for suspension, track, and brake systems omitted.]

  4. Reliability of geomagnetic secular variations recorded in a loess section at Lingtai, north-central China


    An investigation of the rock magnetic properties using stepwise isothermal remanence (IRM) acquisition, thermomagnetic analysis and temperature-dependent susceptibility history, identifies magnetite as the carrier of the main fraction of the remanence, associated with maghemite and hematite in Malan loess (L1), Holocene soil (S0) and last-glacial paleosol (S1). The presence of short-lived direction fluctuations indicates that no significant smoothing occurs in L1 when its remanence is locked, and thus L1 is capable of recording the geomagnetic secular variation (PSV), while the PSV has been severely smoothed or wiped out by pedogenic processes during S1 formation. It has been suggested that the Mono Lake and Laschamp excursions are two independent geomagnetic events based on this study.

  5. Reliability of geomagnetic secular variations recorded in a loess section at Lingtai, north-central China

    朱日祥; 郭斌; 潘永信; 刘青松; A.Zeman; V.Suchy


    An investigation of the rock magnetic properties using stepwise isothermal remanence (IRM) acquisition, thermomagnetic analysis and temperature-dependent susceptibility history, identifies magnetite as the carrier of the main fraction of the remanence, associated with maghemite and hematite in Malan loess (L1), Holocene soil (S0) and last-glacial paleosol (S1). The presence of short-lived direction fluctuations indicates that no significant smoothing occurs in L1 when its remanence is locked, and thus L1 is capable of recording the geomagnetic secular variation (PSV), while the PSV has been severely smoothed or wiped out by pedogenic processes during S1 formation. It has been suggested that the Mono Lake and Laschamp excursions are two independent geomagnetic events based on this study.

  6. A Reliability Assessment of Participant Observational Measures of Leader Behavior in Natural Settings.


    significant increases in accuracy. Such practice exercises stem from the modeling principles of social learning theory (Bandura, 1977). Using videotaped ... categories, and the trainees used the instrument to record the behaviors they observed. By following the principles of social learning theory (Bandura, 1976) ... [Reference-list fragments omitted.]

  7. Large Extent Volunteer Roadkill and Wildlife Observation Systems as Sources of Reliable Data

    David P. Waetjen


    Full Text Available Large-extent wildlife-reporting systems have sets of goals and methods to facilitate standardized data collection, statistical analysis, informative visualizations, and use in decision-making within the system area. Many systems employ "crowds" of volunteers to collect these data at large spatial extents (e.g., US state or small country scale), especially along roadways. This raises the important question of how these systems could be standardized and the data made broadly useful in ecological and transportation studies, i.e., beyond the system area or goals. We describe two of the first and longest-running systems for volunteer observation of road-associated wildlife (live and dead) at the US state scale. The California Roadkill Observation System (CROS) uses a form-based data entry system to report carcasses resulting from wildlife-vehicle collisions (WVC). Operating since 2009, it currently (June 2017) contains 1,338 users and >54,000 observations of 424 species of ground-dwelling vertebrates and birds, making it one of the most successful examples of crowd-sourced, roadkill and wildlife reporting. Its sister system, the Maine Audubon Wildlife Road Watch, has a similar structure, and can accept data from transect surveys, animal tracks and scat observations, and reports of "no animal observed." Both systems can operate as web-applications on a smart-phone (using a web browser), providing the ability to enter observations in the field. Locational accuracy for California observations was estimated to be ±14 m (n = 552 records). Species identification accuracy rate for observations with photographs was 97% (n = 3,700 records). We propose that large extent, volunteer systems can be used to monitor wildlife occurrences along or away from roads and that these observations can be used to inform ecological studies and transportation mitigation planning.

  8. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Chang-bae Moon


    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment shared with humans. Reliability of localization is highly dependent upon the developer's experience because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: 1) How to design an observation likelihood model? 2) How to detect localization failure? 3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
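
    As a generic illustration of the first question, a common observation likelihood model for range scans mixes a Gaussian centred on the expected reading with a uniform random-measurement term. The sketch below is a textbook-style mixture with invented parameters, not the authors' implementation:

```python
import numpy as np

def scan_likelihood(measured, expected, sigma=0.05, z_hit=0.9, z_rand=0.1, z_max=10.0):
    """Per-pose likelihood of a range scan under a hit + random mixture model.
    measured, expected: arrays of beam ranges in metres (expected comes from ray-casting the map)."""
    measured = np.asarray(measured, float)
    expected = np.asarray(expected, float)
    gauss = np.exp(-0.5 * ((measured - expected) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    beam_p = z_hit * gauss + z_rand / z_max
    # Summing log-probabilities avoids underflow when many beams are multiplied
    return np.exp(np.sum(np.log(beam_p)))

print(scan_likelihood([2.02, 3.51, 1.48], [2.00, 3.50, 1.50]))
```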

  9. Observers can reliably identify illusory flashes in the illusory flash paradigm.

    van Erp, Jan B F; Philippi, Tom G; Werkhoven, Peter


    In the illusory flash paradigm, a single flash may be experienced as two flashes when accompanied by two beeps or taps, and two flashes may be experienced as a single flash when accompanied by one beep or tap. The classic paradigm restricts responses to '1' and '2' (2-AFC), ignoring possible qualitative differences between real and illusory flashes and implicitly assuming that illusory flashes are indistinguishable from real flashes. We added a third response category 'different from that of either 1 or 2 flashes' (3-AFC). Eight naïve and 6 experienced observers responded to 160 real and 160 illusory flash trials. Experienced observers were exposed to 1,200 trials before the experiment but without receiving feedback on their performance. The third response category was used for only 4 % of the real flash trials and for 44 % of the illusory flash trials. Experienced observers did so more often (78 %) than naïve observers (18 %). This shows that observers can reliably identify illusory flashes and indicates that mere exposure to illusory flash trials (without feedback) is enough to detect and classify potential qualitative differences between real and illusory flashes.

  10. Inter-observer reliability of radiographic classifications and measurements in the assessment of Perthes' disease.

    Wiig, Ola; Terjesen, Terje; Svenningsen, Svein


    We evaluated the inter-observer agreement of radiographic methods used to assess patients with Perthes' disease. The radiographs were assessed at the time of diagnosis and at the 1-year follow-up by local orthopaedic surgeons (O) and 2 experienced pediatric orthopedic surgeons (TT and SS). The Catterall, Salter-Thompson, and Herring lateral pillar classifications were compared, and the femoral head coverage (FHC), center-edge angle (CE-angle), and articulo-trochanteric distance (ATD) were measured in the affected and normal hips. On the primary evaluation, the lateral pillar and Salter-Thompson classifications had a higher level of agreement among the observers than the Catterall classification, but none of the classifications showed good agreement (weighted kappa values between O and SS 0.56, 0.54, 0.49, respectively). Combining Catterall groups 1 and 2 into one group, and groups 3 and 4 into another resulted in better agreement (kappa 0.55) than with the original 4-group system. The agreement was also better (kappa 0.62-0.70) between experienced than between less experienced examiners for all classifications. The femoral head coverage was a more reliable and accurate measure than the CE-angle for quantifying the acetabular covering of the femoral head, as indicated by higher intraclass correlation coefficients (ICC) and smaller inter-observer differences. The ATD showed good agreement in all comparisons and had low interobserver differences. We conclude that all classifications of femoral head involvement are adequate in clinical work if the radiographic assessment is done by experienced examiners. When examiners are less experienced, a 2-group classification or the lateral pillar classification is more reliable. For evaluation of containment of the femoral head, FHC is more appropriate than the CE-angle.

  11. Direct observation of family management: validity and reliability as a function of coder ethnicity and training.

    Yasui, Miwa; Dishion, Thomas J


    This study examines the influence of coder ethnicity on the validity and reliability of direct observations of family management. Eight coders, 4 European American (EA) and 4 African American (AA), were randomly assigned to conduct behavior ratings of videotaped family interactions of European American and African American families, under two conditions: untrained and trained. Results indicated statistical differences between EA and AA coder ratings of family management practices across both untrained and trained conditions, suggesting the presence of ethnocentric perceptions of coders. Specifically, EA coders tended to rate AA families as exhibiting poorer family management skills compared with those of EA families. AA coder ratings for EA and for AA families showed no statistical differences. Although not statistically significant, posttraining coding results indicated a trend toward decreased differences among coder perceptions, especially in improving the validity and reliability of EA coder ratings of AA families. These findings are discussed with respect to recommendations for cross-cultural research as well as general theories of ethnic socialization.

  12. Pyrometamorphism of Fault Zone Rocks Induced by Frictional Heating in High-velocity Friction Tests: Reliable Records of Seismic Slip?

    Ree, J.; Ando, J.; Kim, J.; Han, R.; Shimamoto, T.


    Recognition of seismic slip zones is important for a better understanding of earthquake generation processes in fault zones and paleoseismology. However, there has been no reliable record of ancient seismic slip except pseudotachylyte. Recently, it has been suggested that decomposition (dehydration or decarbonation) products due to frictional heating can be used as a seismic slip record. The decomposition products, however, can be easily rehydrated or recarbonated with pervasive fluid migration in the fault zone after seismic slip, raising some question about their stability as a seismic slip record. Here, we review microstructural and mineralogical changes of the simulated fault zones induced by frictional heating (pyrometamorphism) from high-velocity friction tests (HVFT) on siltstone, sandstone and carbonates at seismic slip rates, and discuss their stability after seismic slip. HVFT on siltstone generates pseudotachylyte in the principal slip zone (0.30-0.75 mm thick) with 'damage' layer (0.1-0.2 mm thick) along its margins. Chlorite in the damage layer suffers an incipient dehydration with many voids (0.2-1.0 μm in diameter) in transmission electron microscopy (TEM), appearing as dark tiny spots both in plane-polarized light and back-scattered electron (BSE) photomicrographs. HVFT on brown sandstone induces a color change of wall rocks adjacent to the principal slip zone (brown to red) due to the dehydration of iron hydroxides with frictional heating. These dehydration products in siltstone and sandstone due to frictional heating may be unstable since they would be easily rehydrated with fluid infiltration after a seismic slip. HVFT on carbonates including Carrara marble and siderite-bearing gouges produces decarbonation products of nano-scale lime (CaO) and magnetite (Fe3O4), respectively. Lime is a very unstable phase whereas magnetite is stable and thus may be used as an indicator of seismic slip. The simulated fault zones of Carrara marble contain

  13. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    Kevin A. Hallgren


    Full Text Available Many research designs require the assessment of inter-rater reliability (IRR to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR.
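
    The tutorial's worked examples are in SPSS and R; an equivalent hand computation of Cohen's kappa (observed agreement corrected for chance agreement) can be written in a few lines. The ratings below are invented:

```python
from collections import Counter

# Hypothetical nominal codes from two coders for the same 12 observations
coder_a = ["on", "off", "on", "on", "off", "on", "off", "on", "on", "off", "on", "off"]
coder_b = ["on", "off", "on", "off", "off", "on", "off", "on", "on", "on", "on", "off"]

n = len(coder_a)
p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement from each coder's marginal category proportions
pa, pb = Counter(coder_a), Counter(coder_b)
p_chance = sum((pa[c] / n) * (pb[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa = {kappa:.2f}")
```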

  14. A secure and reliable monitor and control system for remote observing with the Large Millimeter Telescope

    Wallace, Gary; Souccar, Kamal; Malin, Daniella


    Remote access to telescope monitor and control capabilities necessitates strict security mechanisms to protect the telescope and instruments from malicious or unauthorized use, and to prevent data from being stolen, altered, or corrupted. The Large Millimeter Telescope (LMT) monitor and control system (LMTMC) utilizes the Common Object Request Broker Architecture (CORBA) middleware technology to connect remote software components. The LMTMC provides reliable and secure remote observing by automatically generating SSLIOP enabled CORBA objects. TAO, the ACE open source Object Request Broker (ORB), now supports secure communications by implementing the Secure Socket Layer Inter-ORB Protocol (SSLIOP) as a pluggable protocol. This capability supplies the LMTMC with client and server authentication, data integrity, and encryption. Our system takes advantage of the hooks provided by TAO SSLIOP to implement X.509 certificate based authorization. This access control scheme includes multiple authorization levels to enable granular access control.
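
    The SSLIOP and CORBA specifics belong to TAO, but the underlying pattern of mutual authentication with X.509 certificates can be shown with Python's standard ssl module. This is a generic illustration with placeholder file names, not the LMTMC code:

```python
import ssl

def make_server_context(cert: str, key: str, ca_bundle: str) -> ssl.SSLContext:
    """Build a TLS server context that requires client certificates (mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert, keyfile=key)
    ctx.load_verify_locations(cafile=ca_bundle)
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid certificate
    return ctx

# Usage (paths are placeholders): wrap the listening socket with this context, then
# inspect the peer certificate (e.g., its subject) to assign an authorization level.
# ctx = make_server_context("server-cert.pem", "server-key.pem", "observers-ca.pem")
```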

  15. WelFur-mink: inter-observer reliability of on-farm welfare assessment in the growth season

    Møller, Steen Henrik; Rousing, Tine; Hansen, Steffen W


    …the consequences of operating with several observers. Animal-based measures on 9 Danish mink farms were taken in November 2011. Eight observers individually, but in pairs at herd level, carried out data collection on the measures involving subjective grading, e.g. mink "activity", "injuries" and "fur-chewing", on approximately 120 cages with mink per farm. The assessments of the two observers gave similar frequencies of welfare problems and thus similar welfare assessments. The individual problems observed were, however, not the same, leading to poor or fair, but rarely good, inter-observer reliability. Despite the skilled assessors, the short training was not sufficient to get highly reliable results. No overall difference was found between the inter-observer reliability of cages with ≤2 or ≥3 mink in a cage. More training and better training material and, for some measures, observation procedures are needed in order...

  16. Utility and Reliability of an App for the System for Observing Play and Recreation in Communities (iSOPARC®)

    Santos, Maria P. M.; Rech, Cassiano R.; Alberico, Claudia O.; Fermino, Rogério C.; Rios, Ana P.; David, João; Reis, Rodrigo S.; Sarmiento, Olga L.; McKenzie, Thomas L.; Mota, Jorge


    The app for the System for Observing Play and Recreation in Communities (iSOPARC®) was developed to enhance System for Observing Play and Recreation in Communities data collection and management. The study aim was to examine the usability and inter-rater reliability of iSOPARC®. Trained observers collected data in 16 park areas in two Latin…

  17. Reliability and validity of the Edinburgh Visual Gait Score for cerebral palsy when used by inexperienced observers.

    Ong, A M L; Hillman, S J; Robb, J E


    The Edinburgh Visual Gait Score (EVGS) for cerebral palsy has been validated for observer reliability and validity for observers experienced in gait analysis. This study investigated the reliability and validity of the EVGS for observers inexperienced in gait analysis. Six medical students used the score to analyse videotapes from the original study by Read et al. [Read HS, Hazlewood ME, Hillman SJ, Prescott RJ, Robb JE. Edinburgh visual gait score for use in cerebral palsy. J Pediatr Orthop 2003;23:296-301]. These were viewed on two separate occasions to provide inter- and intra-observer reliability, and the results of the numerical items were compared to those from three-dimensional (3D) gait analyses for validity. Observer agreement was tested using Coefficient of Repeatability (CoR), percentage of complete agreement and the kappa statistic. The CoR for inter-observer agreement for inexperienced observers was 5.99/5.07 (Session 1/Session 2) compared to 4.60/3.95 (Session 1/Session 2) for experienced observers. The CoR for intra-observer agreement for inexperienced observers was 5.15 compared to 4.21 for experienced observers. There was complete agreement for 52% of the 10 numerical items with 3D-gait analysis data for inexperienced observers compared to 64% for experienced observers. Ranking of reliability of individual items was similar between the two groups and was generally best for events occurring at the foot and ankle. Observations of gait events by the inexperienced observers using the EVGS were reasonably reliable but not very accurate when compared to experienced observers and 3D-gait analysis.

  18. Cost-efficient measurement strategies for posture observations based on video recordings.

    Mathiassen, Svend Erik; Liv, Per; Wahlström, Jens


    Assessment of working postures by observation is a common practice in ergonomics. The present study investigated whether monetary resources invested in a video-based posture observation study should preferably be spent in collecting many video recordings of the work and have them observed once by one observer, or in having multiple observers rate postures repeatedly from fewer videos. The study addressed this question from a practitioner's perspective by focusing on two plausible scenarios: documenting the mean exposure of one individual, and of a specific occupational group. Using a data set of observed working postures among hairdressers, empirical values of posture variability, observer variability, and costs for recording and observing one video were entered into equations expressing the total cost of data collection and the information (defined as 1/SD) provided by the resulting estimates of two variables: percentage time with the arm elevated 90°. Sixteen measurement strategies involving 1-4 observers repeating their posture ratings 1-4 times were examined for budgets up to €2000. For both posture variables and in both the individual and group scenario, the most cost-efficient strategy at any specific budget was to engage 3-4 observers and/or have observer(s) rate postures multiple times each. Between 17% and 34% less information was produced when using the commonly practiced approach of having one observer rate a number of video recordings one time each. We therefore recommend observational posture assessment to be based on video recordings of work, since this allows for multiple observations; and to allocate monetary resources to repeated observations rather than many video recordings.
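
    The comparison of strategies under a budget can be expressed in a few lines once variance components and unit costs are assumed. The numbers below are invented rather than taken from the hairdresser data set; information is defined, as in the abstract, as 1/SD of the resulting mean-exposure estimate, and the variance model (observer and repetition effects lumped into one component) is a deliberate simplification.

```python
import itertools, math

# Assumed variance components and unit costs (illustrative only)
var_between_videos = 10.0    # true exposure variability across recordings
var_observer = 60.0          # observer-related rating variability
cost_video, cost_rating = 30.0, 10.0
budget = 600.0

best = None
for n_videos, n_obs, n_reps in itertools.product(range(2, 21), range(1, 5), range(1, 5)):
    cost = n_videos * cost_video + n_videos * n_obs * n_reps * cost_rating
    if cost > budget:
        continue
    # Variance of the estimated mean exposure: the video component shrinks with more videos,
    # the observer component shrinks with the total number of ratings collected
    var_mean = var_between_videos / n_videos + var_observer / (n_videos * n_obs * n_reps)
    information = 1.0 / math.sqrt(var_mean)
    if best is None or information > best[0]:
        best = (information, n_videos, n_obs, n_reps, cost)

info, nv, no, nr, c = best
print(f"best within budget: {nv} videos x {no} observer(s) x {nr} repeat(s) "
      f"(cost {c:.0f}), information = {info:.2f}")
```

    With these assumed components (observer variability large relative to between-video variability), the search favours spending on repeated ratings of fewer videos rather than single ratings of many videos, mirroring the qualitative conclusion above; different assumed components can reverse that ranking.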

  19. In vivo application of an optical segment tracking approach for bone loading regimes recording in humans: a reliability study.

    Yang, Peng-Fei; Sanno, Maximilian; Ganse, Bergita; Koy, Timmo; Brüggemann, Gert-Peter; Müller, Lars Peter; Rittweger, Jörn


    This paper demonstrates an optical segment tracking (OST) approach for assessing the in vivo bone loading regimes in humans. The relative movement between retro-reflective marker clusters affixed to the tibia cortex by bone screws was tracked and expressed as tibia loading regimes in terms of segment deformation. Stable in vivo fixation of bone screws was tested by assessing the resonance frequency of the screw-marker structure and the relative marker position changes after hopping and jumping. Tibia deformation was recorded during squatting exercises to demonstrate the reliability of the OST approach. Results indicated that the resonance frequencies remained unchanged before and after all exercises. The changes of Cardan angle between marker clusters induced by the exercises were rather minor, maximally 0.06°. The variability of the deformation angles between repeated squatting trials remained small (0.04°/m-0.65°/m). Most importantly, all surgical and testing procedures were well tolerated. The OST method promises to bring more insight into the mechanical loading acting on bone than was previously possible.
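
    Cardan angles between two marker-cluster coordinate systems, of the kind used here to express tibia deformation, can be extracted from their relative rotation. A hedged sketch with synthetic orientations using SciPy's rotation utilities (not the authors' processing pipeline):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Synthetic orientations of the proximal and distal marker clusters
proximal = R.from_euler("xyz", [0.0, 0.0, 0.0], degrees=True)
distal = R.from_euler("xyz", [0.03, -0.02, 0.05], degrees=True)   # tiny deformation-like twist

# Relative rotation of the distal cluster expressed in the proximal cluster frame
relative = proximal.inv() * distal
cardan_deg = relative.as_euler("xyz", degrees=True)
print("Cardan angles (deg):", np.round(cardan_deg, 3))
```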

  20. When the third party observer of a neuropsychological evaluation is an audio-recorder.

    Constantinou, Marios; Ashendorf, Lee; McCaffrey, Robert J


    The presence of third parties during neuropsychological evaluations is an issue of concern for contemporary neuropsychologists. Previous studies have reported that the presence of an observer during neuropsychological testing alters the performance of individuals under evaluation. The present study sought to investigate whether audio-recording affects the neuropsychological test performance of individuals in the same way that third party observation does. In the presence of an audio-recorder the performance of the participants on memory tests declined. Performance on motor tests, on the other hand, was not affected by the presence of an audio-recorder. The implications of these findings in forensic neuropsychological evaluations are discussed.

  1. WelFur-mink: inter-observer reliability of on-farm welfare assessment in the growth season

    Møller, Steen Henrik; Rousing, Tine; Hansen, Steffen W


    A welfare assessment system should be "high" in validity, robustness and feasibility - the latter both as regards time and costs. Therefore, observers must be able to perform the on-farm assessment with acceptable validity after some training. Based on empirical data, this paper evaluates … assessors, the short training was not sufficient to get highly reliable results. No overall difference was found between the inter-observer reliability of cages with ≤2 or ≥3 mink in a cage. More training and better training material and, for some measures, observation procedures are needed in order...


  2. Early parent-infant interactions: are health visitors' observations reliable? Cross-sectional study of Marte Meo therapists' competence.

    Kristensen, Ingeborg H; Trillingsgaard, Tea; Simonsen, Marianne; Kronborg, Hanne


    Health visitors need competences to promote healthy early parent-infant relationships. The aims of this study were to explore whether there are differences between groups of health visitors with and without additional parenting program education in terms of their knowledge of infant-parent interaction and their observation and assessment skills of such interactions. The cross-sectional study included 36 health visitors certified as Marte Meo therapists and 85 health visitors without additional parenting program education. Health visitors' observation skills were measured by assessing five video-recorded mother-infant interactions. A questionnaire was used to measure their intention, self-efficacy, and knowledge. Certified Marte Meo therapists reported a significantly higher mean level of knowledge of the early relationship than health visitors without additional parenting program education, 6.42 (95% CI; 6.18-6.66) versus 5.05 (95% CI; 4.86-6.10), p = .04, and a higher mean level of knowledge of infant self-regulation, 2.44 (95% CI; 2.18-2.71) versus 1.83 (95% CI; 1.62-2.03), p < .001. In the latter group, significantly more health visitors reported a need for further education, 54% (95% CI; 0.43-0.64) versus 22% (95% CI; 0.11-0.39), p = .001. Compared to health visitors without any parenting program education, health visitors certified as Marte Meo therapists reported a significantly higher frequency of correct assessment of mothers' sensitivity in two of five video-recordings, with 77.78% (95% CI; 0.61-0.87) compared to 45.88% (95% CI; 0.35-0.57) in Video 3, p = .001, and 69.44% (95% CI; 0.52-0.82) compared to 49.41% (95% CI; 0.39-0.60) in Video 4, p = .04, respectively. The results of the present study support the use of video-based education of health visitors to increase their knowledge of and skills in assessing parent-infant interactions. Randomized controlled

  3. Gait in children with cerebral palsy - Observer reliability of Physician Rating Scale and Edinburgh Visual Gait Analysis Interval Testing Scale

    Maathuis, KGB; van der Schans, CP; van Iperen, A; Rietman, HS; Geertzen, JHB


    The aim of this study was to test the inter- and intra-observer reliability of the Physician Rating Scale (PRS) and the Edinburgh Visual Gait Analysis Interval Testing (GAIT) scale for use in children with cerebral palsy (CP). Both assessment scales are quantitative observational scales, evaluating

  4. Gait in children with cerebral palsy : observer reliability of Physician Rating Scale and Edinburgh Visual Gait Analysis Interval Testing scale

    Maathuis, KGB; van der Schans, CP; van Iperen, A; Rietman, HS; Geertzen, JHB


    The aim of this study was to test the inter- and intra-observer reliability of the Physician Rating Scale (PRS) and the Edinburgh Visual Gait Analysis Interval Testing (GAIT) scale for use in children with cerebral palsy (CP). Both assessment scales are quantitative observational scales, evaluating

  5. The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement.

    Eric I Benchimol


    Full Text Available Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research goals, are increasingly used for research. The rapid evolution and availability of these data have revealed issues not addressed by existing reporting guidelines, such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). The REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement was created to fill these gaps. RECORD was created as an extension to the STROBE statement to address reporting items specific to observational studies using routinely collected health data. RECORD consists of a checklist of 13 items related to the title, abstract, introduction, methods, results, and discussion section of articles, and other information required for inclusion in such research reports. This document contains the checklist and explanatory and elaboration information to enhance the use of the checklist. Examples of good reporting for each RECORD checklist item are also included herein. This document, as well as the accompanying website and message board, will enhance the implementation and understanding of RECORD. Through implementation of RECORD, authors, journal editors, and peer reviewers can encourage transparency of research reporting.

  6. Gait and Lower Limb Observation of Paediatrics (GALLOP): development of a consensus based paediatric podiatry and physiotherapy standardised recording proforma.

    Cranage, Simone; Banwell, Helen; Williams, Cylie M


    Paediatric gait and lower limb assessments are frequently undertaken in podiatry and physiotherapy clinical practice and this is a growing area of expertise within Australia. No concise paediatric standardised recording proforma exists to assist clinicians in clinical practice. The aim of this study was to develop a gait and lower limb standardised recording proforma guided by the literature and consensus, for assessment of the paediatric foot and lower limb in children aged 0-18 years. Expert Australian podiatrists and physiotherapists were invited to participate in a three-round Delphi survey panel using the online Qualtrics© survey platform. The first round of the survey consisted of open-ended questions on paediatric gait and lower limb assessment developed from existing templates and a literature search of standardised lower limb assessment methods. Rounds two and three consisted of statements developed from the first round responses. Questions and statements were included in the final proforma if 70% or more of the participants indicated consensus or agreement with the assessment method and if there was support within the literature for paediatric age-specific normative data with acceptable reliability of outcome measures. Seventeen of the 21 participants (81%) completed all three rounds of the survey. Consensus was achieved for 41 statements in Round one, and 54 statements achieved agreement in the two subsequent rounds. Participants agreed on 95 statements relating to birth history, developmental history, hip measurement, rotation of the lower limb, ankle range of motion, foot posture, balance and gait. Assessments with acceptable validity and reliability were included within the final Gait and Lower Limb Observation of Paediatrics (GALLOP) proforma. The GALLOP proforma is a consensus-based, systematic and standardised way to collect information and outcome measures in paediatric lower limb assessment. This standardised recording proforma will assist

  7. Validity and inter-observer reliability of subjective hand-arm vibration assessments

    Coenen, P.; Formanoy, M.; Douwes, M.; Bosch, T.; Kraker, H. de


    Exposure to mechanical vibrations at work (e.g., due to handling powered tools) is a potential occupational risk as it may cause upper extremity complaints. However, reliable and valid assessment methods for vibration exposure at work are lacking. Measuring hand-arm vibration objectively is often di

  8. Scoring haemophilic arthropathy on X-rays: improving inter- and intra-observer reliability and agreement using a consensus atlas

    Foppen, Wouter; Schaaf, Irene C. van der; Beek, Frederik J.A. [University Medical Center Utrecht, Department of Radiology (Netherlands); Verkooijen, Helena M. [University Medical Center Utrecht, Department of Radiology (Netherlands); University Medical Center Utrecht, Julius Center for Health Sciences and Primary Care, Utrecht (Netherlands); Fischer, Kathelijn [University Medical Center Utrecht, Julius Center for Health Sciences and Primary Care, Utrecht (Netherlands); University Medical Center Utrecht, Van Creveldkliniek, Department of Hematology, Utrecht (Netherlands)


    The radiological Pettersson score (PS) is widely applied for classification of arthropathy to evaluate costly haemophilia treatment. This study aims to assess and improve inter- and intra-observer reliability and agreement of the PS. Two series of X-rays (bilateral elbows, knees, and ankles) of 10 haemophilia patients (120 joints) with haemophilic arthropathy were scored by three observers according to the PS (maximum score 13/joint). Subsequently, (dis-)agreement in scoring was discussed until consensus. Example images were collected in an atlas. Thereafter, second series of 120 joints were scored using the atlas. One observer rescored the second series after three months. Reliability was assessed by intraclass correlation coefficients (ICC), agreement by limits of agreement (LoA). Median Pettersson score at joint level (PS_joint) of affected joints was 6 (interquartile range 3-9). Using the consensus atlas, inter-observer reliability of the PS_joint improved significantly from 0.94 (95 % confidence interval (CI) 0.91-0.96) to 0.97 (CI 0.96-0.98). LoA improved from ±1.7 to ±1.1 for the PS_joint. Therefore, true differences in arthropathy were differences in the PS_joint of >2 points. Intra-observer reliability of the PS_joint was 0.98 (CI 0.97-0.98), intra-observer LoA were ±0.9 points. Reliability and agreement of the PS improved by using a consensus atlas. (orig.)

  9. Observer reliability of CT angiography in the assessment of acute ischaemic stroke: data from the Third International Stroke Trial

    Mair, Grant; Farrall, Andrew J.; Sellar, Robin J.; Mollison, Daisy; Sakka, Eleni; Palmer, Jeb; Wardlaw, Joanna M. [University of Edinburgh, Western General Hospital, Division of Neuroimaging Sciences, Edinburgh (United Kingdom); Kummer, Ruediger von [Dresden University Stroke Centre, University Hospital, Department of Neuroradiology, Dresden (Germany); Adami, Alessandro [Sacro Cuore-Don Calabria Hospital, Stroke Center, Department of Neurology, Negrar (Italy); White, Philip M. [Stroke Research Group, Newcastle upon Tyne (United Kingdom); Adams, Matthew E. [National Hospital for Neurology and Neurosurgery, Department of Neuroradiology, London (United Kingdom); Yan, Bernard [Royal Melbourne Hospital, Neurovascular Research Group, Parkville (Australia); Demchuk, Andrew M. [Calgary Stroke Program, Department of Clinical Neurosciences, Calgary (Canada); Ramaswamy, Rajesh; Rodrigues, Mark A.; Samji, Karim; Baird, Andrew J. [Royal Infirmary of Edinburgh, Department of Radiology, Edinburgh (United Kingdom); Boyd, Elena V. [Northwick Park Hospital, Department of Radiology, Harrow (United Kingdom); Cohen, Geoff; Perry, David; Sandercock, Peter A.G. [University of Edinburgh, Western General Hospital, Division of Clinical Neurosciences, Edinburgh (United Kingdom); Lindley, Richard [University of Sydney, Westmead Hospital Clinical School and The George Institute for Global Health, Sydney (Australia); Collaboration: The IST-3 Collaborative Group


    CT angiography (CTA) is often used for assessing patients with acute ischaemic stroke. Only limited observer reliability data exist. We tested inter- and intra-observer reliability for the assessment of CTA in acute ischaemic stroke. We selected 15 cases from the Third International Stroke Trial (IST-3, ISRCTN25765518) with various degrees of arterial obstruction in different intracranial locations on CTA. To assess inter-observer reliability, seven members of the IST-3 expert image reading panel (>5 years experience reading CTA) and seven radiology trainees (<2 years experience) rated all 15 scans independently and blind to clinical data for: presence (versus absence) of any intracranial arterial abnormality (stenosis or occlusion), severity of arterial abnormality using relevant scales (IST-3 angiography score, Thrombolysis in Cerebral Infarction (TICI) score, Clot Burden Score), collateral supply and visibility of a perfusion defect on CTA source images (CTA-SI). Intra-observer reliability was assessed using independently repeated expert panel scan ratings. We assessed observer agreement with Krippendorff's-alpha (K-alpha). Among experienced observers, inter-observer agreement was substantial for the identification of any angiographic abnormality (K-alpha = 0.70) and with an angiography assessment scale (K-alpha = 0.60-0.66). There was less agreement for grades of collateral supply (K-alpha = 0.56) or for identification of a perfusion defect on CTA-SI (K-alpha = 0.32). Radiology trainees performed as well as expert readers when additional training was undertaken (neuroradiology specialist trainees). Intra-observer agreement among experts provided similar results (K-alpha = 0.33-0.72). For most imaging characteristics assessed, CTA has moderate to substantial observer agreement in acute ischaemic stroke. Experienced readers and those with specialist training perform best. (orig.)
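
    The agreement statistic reported above is Krippendorff's alpha; the sketch below shows one standard way to compute it for nominal ratings via the coincidence-matrix formulation. The rating matrix (7 observers by 15 scans, binary presence/absence of an abnormality) is invented for illustration and is not the IST-3 data.

```python
# Illustrative computation of Krippendorff's alpha for nominal ratings, using the
# standard coincidence-matrix formulation (rows = observers, columns = units,
# np.nan = missing). The rating matrix below is synthetic.
import numpy as np

def krippendorff_alpha_nominal(ratings):
    r = np.asarray(ratings, dtype=float)
    cats = np.unique(r[~np.isnan(r)])
    index = {c: i for i, c in enumerate(cats)}
    o = np.zeros((len(cats), len(cats)))            # coincidence matrix
    for u in range(r.shape[1]):                     # loop over units (scans)
        vals = r[~np.isnan(r[:, u]), u]
        m = len(vals)
        if m < 2:
            continue                                # unit has no pairable values
        for i in range(m):
            for j in range(m):
                if i != j:
                    o[index[vals[i]], index[vals[j]]] += 1.0 / (m - 1)
    n_c = o.sum(axis=1)
    n = n_c.sum()
    d_obs = n - np.trace(o)                         # observed disagreement
    d_exp = (n * n - np.sum(n_c ** 2)) / (n - 1)    # expected disagreement
    return 1.0 - d_obs / d_exp

# 7 observers rating presence (1) / absence (0) of an arterial abnormality on 15 scans.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=15)
ratings = np.where(rng.random((7, 15)) < 0.9, truth, 1 - truth)  # ~90% accurate raters

print("Krippendorff's alpha:", round(krippendorff_alpha_nominal(ratings), 3))
```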

  10. Early Parent-infant Interactions: Are Health Visitors' Observations Reliable? Cross-sectional Study of Marte Meo Therapists' Competence

    Kristensen, Ingeborg Hedegaard; Simonsen, Marianne; Trillingsgaard, Tea; Kronborg, Hanne


    The quality of parent-infant relations is essential for infant development, and its assessment by health visitors is potentially important to promote healthy relations. The objectives of this study were to explore health visitors' observation skills assessing parent-infant interaction and their intention, self-efficacy and knowledge in early relationshi...

  11. Reliable assessment of general surgeons' non-technical skills based on video-recordings of patient simulated scenarios

    Spanager, Lene; Beier-Holgersen, Randi; Dieckmann, Peter;


    Nontechnical skills are essential for safe and efficient surgery. The aim of this study was to evaluate the reliability of an assessment tool for surgeons' nontechnical skills, Non-Technical Skills for Surgeons dk (NOTSSdk), and the effect of rater training.

  12. Intra-observer and interobserver reliability of One Leg Stand Test as a measure of postural balance in low back pain patients

    Maribo, Thomas; Iversen, Elena; Andersen, Niels Trolle


    Objective: To determine the absolute and relative reliability of intra-observer and interobserver measurements of postural balance using the One Leg Stand Test in patients with low back pain. Patients and methods: ... Participants were asked to stand for the maximum time, and no further analysis was done. Eyes closed: intra-observer reliability was tested in 21 patients; absolute reliability showed a standard error of the measurement (SEM) of 2.48 s and a minimal detectable change (MDC) of 6.88. The relative reliability was acceptable with an intraclass correlation coefficient (ICC) of 0.86. Interobserver reliability was tested in 27 patients; absolute reliability showed a SEM of 1.42 s and a MDC of 3.95. The relative reliability was acceptable with an ICC of 0.91. Conclusions: The One Leg Stand Test can be used to test postural balance...

  13. Building a reliable measure for unobtrusive observations of street-connecting pedestrian walkways.

    Wilson, Nick; Brander, Bill; Mansoor, Osman D; Pearson, Amber L


    There is evidence that good urban design, including street connectivity, facilitates walking for transport. We therefore piloted a short survey on 118 street-connecting pedestrian walkways in nine suburbs in Wellington, New Zealand's capital. The instrument appeared feasible to use and performed well in terms of inter-rater reliability (median Kappa score for 15 items: 0.88). The study identified favorable features (e.g., railings by steps) as well as problematic ones (e.g., graffiti, litter, and insufficient lighting and signage). There is scope for routinising the monitoring of walkway quality so that citizens and government agencies can work together to enhance urban walkability.

  14. A record of change - Science and elder observations on the Navajo Nation

    Hiza-Redsteer, Margaret M.; Wessells, Stephen M.


    A Record of Change - Science and Elder Observations on the Navajo Nation is a 25-minute documentary about combining observations from Navajo elders with conventional science to determine how tribal lands and culture are affected by climate change. On the Navajo Nation, there is a shortage of historical climate data, making it difficult to assess changing environmental conditions. This video reveals how a team of scientists, anthropologists, and translators combined the rich local knowledge of Navajo elders with recent scientific investigation to effectively document environmental change. Increasing aridity and declining snowfall in this poorly monitored region of the Southwest are accompanied by declining river flow and migrating sand dunes. The observations of Navajo elders verify and supplement this record of change by informing how shifting weather patterns are reflected in Navajo cultural practices and living conditions.

  15. Reliability of semiology description.

    Heo, Jae-Hyeok; Kim, Dong Wook; Lee, Seo-Young; Cho, Jinwhan; Lee, Sang-Kun; Nam, Hyunwoo


    Seizure semiology is important for classifying patients' epilepsy. Physicians usually get most of the seizure information from observers, though there have been few reports on the reliability of the observers' description. This study aims to determine the reliability of observers' description of the semiology. We included 92 patients who had their habitual seizures recorded during video-EEG monitoring. We compared the semiology described by the observers with that recorded on the videotape, and reviewed which characteristics of the observers affected the reliability of their reported data. The classification of seizures and the individual components of the semiology based only on the observer-description were somewhat discordant compared with the findings from the videotape (correct classification, 85%). The descriptions of some ictal behaviors such as oroalimentary automatism, tonic/dystonic limb posturing, and head versions were relatively accurate, but those of motionless staring and hand automatism were less accurate. The directions specified by the observers were relatively accurate. The accuracy of the description was related to the educational level of the observers. Much of the information described by well-educated observers is reliable. However, every physician should keep in mind the limitations of this information and use it cautiously.

  16. Gridded sunshine duration climate data record for Germany based on combined satellite and in situ observations

    Walawender, Jakub; Kothe, Steffen; Trentmann, Jörg; Pfeifroth, Uwe; Cremer, Roswitha


    The purpose of this study is to create a 1 km2 gridded daily sunshine duration data record for Germany covering the period from 1983 to 2015 (33 years), based on satellite estimates of direct normalised surface solar radiation and in situ sunshine duration observations, using a geostatistical approach. The CM SAF SARAH direct normalized irradiance (DNI) satellite climate data record and in situ observations of sunshine duration from 121 weather stations operated by DWD are used as input datasets. The selected period of 33 years is determined by the availability of satellite data. The number of ground stations is limited to 121 because only time series with less than 10% missing observations over the selected period are included, to keep the long-term consistency of the output sunshine duration data record. In the first step, the DNI data record is used to derive sunshine hours by applying the WMO threshold of 120 W/m2 (SDU = DNI ≥ 120 W/m2) and by weighting sunny slots to correct the sunshine length between two instantaneous images for cloud movement. In the second step, a linear regression between SDU and in situ sunshine duration is calculated to adjust the satellite product to the ground observations, and the output regression coefficients are applied to create a regression grid. In the last step, regression residuals are interpolated with ordinary kriging and added to the regression grid. A comprehensive accuracy assessment of the gridded sunshine duration data record is performed by calculating prediction errors (cross-validation routine). "R" is used for data processing. A short analysis of the spatial distribution and temporal variability of sunshine duration over Germany based on the created dataset will be presented. The gridded sunshine duration data are useful for applications in various climate-related studies, agriculture and solar energy potential calculations.
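
    A compressed sketch of the first two processing steps described above (WMO thresholding of DNI slots and the linear adjustment to station sunshine duration) is given below; the half-hourly slot width and the synthetic DNI and station values are assumptions for illustration, and the kriging of the residuals is only indicated in a comment.

```python
# Sketch of the first two steps described above: derive daily sunshine duration (SDU)
# from instantaneous DNI slots with the WMO 120 W/m2 threshold, then fit the linear
# adjustment of satellite SDU to station sunshine duration. All arrays are synthetic.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical half-hourly DNI slots (48 per day) for one grid cell over 30 days.
dni = rng.gamma(shape=1.5, scale=200.0, size=(30, 48))      # W/m2
slot_hours = 0.5
sdu_satellite = (dni >= 120.0).sum(axis=1) * slot_hours     # daily sunshine hours

# Hypothetical co-located station sunshine durations for the same days.
sdu_station = 0.9 * sdu_satellite + 0.8 + rng.normal(0.0, 0.7, size=30)

# Linear regression adjusting the satellite product to the ground observations.
slope, intercept = np.polyfit(sdu_satellite, sdu_station, deg=1)
sdu_adjusted = slope * sdu_satellite + intercept
residuals = sdu_station - sdu_adjusted

print(f"regression: SDU_station ≈ {slope:.2f} * SDU_sat + {intercept:.2f}")
print("residual std (h):", round(residuals.std(ddof=1), 2))
# In the full procedure the residuals at all stations would next be interpolated
# over the 1 km grid with ordinary kriging and added back to the regression grid.
```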

  17. Reliability and validity of the Finnish version of the motor observation questionnaire for teachers

    Asunta, P.; Viholainen, H.; Ahonen, T.; Cantell, M.; Westerholm, J.; Schoemaker, M.M.; Rintala, P.

    Objectives: Observational screening instruments are often used as an effective, economical first step in the identification of children with Developmental Coordination Disorder (DCD). The aim was to investigate the psychometric properties of the Finnish version of the Motor Observation Questionnaire

  18. Moving picture recording and observation of femtosecond light pulse propagation using a rewritable holographic material

    Yamamoto, Seiji; Takimoto, Tetsuya; Tosa, Kazuya; Kakue, Takashi [Graduate School of Science and Technology, Kyoto Institute of Technology, Matsugasaki, Sakyo, Kyoto 606-8585 (Japan); Awatsuji, Yasuhiro, E-mail: [Graduate School of Science and Technology, Kyoto Institute of Technology, Matsugasaki, Sakyo, Kyoto 606-8585 (Japan); Nishio, Kenzo [Advanced Technology Center, Kyoto Institute of Technology, Matsugasaki, Sakyo, Kyoto 606-8585 (Japan); Ura, Shogo [Graduate School of Science and Technology, Kyoto Institute of Technology, Matsugasaki, Sakyo, Kyoto 606-8585 (Japan); Kubota, Toshihiro [Kubota Holography Laboratory, Corporation, Nishihata 34-1-609, Ogura, Uji 611-0042 (Japan)


    We succeeded in recording and observing femtosecond light pulse propagation as a form of moving picture by means of light-in-flight recording by holography using a rewritable holographic material, for the first time. We used a femtosecond pulsed laser whose center wavelength and duration were 800 nm and ≈120 fs, respectively. A photo-conductor plastic hologram was used as a rewritable holographic material. The femtosecond light pulse was collimated and obliquely incident on the diffuser plate. The behavior of the cross-section between the collimated femtosecond light pulse and the diffuser plate was recorded on the photo-conductor plastic hologram. We experimentally obtained a spatially and temporally continuous moving picture of the femtosecond light pulse propagation for 58.3 ps. Meanwhile, we also investigated the rewritable performance of the photo-conductor plastic hologram. As a result, we confirmed that the photo-conductor plastic hologram could be rewritten ten times.

  19. Intra- and inter-observer reliability of MRI examination of intervertebral disc abnormalities in patients with cervical myelopathy

    Braga-Baiak, Andresa [Center for Excellence in Surgical Outcomes, Duke University Medical Center, Durham, NC (United States); Post-graduation Program, Department of Radiology, University of Sao Paulo (Brazil); Shah, Anand [Center for Excellence in Surgical Outcomes, Duke University Medical Center, Durham, NC (United States); Pietrobon, Ricardo [Center for Excellence in Surgical Outcomes, Duke University Medical Center, Durham, NC (United States); Department of Surgery, Duke University Medical Center, Durham, NC (United States); Braga, Larissa [Center for Excellence in Surgical Outcomes, Duke University Medical Center, Durham, NC (United States); University of Nebraska Medical Center, Lincoln NE (United States); Neto, Arnolfo Carvalho [Clinica DAPI, Curitiba (Brazil); Section of Diagnostic Radiology, Department of Internal Medicine, Universidade Federal do Parana (Brazil); Cook, Chad [Center for Excellence in Surgical Outcomes, Duke University Medical Center, Durham, NC (United States); Division of Physical Therapy, Duke University Medical Center, Durham, NC (United States)], E-mail:


    Purpose: Intervertebral cervical disc herniation (CDH) is a relatively common disorder that can coexist with degenerative changes to worsen cervicogenic myelopathy. Despite the frequent disc abnormalities found in asymptomatic populations, magnetic resonance imaging (MRI) is considered excellent at detecting cervical spine myelopathy (CSM) associated with disc abnormality. The objective of this study was to investigate the intra- and inter-observer reliability of MRI detection of CSM in subjects who also had co-existing intervertebral disc abnormalities. Materials and methods: Seven experienced radiologists twice reviewed the MRI examinations of 10 patients with clinically and/or imaging-determined myelopathy. MRI assessment was performed individually, with and without operational guidelines. A Fleiss Kappa statistic was used to evaluate the intra- and inter-observer agreement. Results: The study found high intra-observer percent agreement but relatively low Kappa values on selected variables. Inter-observer reliability was also low and neither was improved by the operational guidelines. We believe that those low values may be associated with the base rate problem of Kappa. Conclusion: This study demonstrated high intra-observer percent agreement in MR examination for intervertebral disc abnormalities in patients with underlying cervical myelopathy, but differing levels of intra- and inter-observer Kappa agreement among seven radiologists.

  20. Reliability of lower leg proximal end and forefoot kinematics during different paces of barefoot racewalking on a treadmill using a motion recorder (MVP-RF8-BC).

    Wang, Hongzhao; Huo, Ming; An, Xiangde; Li, Yong; Onoda, Ko; Li, Desheng; Huang, Qiuchen; Maruyama, Hitoshi


    [Purpose] This study was performed to investigate the changes in lower leg proximal end and forefoot kinematics, and the reliability of their measurement, during different paces of barefoot racewalking on a treadmill. [Subjects] Eleven junior male racewalkers participated in this study. [Methods] To identify changes in lower leg proximal end and forefoot kinematics during different paces of barefoot racewalking on a treadmill, a wireless motion recorder (MVP-RF8-BC) was used. Intraclass correlation coefficients (ICC(1,2)) were used to estimate reliability. [Results] There were significant differences in the lower leg proximal end and forefoot maximum medial/lateral rotations at a pace of 9 km/h compared with those at a pace of 5 km/h. The intra-examiner reliability estimates ranged from 0.82 and 0.89 to 0.87 and 0.93 for lower leg proximal end inversion/eversion rotation and medial/lateral rotation, and from 0.92 and 0.84 to 0.93 and 0.91 for forefoot inversion/eversion rotation and medial/lateral rotation. [Conclusion] We conclude that the lower leg proximal end and forefoot kinematics of barefoot racewalking on a treadmill are influenced by different paces and that assessment of lower leg proximal end and forefoot kinematics by means of the wireless motion recorder (MVP-RF8-BC) is adequately reliable. This information may be useful for determining exercise prescriptions.

  1. Is surgeons' experience important on intra- and inter-observer reliability of classifications used for adult femoral neck fracture?

    Turgut, Ali; Kumbaracı, Mert; Kalenderer, Önder; İlyas, Gökhan; Bacaksız, Tayfun; Karapınar, Levent


    Purpose: To evaluate whether surgeons' experience affects inter- and intra-observer reliability for the most commonly used classification systems for femoral neck fractures. Material and Methods: A PowerPoint presentation was prepared with 107 slides, each showing an antero-posterior radiograph of a femoral neck fracture. Five residents, 5 orthopaedic surgeons and 5 senior orthopaedic surgeons reviewed this presentation and classified the fractures according to the Garden, Pauwels and AO classifications. ...

  2. Challenges in assessing the contribution of climate change to observed record-breaking heat waves

    Perlwitz, J.; Xu, T.; Quan, X.; Hoerling, M. P.; Dole, R. M.


    Record-setting heat waves have large impacts on public health and society due to increased mortality rates, wildfires, property damage and agricultural losses. There is increasing interest in understanding the causes of such extreme events, including the role of climate change. We use the example of the link between atmospheric blocking frequency and summertime seasonal temperature extremes to address some challenges in determining the relative contributions of natural variability and climate change to the occurrence and magnitude of extreme climate-related events. We utilize the 62-year record of observational data from 1960 to 2011 and long integrations with the NCAR's Community Climate System Model Version 4 (CCSM4). This climate model represents atmospheric blocking frequency and related weather features over the European/Ural region well. Both observations and long climate integrations suggest that seasonal temperature extremes over the Northern European/Ural region are strongly conditioned by blocking. We illustrate that one challenge in climate event attribution is related to the fact that very long records are necessary to sufficiently sample the frequency of occurrence of the principal driver of a record-setting climate event. We further illustrate that there is a strong regional dependence on how the link between blocking frequency and extreme temperature anomalies is modified due to climate change, suggesting that event attribution results are often not transferable from one region to another.

  3. Naturalistic observation of health-relevant social processes: the electronically activated recorder methodology in psychosomatics.

    Mehl, Matthias R; Robbins, Megan L; Deters, Fenne Große


    This article introduces a novel observational ambulatory monitoring method called the electronically activated recorder (EAR). The EAR is a digital audio recorder that runs on a handheld computer and periodically and unobtrusively records snippets of ambient sounds from participants' momentary environments. In tracking moment-to-moment ambient sounds, it yields acoustic logs of people's days as they naturally unfold. In sampling only a fraction of the time, it protects participants' privacy and makes large observational studies feasible. As a naturalistic observation method, it provides an observer's account of daily life and is optimized for the objective assessment of audible aspects of social environments, behaviors, and interactions (e.g., habitual preferences for social settings, idiosyncratic interaction styles, subtle emotional expressions). This article discusses the EAR method conceptually and methodologically, reviews prior research with it, and identifies three concrete ways in which it can enrich psychosomatic research. Specifically, it can (a) calibrate psychosocial effects on health against frequencies of real-world behavior; (b) provide ecological observational measures of health-related social processes that are independent of self-report; and (c) help with the assessment of subtle and habitual social behaviors that evade self-report but have important health implications. An important avenue for future research lies in merging traditional self-report-based ambulatory monitoring methods with observational approaches such as the EAR to allow for the simultaneous yet methodologically independent assessment of inner, experiential aspects (e.g., loneliness) and outer, observable aspects (e.g., social isolation) of real-world social processes to reveal their unique effects on health.

  4. Development and reliability of an observation method to assess food intake of young children in child care.

    Ball, Sarah C; Benjamin, Sara E; Ward, Dianne S


    To our knowledge, a direct observation protocol for assessing dietary intake among young children in child care has not been published. This article reviews the development and testing of a diet observation system for child care facilities that occurred during a larger intervention trial. Development of this system was divided into five phases, done in conjunction with a larger intervention study: (a) protocol development, (b) training of field staff, (c) certification of field staff in a laboratory setting, (d) implementation in a child-care setting, and (e) certification of field staff in a child-care setting. During the certification phases, methods were used to assess the accuracy and reliability of all observers at estimating types and amounts of food and beverages commonly served in child care. Tests of agreement show strong agreement among five observers, as well as strong accuracy between the observers and 20 measured portions of foods and beverages with a mean intraclass correlation coefficient value of 0.99. This structured observation system shows promise as a valid and reliable approach for assessing dietary intake of children in child care and makes a valuable contribution to the growing body of literature on the dietary assessment of young children.

  5. Insights from Synthetic Star-forming Regions: I. Reliable Mock Observations from SPH Simulations

    Koepferl, Christine M; Dale, James E; Biscani, Francesco


    Through synthetic observations of a hydrodynamical simulation of an evolving star-forming region, we assess how the choice of observational techniques affects the measurements of properties which trace star formation. Testing and calibrating observational measurements requires synthetic observations which are as realistic as possible. In this part of the paper series (Paper I), we explore different techniques for how to map the distributions of densities and temperatures from the particle-based simulations onto a Voronoi mesh suitable for radiative transfer and consequently explore their accuracy. We further test different ways to set up the radiative transfer in order to produce realistic synthetic observations. We give a detailed description of all methods and ultimately recommend techniques. We have found that the flux around 20 microns is strongly overestimated when blindly coupling the dust radiative transfer temperature with the hydrodynamical gas temperature. We find that when instead assuming a consta...

  6. New record of nuptial gift observed in Trechalea amazonica (Araneae, Lycosoidea, Trechaleidae

    Estevam Luís Cruz da Silva


    The first record of a nuptial gift in Trechalea amazonica F.O.P.-Cambridge, 1903, is herein presented. The observations were made in Oriximiná, Pará, northern Brazil. Two males were found on tree trunks near the water, each holding in the chelicerae a small prey item wrapped in silk. This is the second confirmed observation of nuptial gift behavior in the family Trechaleidae and the first in the genus Trechalea Thorell, 1869; the behavior was previously recorded in Paratrechalea Carico, 2005 from southern Brazil. This new observation could be used in phylogenetic and evolutionary studies of this poorly studied spider family.

  7. Review: Moisture loading—the hidden information in groundwater observation well records

    van der Kamp, Garth; Schmidt, Randy


    Changes of total moisture mass above an aquifer such as snow accumulation, soil moisture, and storage at the water table, represent changes of mechanical load acting on the aquifer. The resulting moisture-loading effects occur in all observation well records for confined aquifers. Deep observation wells therefore act as large-scale geological weighing lysimeters, referred to as "geolysimeters". Barometric pressure effects on groundwater levels are a similar response to surface loading and are familiar to every hydrogeologist dealing with the "barometric efficiency" of observation wells. Moisture-loading effects are small and generally not recognized because they are obscured by hydraulic head fluctuations due to other causes, primarily barometric pressure changes. For semiconfined aquifers, long-term moisture-loading effects may be dissipated and obscured by transient flow through overlying aquitards. Removal of barometric and earth tide effects from observation well records allows identification of moisture loading and comparison with hydrological observations, and also comparison with the results of numerical models that can account for transient groundwater flow.
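
    As a small illustration of the barometric correction this approach relies on, the sketch below estimates a well's barometric efficiency by regressing first differences of hydraulic head on first differences of barometric pressure, then removes the barometric response; the synthetic hourly series and the efficiency value of 0.6 are assumptions, and earth-tide removal is omitted.

```python
# Minimal sketch of the barometric correction mentioned above: estimate barometric
# efficiency from first differences (insensitive to slow trends) and remove the
# barometric response to expose slower loading signals. All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24 * 365)

# Synthetic hourly records: barometric pressure and well water level, in metres of water.
baro = 10.2 + 0.05 * np.sin(2 * np.pi * hours / (24 * 14)) + 0.02 * rng.standard_normal(hours.size)
loading = 0.0003 * np.cumsum(rng.standard_normal(hours.size))   # slow moisture-loading signal
true_be = 0.6                                                   # efficiency used to build the data
head = 25.0 + loading - true_be * (baro - baro.mean()) + 0.005 * rng.standard_normal(hours.size)

d_head, d_baro = np.diff(head), np.diff(baro)
be = -np.polyfit(d_baro, d_head, deg=1)[0]          # estimated barometric efficiency
head_corrected = head + be * (baro - baro.mean())   # barometric response removed

print(f"estimated barometric efficiency: {be:.2f} (true value {true_be})")
```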

  8. Observer reliability in assessment of mitotic activity and MIB-1-determined proliferation rate in pediatric sarcomas

    Molenaar, W M; Plaat, B E; Berends, E R; te Meerman, G J


    In hematoxylin-eosin-stained sections of 20 pediatric sarcomas the mitotic index was assessed by four experienced pathologists and four less-experienced observers without prior instructions. In adjacent sections immunolabeled for MIB-1, the proliferation index was assessed as the estimated percentag

  9. Once is not enough: Establishing reliability criteria for feedback and evaluative decisions based on classroom observation

    van der Lans, Rikkert M.; van de Grift, Wim J.C.M.; van Veen, Klaas; Fokkens-Bruinsma, Marjon


    Implementation of effective teacher evaluation procedures is a global challenge in which lowering the chances that teachers receive inaccurate evaluations is a pertinent goal. This study investigates the minimum number of observations required to guarantee that teachers receive feedback with modest

  10. Medical record weight (MRW): a new reliable predictor of hospital stay, morbidity and mortality in the hip fracture population?

    Calpin, P


    We sought to compare the weight of patients' medical records (MRW) with standardised surgical risk scoring systems in predicting postoperative hospital stay, morbidity, and mortality in patients with hip fracture. Patients admitted for surgical treatment of a newly diagnosed hip fracture over a 3-month period were enrolled. Patients with documented morbidity or mortality had significantly heavier medical records. The MRW was equivalent to the age-adjusted Charlson co-morbidity index and better than the American Society of Anaesthesiologists physical status score (ASA), the Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity (POSSUM) and the Portsmouth-POSSUM score (P-POSSUM) in correlation with length of hospital admission, p = .003, 95% CI [.15 to .65]. Using logistic regression analysis, MRW was as good as, if not better than, the other scoring systems at predicting postoperative morbidity and 90-day mortality. Medical record weight is as good as, or better than, validated surgical risk scoring methods. Larger, multicentre studies are required to validate its use as a surgical risk prediction tool, and it may in future be supplanted by a digital measure of electronic record size. Given its ease of use and low cost, it could easily be used in trauma units globally.

  11. Observer-based reliable stabilization of uncertain linear systems subject to actuator faults, saturation, and bounded system disturbances.

    Fan, Jinhua; Zhang, Youmin; Zheng, Zhiqiang


    A matrix inequality approach is proposed to reliably stabilize a class of uncertain linear systems subject to actuator faults, saturation, and bounded system disturbances. The system states are assumed immeasurable, and a classical observer is incorporated for observation to enable state-based feedback control. Both the stability and stabilization of the closed-loop system are discussed and the closed-loop domain of attraction is estimated by an ellipsoidal invariant set. The resultant stabilization conditions in the form of matrix inequalities enable simultaneous optimization of both the observer gain and the feedback controller gain, which is realized by converting the non-convex optimization problem to an unconstrained nonlinear programming problem. The effectiveness of proposed design techniques is demonstrated through a linearized model of F-18 HARV around an operating point.
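
    The sketch below only illustrates the observer-based feedback structure discussed above (a Luenberger observer driving a state-feedback gain); it uses plain pole placement on an invented second-order plant rather than the paper's matrix-inequality synthesis, and it ignores actuator faults, saturation and disturbances.

```python
# Generic observer-based state-feedback sketch: a Luenberger observer reconstructs the
# immeasurable state, and the control uses the estimate. Pole placement stands in for
# the LMI-based design of the paper; the plant matrices are hypothetical.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # unstable open-loop plant (hypothetical)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix          # feedback gain, u = -K x_hat
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T    # observer gain

dt, steps = 0.001, 8000
x = np.array([[1.0], [0.0]])          # true (unmeasured) state
x_hat = np.zeros((2, 1))              # observer state
for _ in range(steps):
    y = C @ x                         # only the output is measured
    u = -K @ x_hat                    # feedback from the estimated state
    x = x + dt * (A @ x + B @ u)
    x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))

print("final true state:", x.ravel())          # should be near the origin
print("final estimate:  ", x_hat.ravel())
```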

  12. Validity and inter-rater reliability of medio-lateral knee motion observed during a single-limb mini squat

    Ageberg, Eva; Bennell, Kim L; Hunt, Michael A


    Muscle function may influence the risk of knee injury and outcomes following injury. Clinical tests, such as a single-limb mini squat, resemble conditions of daily life and are easy to administer. Fewer squats per 30 seconds indicate poorer function. However, the quality of movement, such as the medio-lateral knee motion, may also be important. The aim was to validate an observational clinical test for assessing the medio-lateral knee motion, using a three-dimensional (3-D) motion analysis system. In addition, the inter-rater reliability was evaluated....

  13. Reliability of different facial measurements for determination of vertical dimension of occlusion in edentulous using accepted facial dimensions recorded from dentulous subjects.

    Nagpal, Abhishek; Parkash, Hari; Bhargava, Akshay; Chittaranjan, B


    The study was undertaken to evaluate the reliability of different facial measurements for determination of vertical dimension of occlusion in edentulous subjects using accepted facial dimensions recorded from dentulous subjects. The hypothesis was that facial measurements can be used to obtain the vertical dimension of occlusion for edentulous patients where no pre-extraction records exist. A total of 180 subjects were selected in the age group of 50-60 years, consisting of 75 dentate male and 75 dentate female subjects for whom different facial measurements were recorded including vertical dimension of occlusion and rest, and 15 edentulous male and 15 edentulous female subjects for whom all the facial measurements were recorded including the vertical dimension of rest and occlusion following construction of upper and lower complete dentures. The distance from the left outer canthus of the eye to the angle of the mouth and the right Ear-Eye distance were found to be valuable adjuncts in the determination of occlusal vertical dimension. The Glabella-Subnasion distance, the Pupil-Stomion distance, the Pupil-Rima Oris distance and the distance between the two Angles of the Mouth did not have a significant role in the determination of the occlusal vertical dimension. The vertical dimension can be determined with reasonable accuracy by utilizing other facial measurements for patients for whom no pre-extraction records exist.

  14. Observing the Testing Effect using Coursera Video-Recorded Lectures: A Preliminary Study.

    Yong, Paul Zhihao; Lim, Stephen Wee Hun


    We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a) studied an instructional video-recorded lecture four times, (b) studied the lecture three times and took one recall test, or (c) studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than did repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week after. The testing effect was observed using Coursera lectures. Future directions are documented.

  15. Observed periodicities and the spectrum of field variations in Holocene magnetic records

    Panovska, S.; Finlay, Chris; Hirt, A.M.


    In order to understand mechanisms that maintain and drive the evolution of the Earth's magnetic field, a characterization of its behavior on time scales of centuries to millennia is required. We have conducted a search for periodicities in Holocene sediment magnetic records, by applying three......, globally observed, periods. Rather we find a continuous broadband spectrum, with a slope corresponding to a power law with exponent of -2.3 ± 0.6 for the period range between 300 and 4000 yr. This is consistent with the hypothesis that chaotic convection in the outer core drives the majority of secular...
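
    The broadband power-law fit quoted above can be illustrated with a short spectral-slope estimate: the sketch below computes a Welch periodogram of a synthetic random-walk series (whose true slope is roughly -2) and fits a power law over the 300-4000 yr period band. The sampling interval and record length are arbitrary choices, not properties of the sediment records.

```python
# Sketch of a spectral-slope estimate: periodogram of a synthetic series and a
# power-law fit P(f) ~ f**(-s) over a fixed period band. The input is random-walk
# noise, not a paleomagnetic record.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
dt_yr = 10.0                                   # one sample every 10 years
n = 1200                                       # 12,000-year synthetic record
series = np.cumsum(rng.standard_normal(n))     # Brownian noise: slope ~ -2 by construction

freq, power = welch(series, fs=1.0 / dt_yr, nperseg=512)
periods = 1.0 / freq[1:]                       # skip the zero-frequency bin
band = (periods >= 300.0) & (periods <= 4000.0)

slope = np.polyfit(np.log10(freq[1:][band]), np.log10(power[1:][band]), deg=1)[0]
print("fitted spectral slope:", round(slope, 2))   # expect roughly -2 for this input
```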

  16. On the reliability of observational measurements of column density probability distribution functions

    Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S


    Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\sigma_{noise}$ values below 40% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...
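
    A toy numerical counterpart of the noise effect analysed above: draw a log-normal column-density sample, add Gaussian noise with an rms expressed as a fraction of N_peak, and watch the fitted log-normal width degrade. This is a simulation stand-in, with invented parameters, for the analytic convolution treated in the paper.

```python
# Toy illustration: add observational noise of increasing rms to a log-normal column
# density sample and measure how the fitted PDF width degrades.
import numpy as np

rng = np.random.default_rng(5)
n_peak = 1.0                                   # typical column density, arbitrary units
true_sigma = 0.4                               # width of ln(N / N_peak)
n_true = n_peak * rng.lognormal(mean=0.0, sigma=true_sigma, size=200_000)

for noise_frac in (0.0, 0.2, 0.4, 0.8):
    noisy = n_true + rng.normal(0.0, noise_frac * n_peak, size=n_true.size)
    positive = noisy[noisy > 0]                # negative values cannot enter a log-PDF
    fitted_sigma = np.log(positive / n_peak).std()
    print(f"noise rms = {noise_frac:.1f} * N_peak  ->  fitted PDF width = {fitted_sigma:.2f}")
```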

  17. Intra- and inter-observer reliability of the application of the cellulite severity scale to a Spanish female population.

    De La Casa Almeida, M; Suarez Serrano, C; Jiménez Rejano, J J; Chillón Martínez, R; Medrano Sánchez, E M; Rebollo Roldán, J


    The 'Hexsel, dal'Forno and Hexsel Cellulite Severity Scale' (CSS) was developed to evaluate cellulite with an objective and easy-to-apply tool. Objective: To study the intra- and inter-observer reliability of the CSS in a Spanish female population by evaluating patients' cellulite through photographs of the overall gluteofemoral zone, in contrast to its creators, who distinguished between buttocks and thigh. The Cellulite Severity Scale was applied to 27 women, evaluating gluteofemoral cellulite and differentiating between left and right sides. Evaluations were made by three expert examiners, each at three time points with a 1-week separation. The variables were the five CSS dimensions (number of evident depressions; depth of depressions; morphological appearance of skin surface alterations; grade of laxity, flaccidity, or sagging skin; and the Nürnberger and Müller classification scale) and the overall CSS score. Cronbach's alpha, intra-class correlation and item-total correlation were analysed. Cronbach's alpha values were 0.951 (right) and 0.944 (left). In the intra-observer reliability analysis, the intra-class correlation coefficient ranged from 0.993 to 0.999. The Cellulite Severity Scale has excellent reliability and internal consistency when used to evaluate cellulite on the buttocks and back of the thighs considered together. Nevertheless, the dimension grade of laxity, flaccidity or sagging skin does not contribute positively to the final consistency of the scale. This dimension needs to be analysed in greater depth in future studies. © 2012 The Authors. Journal of the European Academy of Dermatology and Venereology © 2012 European Academy of Dermatology and Venereology.
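
    For reference, the internal-consistency statistic quoted above (Cronbach's alpha) can be computed as below from a subjects-by-items score matrix; the 27-by-5 matrix here is invented and only mimics five correlated CSS dimensions.

```python
# Cronbach's alpha from a subjects-by-items score matrix (invented example data).
import numpy as np

def cronbach_alpha(items):
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(6)
severity = rng.uniform(0, 3, size=(27, 1))                        # latent severity per subject
items = np.clip(np.rint(severity + rng.normal(0, 0.4, size=(27, 5))), 0, 3)  # five 0-3 item scores

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```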

  18. Using Grounded Theory to Analyze Qualitative Observational Data that is Obtained by Video Recording

    Colin Griffiths


    This paper presents a method for the collection and analysis of qualitative data that is derived by observation and that may be used to generate a grounded theory. Video recordings were made of the verbal and non-verbal interactions of people with severe and complex disabilities and the staff who work with them. Three dyads composed of a student/teacher or carer and a person with a severe or profound intellectual disability were observed in a variety of different activities that took place in a school. Two of these recordings yielded 25 minutes of video, which was transcribed into narrative format. The nature of the qualitative micro data that was captured is described and the fit between such data and classic grounded theory is discussed. The strengths and weaknesses of the use of video as a tool to collect data that is amenable to analysis using grounded theory are considered. The paper concludes by suggesting that using classic grounded theory to analyze qualitative data that is collected using video offers a method that has the potential to uncover and explain patterns of non-verbal interactions that were not previously evident.

  19. On the non-equivalence of observables in phase-space reconstructions from recorded time series

    Letellier, C.; Maquet, J.; LeSceller, L.; Gouesbet, G.; Aguirre, L. A.


    In practical problems of phase-space reconstruction, it is usually the case that the reconstruction is much easier using a particular recorded scalar variable. This seems to contradict the general belief that all variables of a dynamical system are equivalent in phase-space reconstruction problems. This paper will argue that, in many cases, the choice of a particular scalar time series from which to reconstruct the original dynamics could be critical. It is argued that different dynamical variables do not provide the same level of information (observability) of the underlying dynamics and, as a consequence, the quality of a global reconstruction critically depends on the recorded variable. Examples in which the choice of observables is critical are discussed and the level of information contained in a given variable is quantified in the case where the original system is known. A clear example of such a situation arises in the Rössler system for which the performance of a global vector field reconstruction technique is investigated using time series of variables x, y or z, taken one at a time.
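
    A minimal way to reproduce the kind of comparison described above is to integrate the Rössler system and build delay embeddings from each recorded variable separately, as sketched below; the parameter values, delay and integration span are the usual textbook choices rather than values taken from the paper, which argues that reconstructions from z are markedly poorer than those from x or y.

```python
# Integrate the Roessler system and build three-dimensional delay embeddings from each
# scalar variable in turn; downstream analysis would then compare reconstruction quality.
import numpy as np
from scipy.integrate import solve_ivp

def roessler(t, state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

t_eval = np.arange(0.0, 300.0, 0.01)
sol = solve_ivp(roessler, (0.0, 300.0), [1.0, 1.0, 0.0], t_eval=t_eval, rtol=1e-8)

def delay_embed(series, delay, dim=3):
    """Delay-coordinate vectors [s(t), s(t + tau), ..., s(t + (dim-1) tau)]."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])

tau = 150                      # delay of 1.5 time units at dt = 0.01, a typical choice
for name, series in zip("xyz", sol.y):
    emb = delay_embed(series, tau)
    print(f"embedding from {name}: shape {emb.shape}, "
          f"coordinate std = {emb.std(axis=0).round(2)}")
```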

  20. Assimilation of SWOT Observations for the Creation of Spatially and Temporally Consistent Discharge Records

    Fisher, C. K.; Pan, M.; Wood, E. F.


    The Surface Water and Ocean Topography (SWOT) mission is designed to provide global estimates of water surface elevation, slope and discharge from space. This mission will provide increased spatial and temporal coverage compared to current altimeters. However, the temporal sampling is less frequent than current in-situ discharge observations. Thus, there is a need for methods that can utilize spatially and temporally inconsistent observations of discharge to reconstruct fields that are consistent in time and space. Using the Inverse Streamflow Routing (ISR) model of Pan and Wood [2013], discharge records are derived for the Ohio River basin using data assimilation with a fixed interval Kalman smoother. ISR utilizes observed (or SWOT retrieved) discharge values at discrete (gauge) locations to generate spatially and temporally distributed fields of runoff by inverting a linear routing model. These runoff fields are then routed to produce river discharge estimates throughout the basin. Three experiments have been carried out to evaluate assimilating SWOT observations. The experiments are: (1) assimilating 75 in-situ gauges only, (2) using 50 in-situ gauges and 25 SWOT-retrieved "gauges", and (3) using 75 SWOT-retrieved "gauges" only. The estimated discharges are compared to in-situ USGS gauge data from 2006 to 2009. Results show that the ISR assimilation method can be used to effectively reproduce the spatial and temporal dynamics of discharge in each of the experiments. In particular, the results of the SWOT-only data experiment indicate that despite the coarse temporal SWOT overpasses (0 to 3 over a 22 day period) significant discharge information throughout the entire basin can be retrieved. The ISR-SWOT assimilation approach will provide extremely useful discharge estimates, especially in sparsely gauged regions where spatially and temporally consistent discharge records are most valuable. Pan, M; Wood, E F 2013 Inverse streamflow routing, HESS 17(11):4577-4588
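
    The smoothing machinery referred to above can be illustrated with a generic fixed-interval (Rauch-Tung-Striebel) Kalman smoother on a toy scalar state-space model with intermittent observations, sketched below; the random-walk state, the noise levels and the 15% observation coverage are invented stand-ins for the ISR routing model and the sparse SWOT sampling.

```python
# Generic fixed-interval (RTS) Kalman smoother on a toy scalar model: a random-walk
# state observed only intermittently, mimicking sparse overpass sampling.
import numpy as np

rng = np.random.default_rng(7)
T = 200
F, Q, H, R = 1.0, 0.05, 1.0, 0.2          # random-walk dynamics, noisy observations

truth = np.cumsum(rng.normal(0.0, np.sqrt(Q), size=T))
obs = truth + rng.normal(0.0, np.sqrt(R), size=T)
observed = rng.random(T) < 0.15            # only ~15% of time steps are observed

# Forward Kalman filter (store predicted and filtered moments for the backward pass).
xf, Pf, xp, Pp = np.zeros(T), np.zeros(T), np.zeros(T), np.zeros(T)
x, P = 0.0, 1.0
for t in range(T):
    x, P = F * x, F * P * F + Q            # predict
    xp[t], Pp[t] = x, P
    if observed[t]:                        # update only where an observation exists
        K = P * H / (H * P * H + R)
        x, P = x + K * (obs[t] - H * x), (1.0 - K * H) * P
    xf[t], Pf[t] = x, P

# Backward RTS smoothing pass.
xs = xf.copy()
for t in range(T - 2, -1, -1):
    G = Pf[t] * F / Pp[t + 1]
    xs[t] = xf[t] + G * (xs[t + 1] - xp[t + 1])

print("filter RMSE:  ", round(np.sqrt(np.mean((xf - truth) ** 2)), 3))
print("smoother RMSE:", round(np.sqrt(np.mean((xs - truth) ** 2)), 3))
```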

  1. Observation of Amorphous Recording Marks Using Reflection-Mode Near-Field Scanning Optical Microscope Supported by Optical Interference Method

    Sakai, Masaru; Mononobe, Shuji; Yusu, Keiichiro; Tadokoro, Toshiyasu; Saiki, Toshiharu


    A signal enhancing technique for a reflection-mode near-field scanning optical microscope (NSOM) is proposed. Optical interference between the signal light, from an aperture at the tip of a tapered optical fiber, and the reflected light, from a metallic coating around the aperture, enhances the signal intensity. We used a rewritable high-definition digital versatile disc (HD DVD) with dual recording layers as a sample medium, and demonstrated observation of amorphous recording marks on the semitransparent (the first) recording layer. In spite of low optical contrast between the crystal region and the amorphous region on this layer, we successfully observed recording marks with good contrast.

  2. Reliability and validity of pressure and temporal parameters recorded using a pressure-sensitive insole during running.

    Mann, Robert; Malisoux, Laurent; Brunner, Roman; Gette, Paul; Urhausen, Axel; Statham, Andrew; Meijer, Kenneth; Theisen, Daniel


    Running biomechanics has received increasing interest in recent literature on running-related injuries, calling for new, portable methods for large-scale measurements. Our aims were to define running strike pattern based on output of a new pressure-sensitive measurement device, the Runalyser, and to test its validity regarding temporal parameters describing running gait. Furthermore, reliability of the Runalyser measurements was evaluated, as well as its ability to discriminate different running styles. Thirty-one healthy participants (30.3 ± 7.4 years, 1.78 ± 0.10 m and 74.1 ± 12.1 kg) were involved in the different study parts. Eleven participants were instructed to use a rearfoot (RFS), midfoot (MFS) and forefoot (FFS) strike pattern while running on a treadmill. Strike pattern was subsequently defined using a linear regression (R² = 0.89) between foot strike angle, as determined by motion analysis (1000 Hz), and strike index (SI, point of contact on the foot sole, as a percentage of foot sole length), as measured by the Runalyser. MFS was defined by the 95% confidence interval of the intercept (SI=43.9-49.1%). High agreement (overall mean difference 1.2%) was found between stance time, flight time, stride time and duty factor as determined by the Runalyser and a force-measuring treadmill (n=16 participants). Measurements of the two devices were highly correlated (R ≥ 0.80) and not significantly different. Test-retest intra-class correlation coefficients for all parameters were ≥ 0.94 (n=14 participants). Significant differences between running styles were detected regarding SI, stance time and stride time (n=24 participants). The Runalyser is suitable for, and easily applicable in, large-scale studies on running biomechanics.

  3. Assessing the reliability and validity of direct observation and traffic camera streams to measure helmet and motorcycle use.

    Zaccaro, Heather N; Carbone, Emily C; Dsouza, Nishita; Xu, Michelle R; Byrne, Mary C; Kraemer, John D


    There is a need to develop motorcycle helmet surveillance approaches that are less labour intensive than direct observation (DO), which is the commonly recommended but never formally validated approach, particularly in developing settings. This study sought to assess public traffic camera feeds as an alternative to DO, in addition to the reliability of DO under field conditions. DO had high inter-rater reliability, κ=0.88 and 0.84, respectively, for cycle type and helmet type, which reinforces its use as a gold standard. However, traffic camera-based data collection was found to be unreliable, with κ=0.46 and 0.53 for cycle type and helmet type. When bicycles, motorcycles and scooters were classified based on traffic camera streams, only 68.4% of classifications concurred with those made via DO. Given the current technology, helmet surveillance via traffic camera streams is infeasible, and there remains a need for innovative traffic safety surveillance approaches in low-income urban settings.

  4. The dynamic relationship between emotional and physical states: an observational study of personal health records

    Lee, Ye-Seul; Jung, Won-Mo; Jang, Hyunchul; Kim, Sanghyun; Chung, Sun-Yong; Chae, Younbyoung


    Objectives: Recently, there has been increasing interest in preventing and managing diseases both inside and outside medical institutions, and these concerns have supported the development of the individual Personal Health Record (PHR). Thus, the current study created a mobile platform called “Mind Mirror” to evaluate psychological and physical conditions and investigated whether PHRs would be a useful tool for assessment of the dynamic relationship between the emotional and physical conditions of an individual. Methods: Mind Mirror was used to collect 30 days of observational data about emotional valence and the physical states of pain and fatigue from 20 healthy participants, and these data were used to analyze the dynamic relationship between emotional and physical conditions. Additionally, based on the cross-correlations between these three parameters, a multilevel multivariate regression model (mixed linear model [MLM]) was implemented. Results: The strongest cross-correlation between emotional and physical conditions was at lag 0, which implies that emotion and body condition changed concurrently. In the MLM, emotional valence was negatively associated with fatigue (β = −0.233, P < 0.001), fatigue was positively associated with pain (β = 0.250, P < 0.001), and pain was positively associated with fatigue (β = 0.398, P < 0.001). Conclusion: Our study showed that emotional valence and one’s physical condition negatively influenced one another, while fatigue and pain positively affected each other. These findings suggest that the mind and body interact instantaneously, in addition to providing a possible solution for the recording and management of health using a PHR on a daily basis. PMID:28223814

  5. Observed diurnal variation changes of Jakarta precipitation from 144 available meteorological records

    Siswanto, Siswanto; van Oldenborgh, Geert Jan; van den Hurk, Bart; Jilderda, Rudmer


    Using almost 114 years of hourly and daily meteorological records from the Jakarta Observatory, the temporal heterogeneity of climate trends and variability over Jakarta, Indonesia, has been studied. The analyses showed that the number of wet days decreased between 1880 and 2010, while precipitation exceeding 50 mm slightly increased. An increasing trend of heavy rainfall in the 80th and 95th percentiles between April and September was detected. The diurnal variation of Jakarta precipitation and temperature changed markedly. In the wet season (DJF), morning rainfall has increased in intensity, while in other seasons a delayed late-afternoon rainfall peak is observed. The diurnal variation of night-time temperature increased considerably while daytime temperature remained similar. Changes in the temporal characteristics of light and heavy precipitation, as well as in the diurnal variation of precipitation and temperature, lead to hypotheses concerning anthropogenic influence. Some theoretical arguments on the urban heat island and aerosol effects on precipitation could be linked to our results. Jakarta is a metropolitan city whose development is characterized by a mixing of many different land uses and economic activities, including large-scale housing projects, industrial estates, and agricultural activities. In the future, the separation of the local response to large-scale and local changes will be investigated.

  6. Observing the Testing Effect using Coursera Video-recorded Lectures: A Preliminary Study

    Paul Zhihao eYONG


    We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a) studied an instructional video-recorded lecture four times, (b) studied the lecture three times and took one recall test, or (c) studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than did repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week after. The testing effect was observed using Coursera lectures. Future directions are documented.

  7. Reliability of Heart Rate Variability Analysis by Using Electrocardiogram Recorded Unrestrainedly from an Automobile Steering-Wheel

    Osaka, Motohisa; Murata, Hiroshige; Tateoka, Katsuhiko; Katoh, Takao


    Some traffic accidents are assumed to be caused by cardiac events during driving, which are thought to be induced by an imbalance of autonomic nervous activity. Such imbalances can be assessed by analyzing heart rate variability. Therefore, we developed a new steering-wheel electrocardiogram system with software to remove noise. We compared the trends of sympathetic and parasympathetic nerve activities measured from the steering-wheel electrocardiograms with those recorded simultaneously from chest leads. For each parameter (instantaneous heart rate, and the low- and high-frequency components of heart rate variability), in all cases the trend from the steering-wheel electrocardiogram resembled that from the chest-lead electrocardiogram. In 3 of 7 subjects, the trend of LF/HF showed a strong relationship between the steering-wheel electrocardiogram and the chest-lead electrocardiogram. Our system opens the door to a new strategy for keeping drivers out of risk by providing notification while driving.
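
    The LF/HF trend compared above can be obtained from detected R-peaks roughly as sketched below: resample the RR-interval series onto an even grid, estimate its power spectral density, and integrate over the standard LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. The synthetic RR series, the 4 Hz resampling rate and the Welch settings are illustrative assumptions, not details of the described system.

```python
# LF/HF ratio from RR intervals: resample the tachogram evenly, estimate the PSD,
# and integrate the standard LF and HF bands. The RR series here is synthetic.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(8)

# Synthetic RR intervals (s): mean 0.8 s with respiratory (HF) and slower (LF) modulation.
n_beats = 600
beat_times = np.cumsum(np.full(n_beats, 0.8))
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.25 * beat_times) \
         + 0.02 * np.sin(2 * np.pi * 0.09 * beat_times) \
         + 0.01 * rng.standard_normal(n_beats)

r_times = np.cumsum(rr)                                   # R-peak times (s)
fs = 4.0
grid = np.arange(r_times[0], r_times[-1], 1.0 / fs)
rr_even = np.interp(grid, r_times, rr)                    # evenly resampled tachogram

freq, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
df = freq[1] - freq[0]
lf = psd[(freq >= 0.04) & (freq < 0.15)].sum() * df       # low-frequency power
hf = psd[(freq >= 0.15) & (freq < 0.40)].sum() * df       # high-frequency power
print("LF/HF ratio:", round(lf / hf, 2))
```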

  8. Video Observations, Atmospheric Path, Orbit and Fragmentation Record of the Fall of the Peekskill Meteorite

    Ceplecha, Z.; Brown, P.; Hawkes, R. L.; Wetherill, G.; Beech, M.; Mossman, K.


    Large Near-Earth-Asteroids have played a role in modifying the character of the surface geology of the Earth over long time scales through impacts. Recent modeling of the disruption of large meteoroids during atmospheric flight has emphasized the dramatic effects that smaller objects may also have on the Earth's surface. However, comparison of these models with observations has not been possible until now. Peekskill is only the fourth meteorite to have been recovered for which detailed and precise data exist on the meteoroid atmospheric trajectory and orbit. Consequently, there are few constraints on the position of meteorites in the solar system before impact on Earth. In this paper, the preliminary analysis based on 4 of the 15 video recordings of the fireball of October 9, 1992, which resulted in the fall of a 12.4 kg ordinary chondrite (H6 monomict breccia) in Peekskill, New York, will be given. Preliminary computations revealed that the Peekskill fireball was an Earth-grazing event, the third such case with precise data available. The body with an initial mass of the order of 10^4 kg was in a pre-collision orbit with a = 1.5 AU, an aphelion of slightly over 2 AU and an inclination of 5°. The no-atmosphere geocentric trajectory would have led to a perigee of 22 km above the Earth's surface, but the body never reached this point due to tremendous fragmentation and other forms of ablation. The dark flight of the recovered meteorite started from a height of 30 km, when the velocity dropped below 3 km/s, and the body continued 50 km more without ablation, until it hit a parked car in Peekskill, New York with a velocity of about 80 m/s. Our observations are the first video records of a bright fireball and the first motion pictures of a fireball with an associated meteorite fall.

  9. Systematic social observation of children’s neighborhoods using Google Street View: a reliable and cost-effective method

    Odgers, Candice L.; Caspi, Avshalom; Bates, Christopher J.; Sampson, Robert J.; Moffitt, Terrie E.


    Background Children growing up in poor versus affluent neighborhoods are more likely to spend time in prison, develop health problems and die at an early age. The question of how neighborhood conditions influence our behavior and health has attracted the attention of public health officials and scholars for generations. Online tools are now providing new opportunities to measure neighborhood features and may provide a cost effective way to advance our understanding of neighborhood effects on child health. Method A virtual systematic social observation (SSO) study was conducted to test whether Google Street View could be used to reliably capture the neighborhood conditions of families participating in the Environmental-Risk (E-Risk) Longitudinal Twin Study. Multiple raters coded a subsample of 120 neighborhoods and convergent and discriminant validity was evaluated on the full sample of over 1,000 neighborhoods by linking virtual SSO measures to: (a) consumer based geo-demographic classifications of deprivation and health, (b) local resident surveys of disorder and safety, and (c) parent and teacher assessments of children’s antisocial behavior, prosocial behavior, and body mass index. Results High levels of observed agreement were documented for signs of physical disorder, physical decay, dangerousness and street safety. Inter-rater agreement estimates fell within the moderate to substantial range for all of the scales (ICCs ranged from .48 to .91). Negative neighborhood features, including SSO-rated disorder and decay and dangerousness corresponded with local resident reports, demonstrated a graded relationship with census-defined indices of socioeconomic status, and predicted higher levels of antisocial behavior among local children. In addition, positive neighborhood features, including SSO-rated street safety and the percentage of green space, were associated with higher prosocial behavior and healthy weight status among children. Conclusions Our results

  10. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik


    Video Photoplethysmography (VPPG) is a numerical technique for processing standard RGB video data of exposed human skin and extracting the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations are dependent on controlling for duration of the video-data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions-of-interest, and the number of video frames used for data processing.
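
    A minimal sketch of the basic VPPG pipeline described above is given below: spatially average the green channel over a skin region in every frame, band-pass the trace around plausible heart rates, and take the dominant spectral peak as the HR estimate. The synthetic frames, region size and filter band are assumptions; the specific algorithms and KLT face tracking used in the study are not reproduced.

```python
# Basic VPPG pipeline on synthetic frames: green-channel averaging, band-pass
# filtering in the plausible heart-rate range, and a spectral-peak HR estimate.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(9)
fps, seconds, hr_true = 30.0, 30, 72.0                 # 30 fps "video", true HR 72 bpm
t = np.arange(int(fps * seconds)) / fps

# Synthetic RGB frames (frames x height x width x 3) with a weak pulse in the green channel.
frames = rng.uniform(90, 110, size=(t.size, 8, 8, 3))
frames[..., 1] += 1.5 * np.sin(2 * np.pi * hr_true / 60.0 * t)[:, None, None]

green = frames[..., 1].mean(axis=(1, 2))               # spatially averaged green trace
b, a = butter(3, [0.7, 3.0], btype="bandpass", fs=fps) # 42-180 bpm pass band
pulse = filtfilt(b, a, green - green.mean())

spectrum = np.abs(np.fft.rfft(pulse)) ** 2
freq = np.fft.rfftfreq(pulse.size, d=1.0 / fps)
hr_est = 60.0 * freq[np.argmax(spectrum)]
print(f"estimated HR: {hr_est:.1f} bpm (true {hr_true} bpm)")
```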

  11. The dynamic relationship between emotional and physical states: an observational study of personal health records

    Lee YS


    Ye-Seul Lee,1 Won-Mo Jung,1 Hyunchul Jang,2 Sanghyun Kim,2 Sun-Yong Chung,3 Younbyoung Chae1 1Acupuncture and Meridian Science Research Center, College of Korean Medicine, Kyung Hee University, Seoul, 2Mibyeong Research Center, Korean Institute of Oriental Medicine, Daejeon, 3Department of Neuropsychiatry, College of Korean Medicine, Kyung Hee University, Seoul, Republic of Korea Objectives: Recently, there has been increasing interest in preventing and managing diseases both inside and outside medical institutions, and these concerns have supported the development of the individual Personal Health Record (PHR). Thus, the current study created a mobile platform called “Mind Mirror” to evaluate psychological and physical conditions and investigated whether PHRs would be a useful tool for assessment of the dynamic relationship between the emotional and physical conditions of an individual. Methods: Mind Mirror was used to collect 30 days of observational data about emotional valence and the physical states of pain and fatigue from 20 healthy participants, and these data were used to analyze the dynamic relationship between emotional and physical conditions. Additionally, based on the cross-correlations between these three parameters, a multilevel multivariate regression model (mixed linear model [MLM]) was implemented. Results: The strongest cross-correlation between emotional and physical conditions was at lag 0, which implies that emotion and body condition changed concurrently. In the MLM, emotional valence was negatively associated with fatigue (β = -0.233, P<0.001), fatigue was positively associated with pain (β = 0.250, P<0.001), and pain was positively associated with fatigue (β = 0.398, P<0.001). Conclusion: Our study showed that emotional valence and one’s physical condition negatively influenced one another, while fatigue and pain positively affected each other. These findings suggest that the mind and body interact instantaneously, in
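
    The lagged cross-correlation step described above can be illustrated with a small sketch; the daily series below are synthetic stand-ins for the valence, fatigue and pain records, not the study's data.

      # Sketch only: Pearson correlation of two daily series at a range of lags;
      # the "valence" and "fatigue" series are invented, not the study's data.
      import numpy as np

      def lagged_correlations(x, y, max_lag=5):
          """Correlation of x[t] with y[t + lag] for lags in [-max_lag, max_lag]."""
          out = {}
          for lag in range(-max_lag, max_lag + 1):
              if lag < 0:
                  a, b = x[-lag:], y[:lag]
              elif lag > 0:
                  a, b = x[:-lag], y[lag:]
              else:
                  a, b = x, y
              out[lag] = np.corrcoef(a, b)[0, 1]
          return out

      rng = np.random.default_rng(2)
      valence = rng.normal(size=30)                          # 30 daily ratings
      fatigue = -0.6 * valence + 0.5 * rng.normal(size=30)   # concurrent negative coupling
      corrs = lagged_correlations(valence, fatigue)
      best = max(corrs, key=lambda k: abs(corrs[k]))
      print(best, round(corrs[best], 2))                     # expect lag 0 and a negative value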

  12. High-frequency observations and source parameters of microearthquakes recorded at hard-rock sites

    Cranswick, Edward; Wetmiller, Robert; Boatwright, John


    We have estimated the source parameters of 53 microearthquakes recorded in July 1983 which were aftershocks of the Miramichi, New Brunswick, earthquake that occurred on 9 January 1982. These events were recorded by local three-component digital seismographs at 400 sps/component from 2-Hz velocity transducers sited directly on glacially scoured crystalline basement outcrop. Hypocentral distances are typically less than 5 km, and the hypocenters and the seven digital seismograph stations established all lie essentially within the boundaries of a granitic pluton that encompasses the faults that ruptured during the main shock and major aftershocks. The P-wave velocity is typically 5 km/sec at the surface and at least 6 km/sec at depths greater than about 1 km. The events have S-wave corner frequencies in the band 10 to 40 Hz, and the calculated Brune model seismic moments range from 10^15 to 10^18 dyne-cm. The corresponding stress drops are generally less than 1.0 bar, but there is considerable evidence that the seismic-source signals have been modified by propagation and/or site-effects. The data indicate: (a) there is a velocity discontinuity at 0.5 km depth; (b) the top layer has strong scattering/attenuating properties; (c) some source-receiver paths differentiate the propagated signal; (d) there is a hard-rock-site P-wave “fmax” between 50 and 100 Hz; and (e) some hard-rock sites are characterized by P-wave resonance frequencies in the range 50 to 100 Hz. Comparison of this dataset with the January 1982 New Brunswick digital seismograms which were recorded at sites underlain by several meters of low-velocity surface sediments suggests that some of the hard-rock-site phenomena listed above can be explained in terms of a layer-over-a-half-space model. For microearthquakes, this result implies that spectrally determined source dimension scales with site dimension (thickness of the layer). More generally, it emphasizes that it is very difficult to accurately observe
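
    For context, the quoted moments and corner frequencies can be turned into a Brune-model stress drop with the standard relations; the sketch below uses assumed values (10^17 dyne-cm, 20 Hz, 3.5 km/s shear speed) chosen to fall inside the ranges given in the abstract, and is not taken from the paper.

      # Worked example with assumed values, not figures from the paper.
      import math

      def brune_stress_drop(moment_nm, corner_hz, beta_ms):
          """Stress drop (Pa) from seismic moment (N*m), S-wave corner frequency (Hz)
          and shear-wave speed (m/s): r = 2.34*beta/(2*pi*fc), dsigma = (7/16)*M0/r**3."""
          radius = 2.34 * beta_ms / (2.0 * math.pi * corner_hz)
          return (7.0 / 16.0) * moment_nm / radius ** 3

      m0 = 1e17 * 1e-7   # an assumed moment of 10^17 dyne-cm, converted to N*m
      dsigma = brune_stress_drop(m0, corner_hz=20.0, beta_ms=3500.0)
      print(f"{dsigma / 1e5:.2f} bar")   # about 0.16 bar, consistent with the sub-1-bar drops above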

  13. New Interview and Observation Measures of the Broader Autism Phenotype: Description of Strategy and Reliability Findings for the Interview Measures.

    Parr, Jeremy R; De Jonge, Maretha V; Wallace, Simon; Pickles, Andrew; Rutter, Michael L; Le Couteur, Ann S; van Engeland, Herman; Wittemeyer, Kerstin; McConachie, Helen; Roge, Bernadette; Mantoulan, Carine; Pedersen, Lennart; Isager, Torben; Poustka, Fritz; Bolte, Sven; Bolton, Patrick; Weisblatt, Emma; Green, Jonathan; Papanikolaou, Katerina; Baird, Gillian; Bailey, Anthony J


    Clinical genetic studies confirm the broader autism phenotype (BAP) in some relatives of individuals with autism, but there are few standardized assessment measures. We developed three BAP measures (informant interview, self-report interview, and impression of interviewee observational scale) and describe the development strategy and findings from the interviews. International Molecular Genetic Study of Autism Consortium data were collected from families containing at least two individuals with autism. Comparison of the informant and self-report interviews was restricted to samples in which the interviews were undertaken by different researchers from that site (251 UK informants, 119 from the Netherlands). Researchers produced vignettes that were rated blind by others. Retest reliability was assessed in 45 participants. Agreement between live scoring and vignette ratings was very high. Retest stability for the interviews was high. Factor analysis indicated a first factor comprising social-communication items and rigidity (but not other repetitive domain items), and a second factor comprised mainly of reading and spelling impairments. Whole scale Cronbach's alphas were high for both interviews. The correlation between interviews for factor 1 was moderate (adult items 0.50; childhood items 0.43); Kappa values for between-interview agreement on individual items were mainly low. The correlations between individual items and total score were moderate. The inclusion of several factor 2 items lowered the overall Cronbach's alpha for the total set. Both interview measures showed good reliability and substantial stability over time, but the findings were better for factor 1 than factor 2. We recommend factor 1 scores be used for characterising the BAP.
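
    A minimal sketch of the whole-scale Cronbach's alpha statistic mentioned above, computed with numpy on an invented items-by-respondents matrix (not the study's data or code):

      # Illustrative only: Cronbach's alpha for an invented items-by-respondents matrix.
      import numpy as np

      def cronbach_alpha(scores):
          """scores: (n_respondents, n_items) array of item scores."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1)       # variance of each item
          total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(200, 1))                    # a shared trait
      items = latent + 0.7 * rng.normal(size=(200, 10))     # 10 correlated items
      print(round(cronbach_alpha(items), 2))                # high alpha (about 0.95 here)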

  14. Investigating the links of internal and external reliability with the system conditionality in Gauss-Markov models with uncorrelated observations

    Prószyński, Witold


    The relationship between internal response-based reliability and conditionality is investigated for Gauss-Markov (GM) models with uncorrelated observations. The models with design matrices of full rank and of incomplete rank are taken into consideration. The formulas based on the Singular Value Decomposition (SVD) of the design matrix are derived which clearly indicate that the investigated concepts are independent of each other. The methods are presented of constructing, for a given design matrix, the matrices equivalent with respect to internal response-based reliability as well as the matrices equivalent with respect to conditionality. To analyze conditionality of GM models, in general being inconsistent systems, a substitute for the condition number commonly used in numerical linear algebra is developed, called a pseudo-condition number. Also on the basis of the SVD a formula for external reliability is proposed, being the 2-norm of a vector of parameter distortions induced by a minimal detectable error in a particular observation. For systems with equal nonzero singular values of the design matrix, the formula can be expressed in terms of the index of internal response-based reliability and the pseudo-condition number. With these measures appearing in explicit form, the formula shows, although only for the above specific systems, the character of the impact of internal response-based reliability and conditionality of the model upon its external reliability. Proofs for complementary properties concerning the pseudo-condition number and the 2-norm of parameter distortions in systems with minimal constraints are given in the Appendices. Numerical examples are provided to illustrate the theory.
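
    A hedged illustration of the SVD-based quantity referred to above: one way to define a pseudo-condition number for a possibly rank-deficient design matrix is the ratio of its largest to its smallest non-zero singular value. The matrix below is invented and the sketch is not the paper's implementation.

      # Illustrative sketch: pseudo-condition number as the ratio of the largest to
      # the smallest non-zero singular value of the design matrix (invented matrix).
      import numpy as np

      def pseudo_condition_number(design, tol=1e-12):
          s = np.linalg.svd(np.asarray(design, dtype=float), compute_uv=False)
          nonzero = s[s > tol * s[0]]      # discard numerically zero singular values
          return nonzero[0] / nonzero[-1]

      A = np.array([[1.0, 0.0],
                    [1.0, 1.0],
                    [1.0, 2.0],
                    [2.0, 4.0]])           # a small full-rank design matrix
      print(round(pseudo_condition_number(A), 2))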

  15. Assessment of 10 Year Record of Aerosol Optical Depth from OMI UV Observations

    Ahn, Changwoo; Torres, Omar; Jethva, Hiren


    The Ozone Monitoring Instrument (OMI) onboard the EOS-Aura satellite provides information on aerosol optical properties by making use of the large sensitivity to aerosol absorption in the near-ultraviolet (UV) spectral region. Another important advantage of using near-UV observations for aerosol characterization is the low surface albedo of all terrestrial surfaces in this spectral region, which reduces retrieval errors associated with land surface reflectance characterization. Despite the coarse 13 km × 24 km sensor footprint, the OMI near-UV aerosol algorithm (OMAERUV) retrieves aerosol optical depth (AOD) and single-scattering albedo under cloud-free conditions from radiance measurements at 354 and 388 nanometers. We present validation results of OMI AOD against space- and time-collocated AOD values measured by the Aerosol Robotic Network over multiple stations representing major aerosol episodes and regimes. OMAERUV's performance is also evaluated with respect to those of the Aqua-MODIS Deep Blue and Terra-MISR AOD algorithms over arid and semi-arid regions in Northern Africa. The outcome of the evaluation analysis indicates that, despite the "row anomaly" problem affecting the sensor since mid-2007, the long-term aerosol record shows remarkable sensor stability.

  16. Reliability and validity of the Visual Gait Assessment Scale for children with hemiplegic cerebral palsy when used by experienced and inexperienced observers.

    Brown, C R; Hillman, S J; Richardson, A M; Herman, J L; Robb, J E


    This study investigated the reliability and validity of the Visual Gait Assessment Scale when used by experienced and inexperienced observers. Four experienced and six inexperienced observers viewed videotaped footage of four children with hemiplegic cerebral palsy on two separate occasions. Validity of the Scale was assessed by comparison with three-dimensional gait analysis (3DGA). The experienced observers generally had higher inter-observer and intra-observer reliability than the inexperienced observers. Both groups showed higher agreement for assessments made at the ankle and foot than at the knee and hip. The experienced observers had slightly higher agreement with 3DGA than the inexperienced observers. The inexperienced observers showed a learning effect and had higher inter-observer agreement and higher agreement with 3DGA in the second assessment of the videotapes. This scale can be used by inexperienced observers but is limited to observations in the sagittal plane and by poor reliability at the knee and hip for experienced and inexperienced observers.

  17. RT-3 15 m diameter radiotelescope receiving and recording system for GPS white noise observations (some preliminary results).

    Pazderski, E.; Vorbrich, K. K.

    A short introduction explaining the idea of using the large VLBI radio telescope for GPS observations is provided. A description of the GPS - RT-3 receiving and recording systems is given. Some GPS - RT-3 observational and computational results are presented.

  18. Measuring Physical Activity in Preschoolers: Reliability and Validity of the System for Observing Fitness Instruction Time for Preschoolers (SOFIT-P)

    Sharma, Shreela V.; Chuang, Ru-Jye; Skala, Katherine; Atteberry, Heather


    The purpose of this study is to describe the initial feasibility, reliability, and validity of an instrument to measure physical activity in preschoolers using direct observation. The System for Observing Fitness Instruction Time for Preschoolers was developed and tested among 3- to 6-year-old children over fall 2008 for feasibility and reliability…

  19. Distribution of tetraether lipids in the 25-ka sedimentary record of Lake Challa: extracting reliable TEX86 and MBT/CBT palaeotemperatures from an equatorial African lake

    Sinninghe Damsté, Jaap S.; Ossebaar, Jort; Schouten, Stefan; Verschuren, Dirk


    The distribution of isoprenoid and branched glycerol dialkyl glycerol tetraether (GDGT) lipids was studied in the sedimentary record of Lake Challa, a permanently stratified, partly anoxic crater lake on the southeastern slope of Mt. Kilimanjaro (Kenya/Tanzania), to examine if the GDGTs could be used to reconstruct past variation in regional temperature. The study material comprised 230 samples from a continuous sediment sequence spanning the last 25 ka with excellent age control based on high-resolution AMS 14C dating. The distribution of GDGTs showed large variation through time. In some time intervals (i.e., from 20.4 to 15.9 ka BP and during the Younger Dryas, 12.9-11.7 ka BP) crenarchaeol was the most abundant GDGT, whereas at other times (i.e., during the Early Holocene) branched GDGTs and GDGT-0 were the major GDGT constituents. In some intervals of the sequence the relative abundance of GDGT-0 and GDGT-2 was too high to be derived exclusively from lacustrine Thaumarchaeota, suggesting a sizable contribution from methanogens and other archaea. This severely complicated application of TEX86 palaeothermometry in this lake, and limited reliable reconstruction of lake water temperature to the time interval 25-13 ka BP, i.e. the Last Glacial Maximum and the period of post-glacial warming. The TEX86-inferred timing of this warming is similar to that recorded previously in two of the large African rift lakes, while its magnitude is slightly or much higher than that recorded at these other sites, depending on which lake-based TEX86 calibration is used. Application of calibration models based on distributions of branched GDGTs developed for lakes inferred temperatures of 15-18 °C for the Last Glacial Maximum and 19-22 °C for the Holocene. However, the MBT/CBT palaeothermometer reconstructs temperatures as low as 12 °C for a Lateglacial period centred on 15 ka BP. Variation in down-core values of the BIT index are mainly determined by the varying production rate of
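
    For reference, the TEX86 index itself is a simple ratio of isoprenoid GDGT abundances (after Schouten et al. 2002); the sketch below only illustrates that definition with invented fractional abundances and does not reproduce the paper's lake calibrations.

      # Illustration of the standard TEX86 definition only; the GDGT abundances are
      # invented and no calibration from the paper is applied. cren_prime denotes
      # the crenarchaeol regio-isomer.
      def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
          return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

      print(round(tex86(gdgt1=0.30, gdgt2=0.12, gdgt3=0.05, cren_prime=0.03), 3))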

  20. Water and ice in asteroids: Connections between asteroid observations and the chondritic meteorite record

    Schmidt, B.; Dyl, K.


    , Schmidt & Castillo-Rogez 2012) if the chemical consequences can be reconciled (e.g., Young 2001, Young et al. 2003). Both models (Schmidt and Castillo-Rogez 2012) and experiments (e.g., Hiroi et al. 1996) suggest that water loss from asteroids is an important factor in interpreting the connections between the C-class asteroids and meteorites. The arrival of the Dawn spacecraft to Ceres will determine its much-debated internal structure and finally answer the following question: did large, icy planetesimals form and thermally evolve in the inner solar system? Even if Ceres is not icy, Dawn observations will shed light on its surface composition, and by extension on the surfaces of objects with similar surface properties. This presentation will focus on tying the observational evidence for water on evolving and contemporary asteroids with detailed studies of the carbonaceous chondrites in an effort to synthesize physical and chemical realities with the observational record, bridging the gap between the asteroid and meteorite communities.

  1. Validity and Reliability of Direct Observation of Procedural Skills in Evaluating the Clinical Skills of Nursing Students of Zahedan Nursing and Midwifery School

    Mohamad Sahebalzamani


    Background: To evaluate the validity and reliability of assessing the performance of nursing students using the Direct Observation of Procedural Skills (DOPS). Materials and Method: This research was conducted on 55 nursing internship students across 8 procedures. A DOPS consisted of an assessor observing a student performing a skill, completing a checklist with the student, and providing verbal feedback. The procedures were selected from among the core skills of nursing according to the views of faculty members. Content validity, criterion validity (correlation of the DOPS score with the average scores of nursing clinical and theoretical courses, separately, and the relation of each item with the DOPS total), construct validity (inspection of the internal structure), and reliability (internal consistency and inter-rater reliability) were examined. Results: Correlations of DOPS scores with the theoretical and clinical average scores were 0.117 (p=0.429) and 0.376 (p=0.008), respectively. There was a significant relation between each skill and the DOPS total score (p=0.001), indicating a sound internal structure of the assessment. The reliability of the assessment, measured by Cronbach's alpha coefficient, was 94%. Minimum and maximum correlation coefficients for inter-rater reliability were 42% and 84%, respectively, and were significant in all cases (p=0.001). Conclusion: In conclusion, our results showed that DOPS has the validity and reliability for objective evaluation of procedural skills in nursing

  2. Maturity Matrices for Quality of Model- and Observation-Based Climate Data Records

    Höck, Heinke; Kaiser-Weiss, Andrea; Kaspar, Frank; Stockhause, Martina; Toussaint, Frank; Lautenschlager, Michael


    In the field of Software Engineering the Capability Maturity Model is used to evaluate and improve software development processes. The application of a Maturity Matrix is a method to assess the degree of software maturity. This method was adapted to the maturity of Earth System data in scientific archives. The application of such an approach to Climate Data Records was first proposed in the context of satellite-based climate products and applied by NOAA and NASA. The European FP7 project CORE-CLIMAX suggested and tested extensions of the approach in order to allow the applicability to additional climate datasets, e.g. based on in-situ observations as well as model-based reanalysis. Within that project the concept was applied to products of satellite- and in-situ based datasets. Examples are national ground-based data from Germany as an example for typical products of a national meteorological service, the EUMETSAT Satellite Application Facility Network, the ESA Climate Change Initiative, European Reanalysis activities (ERA-CLIM) and international in situ-based climatologies such as GPCC, ECA&D, BSRN, HadSST. Climate models and their related output have some additional characteristics that need specific consideration in such an approach. Here we use examples from the World Data Centre for Climate (WDCC) to discuss the applicability. The WDCC focuses on climate data products, specifically those resulting from climate simulations. Based on these already existing Maturity Matrix models, WDCC developed a generic Quality Assessment System for Earth System data. A self-assessment is performed using a maturity matrix evaluating the data quality for five maturity levels with respect to the criteria data and metadata consistency, completeness, accessibility and accuracy. The classical goals of a quality assessment system in a data processing workflow are: (1) to encourage data creators to improve quality to reach the next quality level, (2) enable data consumers to decide
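
    Purely as an illustration of the kind of self-assessment described above, a maturity matrix can be represented as per-criterion scores on the five maturity levels; the criteria names follow the abstract, the scores are invented.

      # Illustration only: per-criterion maturity scores on a 1-5 scale (criteria from
      # the abstract, scores invented), with the minimum and mean as simple summaries.
      maturity = {
          "data and metadata consistency": 4,
          "completeness": 3,
          "accessibility": 5,
          "accuracy": 3,
      }
      assert all(1 <= level <= 5 for level in maturity.values())
      print(min(maturity.values()), round(sum(maturity.values()) / len(maturity), 2))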

  3. [Formalizing observation: The emergence of the modern patient record exemplified by Berlin and Paris medicine, 1725-1830].

    Hess, Volker


    The paper focuses on the material basis of the development of modern clinical documentation. With the examples of Berlin and Paris medicine, it analyzes the various ways of recording clinical data in the 18th century, from where they came, and how they were introduced into bedside observations. Particular interest is given to the interrelation between administrative techniques (registration, book-keeping etc.) and the practices of medical recording developed within the hospitals. Comparing Berlin and Paris makes it possible to work out the differences in writing cultures and to consider the local interdependencies. With this approach it can be demonstrated that the "patient record" was already established as a patient related recording system in the form of loose files in the early 19th century.

  4. High-Precise Gravity Observations at Archaeological Sites: How We Can Improve the Interpretation Effectiveness and Reliability?

    Eppelbaum, Lev


    Microgravity investigations are comparatively rarely used in searching for hidden ancient targets (e.g., Eppelbaum, 2013). This is due mainly to the small geometric size of the desired archaeological objects and the various types of noise complicating the observed useful signal. At the same time, the modern generation of field gravimetric equipment makes it possible to register microGal (10^-8 m/s^2) anomalies, which offers a new challenge in this direction. Correspondingly, the accuracy of gravity variometers (gradiometers) has also sharply increased. How can we improve the interpretation effectiveness and reliability? Undoubtedly, it must be a multi-stage process. I believe that we must begin with nonconventional methodologies for reducing the topographic effect and computing the terrain correction. Reducing the topographic effect: The possibilities of reducing topographic effects by grouping the points of additional gravimetric observations around the central point located on the survey network were demonstrated by Khesin et al. (1996). A group of 4 to 8 additional points is located above and below along the relief, approximately symmetrically and equidistant from the central point. The topographic effect is reduced by taking the difference between the gravity field at the center of the group and its mean value over the whole group. Application of this methodology at the Gyzyl-Bulakh gold-pyrite deposit (Lesser Caucasus, western Azerbaijan) demonstrated its effectiveness. Computation of the terrain correction: Some geophysicists compare new ideas in the field of terrain correction (TC) in gravimetry to the invention of a 'perpetuum mobile'. However, for very detailed gravity observations, the problem of optimally computing the influence of the surrounding relief is of great importance. Let us consider two approaches applied earlier in ore geophysics. First approach: the first method was applied at the Gyzyl-Bulakh gold-pyrite deposit, situated in the Mekhmana ore region of

  5. The effect of subionospheric propagation on whistlers recorded by the DEMETER satellite – observation and modelling

    F. Lefeuvre


    During a routine analysis of whistlers in the wide-band VLF recordings of the DEMETER satellite, a specific signal structure of numerous fractional-hop whistlers, termed the "Spiky Whistler" (SpW), was identified. These signals appear to be composed of a conventional whistler combined with the compound mode patterns of guided wave propagation, suggesting a whistler excited by a lightning "tweek" spheric. Rigorous, full-wave modelling of tweeks, formed by long subionospheric guided spheric propagation, and of the impulse propagation across an arbitrarily inhomogeneous ionosphere gave an accurate description of the SpW signals. The electromagnetic impulses, excited by vertical, preferably CG, lightning discharges, exhibited the effects of guided behaviour and of the dispersive ionospheric plasma along their paths. This modelling and interpretation provides a consistent way to determine the generation and propagation characteristics of the recorded SpW signals, as well as to describe the traversed medium.

  6. Assessment of neuropathic pain in patients with cancer: the interobserver reliability. An observational study in daily practice

    Timmerman, H.; Heemstra, I.; Schalkwijk, A.; Verhagen, C.; Vissers, K.; Engels, Y.


    BACKGROUND: Neuropathic pain (NeP) is a burdensome problem in all stages of cancer. Although clinical judgment is accepted as a surrogate for an objective gold standard in diagnosing NeP, no publications were found about its reliability. OBJECTIVES: Therefore, levels of agreement on the clinical exa

  7. Intra-observer and interobserver reliability of the 'Pico' computed tomography method for quantification of glenoid bone defect in anterior shoulder instability

    Magarelli, Nicola; Sergio, Pietro; Bonomo, Lorenzo [Catholic University, Department of Radiology, Rome (Italy); Milano, Giuseppe; Santagada, Domenico A.; Fabbriciani, Carlo [Catholic University, Department of Orthopaedics, Rome (Italy)


    To evaluate the intra-observer and interobserver reliability of the 'Pico' computed tomography (CT) method of quantifying glenoid bone defects in anterior glenohumeral instability. Forty patients with unilateral anterior shoulder instability underwent CT scanning of both shoulders. Images were processed in multiplanar reconstruction (MPR) to provide an en face view of the glenoid. In accordance with the Pico method, a circle was drawn on the inferior part of the healthy glenoid and transferred to the injured glenoid. The surface of the missing part of the circle was measured, and the size of the glenoid bone defect was expressed as a percentage of the entire circle. Each measurement was performed three times by one observer and once by a second observer. Intra-observer and interobserver reliability were analyzed using intraclass correlation coefficients (ICCs), 95% confidence intervals (CIs), and standard errors of measurement (SEMs). Analysis of intra-observer reliability showed ICC values of 0.94 (95% CI = 0.89-0.96; SEM = 1.1%) for single measurement, and 0.98 (95% CI = 0.96-0.99; SEM = 1.0%) for average measurement. Analysis of interobserver reliability showed ICC values of 0.90 (95% CI = 0.82-0.95; SEM = 1.0%) for single measurement, and 0.95 (95% CI = 0.90-0.97; SEM = 1.0%) for average measurement. Measurement of glenoid bone defect in anterior shoulder instability can be assessed with the Pico method, based on en face images of the glenoid processed in MPR, with a very good intra-observer and interobserver reliability. (orig.)

  8. New distribution records of Mesoclemmys vanderhaegei (Testudines: Chelidae from southeastern Brazil, including observations on reproduction

    Fábio Maffei


    Mesoclemmys vanderhaegei is a poorly known freshwater turtle widely distributed in central South America, where it occurs in Argentina, Paraguay, Brazil, and probably Bolivia. It is considered “Near Threatened” by the IUCN Red List and “Data Deficient” by other local lists. Herein, we present new records and data on the reproductive biology of Mesoclemmys vanderhaegei in southeastern Brazil.

  9. Intra- and inter-observer reliability of determining radiographic sagittal parameters of the spine and pelvis using a manual and a computer-assisted methods.

    Dimar, John R; Carreon, Leah Y; Labelle, Hubert; Djurasovic, Mladen; Weidenbaum, Mark; Brown, Courtney; Roussouly, Pierre


    Sagittal imbalance is a significant factor in determining clinical treatment outcomes in patients with deformity. Measurement of sagittal alignment using the traditional Cobb technique is frequently hampered by difficulty in visualizing landmarks. This report compares traditional manual measurement techniques to a computer-assisted sagittal plane measurement program which uses a radius arc methodology. The intra- and inter-observer reliability of the computer program has been shown to be 0.92-0.99. Twenty-nine lateral 90 cm radiographs were measured by a computer program for an array of sagittal plane measurements. Ten experienced orthopedic spine surgeons manually measured the same parameters twice, at least 48 h apart, using a digital caliper and a standardized radiographic manual. Intraclass correlations were used to determine intra- and interobserver reliability between different manual measures and between manual and computer-assisted measures. The inter-observer reliability between manual measures was poor, ranging from -0.02 to 0.64 for the different sagittal measures. The intra-observer reliability in manual measures was better, ranging from 0.40 to 0.93. Comparing manual to computer-assisted measures, the ICC ranged from 0.07 to 0.75. Surgeons agreed more often with each other than with the machine when measuring the lumbar curve, the thoracic curve, and the spino-sacral angle. The reliability of the computer program is significantly higher for all measures except for lumbar lordosis. A computer-assisted program produces a reliable measurement of the sagittal profile of the spine by eliminating the need for distinctly visible endplates. The use of a radial arc methodology allows for infinite data points to be used along the spine to determine sagittal measurements. The integration of this technique with digital radiography's ability to adjust image contrast and brightness will enable the superior identification of key anatomical parameters normally

  10. Cardiac valve calcifications on low-dose unenhanced ungated chest computed tomography: inter-observer and inter-examination reliability, agreement and variability

    Hamersvelt, Robbert W. van; Willemink, Martin J.; Takx, Richard A.P.; Eikendal, Anouk L.M.; Budde, Ricardo P.J.; Leiner, Tim; Jong, Pim A. de [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Mol, Christian P.; Isgum, Ivana [University Medical Center Utrecht, Image Sciences Institute, Utrecht (Netherlands)


    To determine inter-observer and inter-examination variability for aortic valve calcification (AVC) and mitral valve and annulus calcification (MC) in low-dose unenhanced ungated lung cancer screening chest computed tomography (CT). We included 578 lung cancer screening trial participants who were examined by CT twice within 3 months to follow indeterminate pulmonary nodules. On these CTs, AVC and MC were measured in cubic millimetres. One hundred CTs were examined by five observers to determine the inter-observer variability. Reliability was assessed by kappa statistics (κ) and intra-class correlation coefficients (ICCs). Variability was expressed as the mean difference ± standard deviation (SD). Inter-examination reliability was excellent for AVC (κ = 0.94, ICC = 0.96) and MC (κ = 0.95, ICC = 0.90). Inter-examination variability was 12.7 ± 118.2 mm³ for AVC and 31.5 ± 219.2 mm³ for MC. Inter-observer reliability ranged from κ = 0.68 to κ = 0.92 for AVC and from κ = 0.20 to κ = 0.66 for MC. Inter-observer ICC was 0.94 for AVC and ranged from 0.56 to 0.97 for MC. Inter-observer variability ranged from -30.5 ± 252.0 mm³ to 84.0 ± 240.5 mm³ for AVC and from -95.2 ± 210.0 mm³ to 303.7 ± 501.6 mm³ for MC. AVC can be quantified with excellent reliability on ungated unenhanced low-dose chest CT, but manual detection of MC can be subject to substantial inter-observer variability. Lung cancer screening CT may be used for detection and quantification of cardiac valve calcifications. (orig.)
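
    The agreement statistics quoted above (kappa for categorical calls, ICC for continuous volumes) can be sketched as follows on invented two-observer data; this is a generic Cohen's kappa and Shrout-Fleiss ICC(2,1), not the study's analysis code.

      # Generic agreement statistics on invented two-observer calcium volumes:
      # Cohen's kappa for binary present/absent calls and a Shrout-Fleiss ICC(2,1)
      # (two-way random effects, single measure) for the continuous scores.
      import numpy as np

      def cohen_kappa(a, b):
          a, b = np.asarray(a), np.asarray(b)
          po = np.mean(a == b)
          labels = np.union1d(a, b)
          pe = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
          return (po - pe) / (1.0 - pe)

      def icc_2_1(y):
          """y: (n_subjects, k_raters) matrix of continuous scores."""
          y = np.asarray(y, dtype=float)
          n, k = y.shape
          m = y.mean()
          msr = k * np.sum((y.mean(axis=1) - m) ** 2) / (n - 1)   # between-subject MS
          msc = n * np.sum((y.mean(axis=0) - m) ** 2) / (k - 1)   # between-rater MS
          resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + m
          mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual MS
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      rng = np.random.default_rng(3)
      true_vol = rng.gamma(2.0, 50.0, size=40)                    # invented "true" volumes
      obs = np.column_stack([true_vol + rng.normal(0, 15, 40),    # observer 1
                             true_vol + rng.normal(0, 15, 40)])   # observer 2
      print(round(icc_2_1(obs), 2))                               # high ICC for this low-noise example
      print(round(cohen_kappa(obs[:, 0] > 50, obs[:, 1] > 50), 2))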

  11. Simultaneous auroral observations described in the historical records of China, Japan and Korea from ancient times to AD 1700

    D. M. Willis

    Early auroral observations recorded in various oriental histories are examined in order to search for examples of strictly simultaneous and indisputably independent observations of the aurora borealis from spatially separated sites in East Asia. In the period up to AD 1700, only five examples have been found of two or more oriental auroral observations from separate sites on the same night. These occurred during the nights of AD 1101 January 31, AD 1138 October 6, AD 1363 July 30, AD 1582 March 8 and AD 1653 March 2. The independent historical evidence describing observations of mid-latitude auroral displays at more than one site in East Asia on the same night provides virtually incontrovertible proof that auroral displays actually occurred on these five special occasions. This conclusion is corroborated by the good level of agreement between the detailed auroral descriptions recorded in the different oriental histories, which furnish essentially compatible information on both the colour (or colours) of each auroral display and its approximate position in the sky. In addition, the occurrence of auroral displays in Europe within two days of auroral displays in East Asia, on two (possibly three) out of these five special occasions, suggests that a substantial number of the mid-latitude auroral displays recorded in the oriental histories are associated with intense geomagnetic storms.

    Key words. Magnetospheric physics (auroral phenomena; storms and substorms)

  12. Sea Ice Back to 1850: A Longer Observational Record for Assimilation By Models and Use In Reanalyses

    Fetterer, Florence; Walsh, John; Chapman, William; Stewart, J. Scott


    "Gridded Monthly Sea Ice Extent and Concentration, 1850 Onward" is the title of a new data set available from the U.S. National Snow and Ice Data Center. Observations from 13 historical sources such as whaling ship logs, compilations by naval oceanographers, and analyses by national ice services cover 1850 through 1978, while 1979-2013 ice concentration fields are derived from satellite passive microwave data. The sea ice concentration and source variables are provided in a NetCDF-4 file. The observation-based data product meets a need for longer records to use in reanalysis and climate diagnostic applications. It extends the record of an earlier version of this pan-Arctic data set that is heavily used by modelers, and improves upon it by incorporating newly available historical sources, using a more accurate data set for the satellite era, and by filling temporal gaps using an analog method. The resulting sea ice concentration fields have realistic values and variability throughout the record; in earlier versions, unvarying climatological values often fill gaps. The historical data vary greatly in their observational methods and came to us either as original data (e.g. a transcription of shipboard ice observations) or as observations to which some synthesis or analysis had already been applied (e.g. the Danish Meteorological Institute's yearbooks of charts). Each required different treatment before it could be used in our product, ranging from simple regridding to digitization and interpretation. The current version spans 1850-2013. With it, we can more confidently address questions like "Is the diminished ice cover of the past few years unique to the period since 1850?" And "Is the rapidity of the retreat of ice in the years since 2000 unique in the longer historical record?" We hope to continue improving the product with refinements to the gap filling method, additional historical sources, and assessment of the consistency of pre- and post-satellite period data, and

  13. Observing, recording, and reviewing: Using mobile phones in support of science inquiry

    Khoo, Elaine; Williams, John; Otrel-Cass, Kathrin


    Teaching science can be challenging, particularly if it involves the incorporation of inquiry approaches. Collaboration and co-construction of ideas and understandings require changing teaching and learning practices to allow students to learn how to collaborate ‘inquiry style’ … There is increasing evidence that the use of mobile learning devices can support inquiry learning by increasing the opportunities for student participation and collaboration in the learning process. This paper reports on the preliminary findings from a New Zealand Teaching and Learning Initiative funded project … questions and investigations, and increased student ownership of their learning. Sharing the mobile phone recordings of their learning with their peers and community further enriched students’ developing science understandings beyond the classroom…

  14. Observations of asexual reproductive strategies in Antarctic hexactinellid sponges from ROV video records

    Teixidó, Núria; Gili, Josep-Maria; Uriz, María-J.; Gutt, Julian; Arntz, Wolf E.


    Hexactinellid sponges are one of the structuring taxa of benthic communities on the Weddell Sea shelf (Antarctica). However, little is known about their reproduction patterns (larval development, release, settlement, and recruitment), particularly in relation to sexual and asexual processes in sponge populations. Video stations obtained during several expeditions, covering a wide depth range and different areas, recorded a high frequency of asexual reproductive strategies (ARS) (bipartition and budding) among hexactinellids. Analysis of seabed video strips between 108 and 256 m depth, representing an area of 1400 m², showed that about 28% of these sponges exhibited ARS. The Rossella nuda type dominated most of the video stations and exhibited the highest proportion of budding (35%), and this proportion increased with size class. The >20 cm size class exhibited, across all stations, mean values of 8.3±0.7 (SE) primary and 2.5±0.2 (SE) secondary propagules per sponge. Results from a shallow station (Stn 059, 117 m depth) showed the highest relative abundance of the R. nuda type and of budding (>20 cm ~72%, 10-20 cm ~60%, 5-10 cm ~12%). Asexual reproduction in hexactinellid sponges may be more frequent than previously thought and may greatly influence the genetic structure of populations.

  15. Combined Approach to the Analysis of Rainfall Super-Extremes in Locations with Limited Observational Records.

    Lakshmi, V.; Libertino, A.; Sharma, A.; Claps, P.


    obtained with the classic techniques of frequency analysis and spatial interpolation, demonstrate an increased knowledge coming from satellite, climate and local factors, ensuring more reliable and accurate spatial assessment of extreme thunderstorm probability.


    W. Wagner


    Soil moisture was recently included in the list of Essential Climate Variables (ECVs) that are deemed essential for IPCC (Intergovernmental Panel on Climate Change) and UNFCCC (United Nations Framework Convention on Climate Change) needs and considered feasible for global observation. ECV data records should be as long, complete and consistent as possible, and in the case of soil moisture this means that the data record shall be based on multiple data sources, including but not limited to active (scatterometer) and passive (radiometer) microwave observations, acquired preferably in the low-frequency microwave range. Among the sensors that can be used for this task are the C-band scatterometers on board the ERS and METOP satellites and the multi-frequency radiometers SMMR, SSM/I, TMI, AMSR-E, and WindSat. Together, these sensors already cover a time period of more than 30 years, and the question is how observations acquired by these sensors can be merged to create one consistent data record. This paper discusses, at a high level, possible approaches for fusing the individual satellite data. It is argued that the best possible approach for the fusion of the different satellite data sets is to merge Level 2 soil moisture data derived from the individual satellite data records. This approach has already been demonstrated within the WACMOS project funded by the European Space Agency (ESA) and will be further improved within the Climate Change Initiative (CCI) programme of ESA.

  17. A summary of observational records on periodicities above the rotational period in the Jovian magnetosphere

    E. A. Kronberg


    The Jovian magnetosphere is a very dynamic system. The plasma mass-loading from the moon Io and the fast planetary rotation lead to regular release of mass from the Jovian magnetosphere and to a change of the magnetic topology. These regular variations, most commonly on a several-day (2.5–4 day) scale, were derived from various data sets obtained by different spacecraft missions and instruments, ranging from auroral images to in situ measurements of magnetospheric particles. Specifically, ion measurements from the Galileo spacecraft show the periodicities very distinctly, namely the periodic thinning of the plasma sheet and subsequent dipolarization, and the explosive mass release occurring mainly during the transition between these two phases. We present a review of these periodicities, particularly concentrating on those observed in energetic particle data. The most distinct periodicities are observed for ions of sulfur and oxygen. The periodic topological change of the Jovian magnetosphere, the associated mass-release process and the auroral signatures can be interpreted as a global magnetospheric instability with analogies to the two-step concept of terrestrial substorms. Different views on the triggering mechanism of this magnetospheric instability are discussed.

  18. The importance of ants in cave ecology, with new records and behavioral observations of ants in Arizona caves

    Robert B. Pape


    The importance of ants as elements in cave ecology has been mostly unrecognized. A global list of ant species recorded from caves, compiled from a review of the existing literature, is presented. This paper also reviews what is currently known about ants occurring in Arizona (USA) caves. The diversity and distribution represented in these records suggest that ants are relatively common cave visitors (trogloxenes). A general utilization of caves by ants within both temperate and tropical latitudes may be inferred from this combined evidence. Observations of ant behavior in Arizona caves demonstrate a low-level and sporadic, but persistent, use of these habitats and their contained resources by individual ant colonies. Documentation of Neivamyrmex sp. preying on cave-inhabiting arthropods is reported here for the first time. Observations of hypogeic army ants in caves suggest they may not penetrate to great vertical depth in search of prey, but can be persistent occupants of relatively shallow, horizontal sections of caves, where they may prey on endemic cave animals. First cave records for ten ant species are reported from Arizona caves. These include two species of Neivamyrmex (N. nigrescens Cresson and Neivamyrmex sp.; Formicidae: Dorylinae), four myrmicines (Pheidole portalensis Wilson, Pheidole cf. porcula Wheeler, Solenopsis aurea Wheeler and Stenamma sp. Westwood), one dolichoderine (Forelius keiferi Wheeler), and three formicines (Lasius arizonicus Wheeler, L. sitiens Wilson, and Camponotus sp. Mayr).

  19. Validation of a 30+ year soil moisture record from multi-satellite observations

    de Jeu, R.; Dorigo, W.; Wagner, W.; Chung, D.; Parinussa, R.; van der Werf, G.; Liu, Y.; Mittelbach, H.; Hirschi, M.


    As part of the ESA Climate Change Initiative soil moisture project a 30+ year consistent soil moisture dataset is currently in development by harmonizing retrievals from both passive and active microwave satellite observations. The harmonization of these datasets incorporates the advantage of both microwave techniques and spans the entire period from 1978 onwards. A statistical methodology based on scaling, ranking and blending was developed to address differences in sensor specifications to create one consistent dataset. A soil moisture dataset provided by a land surface model (GLDAS-1-Noah) was used to scale the different satellite-based products to the same range. The blending of the active and passive datasets was based on their respective performance, which is closely related to vegetation cover. While this approach imposes the absolute values of the land surface model dataset to the final product, it preserves the relative dynamics (e.g., seasonality, inter-annual variations) and trends of the original satellite derived retrievals. Different validation methods were performed to quantify the skill of the various soil moisture datasets at different temporal and spatial scales. In situ data from the International Soil Moisture Network (ISMN) were used to calculate the local correlation (both Pearson and Spearman) and Root Mean Square Difference between ground observations and the satellite retrievals for different climate regimes. In addition a triple collocation analysis was applied on the passive and active satellite products in order to analyze the error structures at a global scale for the different sensors. Furthermore, indirect proxies like tree ring width data were used to study the consistency of the inter-annual variability within the 30+ year dataset. The combination of these techniques revealed a strong dynamical behavior in data quality in both time and space. In the future this additional information on error dynamics could be used to further
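
    A hedged sketch of the "scaling" step mentioned above: one simple variant rescales a satellite series onto the mean and variance of the reference model series over a common period (CDF matching is another common option); the series below are synthetic.

      # Sketch of a simple scaling step: match the satellite series' mean and standard
      # deviation to those of a reference model series (both series are synthetic).
      import numpy as np

      def scale_to_reference(sat, ref):
          """Linearly rescale `sat` onto the mean/std of `ref` (both 1-D arrays)."""
          return (sat - sat.mean()) / sat.std() * ref.std() + ref.mean()

      rng = np.random.default_rng(4)
      model_sm = 0.25 + 0.05 * rng.standard_normal(365)    # reference series (m3/m3)
      sat_index = 40.0 + 12.0 * rng.standard_normal(365)   # satellite product, different units
      scaled = scale_to_reference(sat_index, model_sm)
      print(round(scaled.mean(), 3), round(scaled.std(), 3))  # now matches the model climatology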

  20. A decadal microwave record of tropical air temperature from AMSU-A/aqua observations

    Shi, Yuan; Li, King-Fai; Yung, Yuk L.; Aumann, Hartmut H.; Shi, Zuoqiang; Hou, Thomas Y.


    Atmospheric temperature is one of the most important climate variables. This observational study presents detailed descriptions of the temperature variability imprinted in the 9-year brightness temperature data acquired by the Advanced Microwave Sounding Unit-Instrument A (AMSU-A) aboard Aqua since September 2002 over tropical oceans. A non-linear, adaptive method called the Ensemble Joint Multiple Extraction has been employed to extract the principal modes of variability in the AMSU-A/Aqua data. The semi-annual, annual, quasi-biennial oscillation (QBO) modes and QBO-annual beat in the troposphere and the stratosphere have been successfully recovered. The modulation by the El Niño/Southern oscillation (ENSO) in the troposphere was found and correlates well with the Multivariate ENSO Index. The long-term variations during 2002-2011 reveal a cooling trend (-0.5 K/decade at 10 hPa) in the tropical stratosphere; the trend below the tropical tropopause is not statistically significant due to the length of our data. A new tropospheric near-annual mode (period ~1.6 years) was also revealed in the troposphere, whose existence was confirmed using National Centers for Environmental Prediction Reanalysis air temperature data. The near-annual mode in the troposphere is found to prevail in the eastern Pacific region and is coherent with a near-annual mode in the observed sea surface temperature over the Warm Pool region that has previously been reported. It remains a challenge for climate models to simulate the trends and principal modes of natural variability reported in this work.

  1. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    E. M. Thompson


    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., the magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte-Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
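
    A hedged sketch of the PPCC idea for the simpler uncensored Gumbel case (the paper's test additionally handles left censoring): compare a sample's PPCC against Monte-Carlo critical values generated under the Gumbel null hypothesis. The sample below is synthetic.

      # Uncensored Gumbel PPCC sketch with Monte-Carlo critical values (synthetic data).
      import numpy as np
      from scipy import stats

      def gumbel_ppcc(sample):
          """Correlation of the ordered sample with Gumbel quantiles at plotting positions."""
          x = np.sort(np.asarray(sample, dtype=float))
          n = len(x)
          pp = (np.arange(1, n + 1) - 0.44) / (n + 0.12)   # Gringorten plotting positions
          return np.corrcoef(x, stats.gumbel_r.ppf(pp))[0, 1]

      def ppcc_critical_value(n, alpha=0.05, n_sim=2000, seed=5):
          rng = np.random.default_rng(seed)
          sims = [gumbel_ppcc(rng.gumbel(size=n)) for _ in range(n_sim)]
          return np.quantile(sims, alpha)                  # reject Gumbel if PPCC < this value

      sample = np.random.default_rng(6).gumbel(6.0, 0.4, size=60)     # synthetic AM "magnitudes"
      print(gumbel_ppcc(sample) >= ppcc_critical_value(len(sample)))  # usually True: not rejected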

  2. Conventional Point-Velocity Records and Surface Velocity Observations for Estimating High Flow Discharge

    Giovanni Corato


    Flow velocity measurements using point-velocity meters are normally obtained by sampling one, two or three velocity points per vertical profile. During high floods their use is inhibited by the difficulty of sampling the lower portions of the flow area. Nevertheless, the application of standard methods allows estimation of a parameter, α, which depends on the energy slope and the Manning roughness coefficient. During high floods, monitoring of velocity can be accomplished by sampling only the maximum velocity, umax, which can be used to estimate the mean flow velocity, um, by applying the linear entropy relationship that depends on the parameter M, estimated on the basis of historically observed pairs (um, umax). In this context, this work analyzes whether a correlation between α and M holds, so that monitoring of high flows can be addressed by exploiting information from standard methods. A methodology is proposed to estimate M from α by coupling the “historical” information derived from standard methods and “new” information from measurements of umax obtained at later times. Results from four gauged river sites with different hydraulic and geometric characteristics have shown robust estimation of M based on α.
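
    One widely used form of the entropy velocity relation is Chiu's um/umax = exp(M)/(exp(M) - 1) - 1/M; the sketch below, which may differ from the paper's exact formulation, estimates M from invented historical (um, umax) pairs and then predicts um from a new umax measurement.

      # Chiu-form entropy relation (may differ from the paper's exact formulation);
      # the historical velocity pairs are invented.
      import numpy as np
      from scipy.optimize import brentq

      def phi(M):
          return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

      def estimate_M(u_mean, u_max):
          """Fit the u_mean/u_max ratio (regression through the origin) and invert phi."""
          ratio = np.sum(u_mean * u_max) / np.sum(u_max ** 2)
          return brentq(lambda M: phi(M) - ratio, 1e-3, 50.0)

      u_max_hist = np.array([1.2, 1.8, 2.5, 3.1, 3.8])    # m/s, invented historical maxima
      u_mean_hist = 0.68 * u_max_hist                     # invented historical mean velocities
      M = estimate_M(u_mean_hist, u_max_hist)
      print(round(M, 2), round(phi(M) * 4.2, 2))          # M and the predicted u_m for u_max = 4.2 m/s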

  3. Seismogram Construction to Fit the Recorded B032593c Earthquake, Japan on Observation Station Bfo, Germany

    Bagus Jaya Santosa


    In this research, the model of the earth layers between the earthquake's epicenter in Hokkaido, Japan, and the observation station at the Black Forest Observatory (BFO), Germany, is investigated. The earth model is 1-D and represents the average velocity structure. The earth model is obtained by comparing measured and synthetic seismograms in the time domain and in three components simultaneously. The synthetic seismograms are calculated with the Green's function of the Earth by MINor Integration (GEMINI) program, whose input is initially the earth models IASPEI91 and PREMAN together with the Centroid Moment Tensor (CMT) solution of the earthquake. A Butterworth low-pass filter with a corner frequency of 20 mHz is applied to both the measured and synthetic seismograms. The seismogram comparison reveals unsystematic discrepancies in the travel times and waveforms of all wave phases, namely the P, S and SS waves and the Rayleigh and Love surface waves. Resolving these discrepancies requires corrections to the earth structure, covering the crustal thickness, the gradient of βh, and the zero-order coefficients of βh and βv in the upper mantle, in order to fit the Love and Rayleigh surface waves. Further corrections to resolve the body-wave discrepancies are applied to the layers beneath the upper mantle down to a depth of 630 km, where a small change to the P- and S-wave velocity model is made. The number of oscillations, especially in the Love wave, is influenced by the crustal thickness. Good fits are obtained for the phase and amplitude of the Love wave, and also for the amplitudes of some body waves. This effect has not yet been exploited for the determination of the moment tensor.

  4. Reliability of space image recorder based on NAND flash memory

    李进; 金龙旭; 韩双丽; 李国宁; 王文华


    To address the unreliable data storage caused by bad blocks and single-event upsets in the NAND flash memory of a space camera, this paper explores a bad-block management strategy and an error correction algorithm. Firstly, a bad-block management strategy based on a parallel double-traverse mechanism was proposed by analyzing the structural and operational characteristics of NAND flash memory; the design ideas of the double-traverse mechanism were described and its effectiveness was analyzed. Then, an error correction algorithm based on the shortened Reed-Solomon codes RS(246, 240) and RS(134, 128) over the field GF(2^8) was proposed, and the encoding/decoding algorithm and corresponding circuits were given. Finally, verification experiments were carried out on the image storage platform of a prototype space multi-spectral camera. The experimental results show that the bad-block management strategy can handle bad-block events quickly and reliably, identifying a bad block within one system clock period. The error correction algorithm can correct 27 B of errors within a 2 KB page, with an encoding speed of 72.53 MB/s and a decoding speed of 54.26 MB/s. The proposed strategy effectively solves the problem of unreliable data recording in NAND flash memory.

  5. Evidence and analysis of 2012 Greenland records from spaceborne observations, a regional climate model and reanalysis data

    M. Tedesco


    A combined analysis of remote sensing observations, regional climate model (RCM) outputs and reanalysis data over the Greenland ice sheet provides evidence that multiple records were set during summer 2012. Melt extent was the largest in the satellite era (extending up to ~97% of the ice sheet) and melting lasted up to ~two months longer than the 1979–2011 mean. Model results indicate that near-surface temperature was ~3 standard deviations (σ) above the 1958–2011 mean, while surface mass balance (SMB) was ~3σ below the mean and runoff was 3.9σ above the mean over the same period. Albedo, exposure of bare ice and surface mass balance also set new records, as did the total mass balance, with summer and annual mass changes of −627 Gt and −574 Gt respectively, 2σ below the 2003–2012 mean.

    We identify persistent anticyclonic conditions over Greenland associated with anomalies in the North Atlantic Oscillation (NAO), changes in surface conditions (e.g. albedo), and pre-conditioning of surface properties from recent extreme melting as major driving mechanisms for the 2012 records. Because of self-amplifying positive feedbacks, a less positive, if not increasingly negative, SMB will likely occur should the large-scale atmospheric circulation and the induced surface characteristics observed over the past decade persist. Since the general circulation models of the Coupled Model Intercomparison Project Phase 5 (CMIP5) do not simulate the abnormal anticyclonic circulation resulting from the extremely negative NAO conditions observed over recent years, the contribution to sea level rise projected under different warming scenarios will be underestimated should the trend in NAO summer values continue.

  6. "Can You Make "Historiography" Sound More Friendly?": Towards the Construction of a Reliable and Validated History Teaching Observation Instrument

    van Hover, Stephanie; Hicks, David; Cotton, Stephen


    While the field of history education elucidates a clear and ambitious vision of high-quality history instruction, a current challenge for history educators (including teacher educators, curriculum specialists, and school-based history and social science supervisors) becomes how to illuminate and capture this when observing classrooms to research…

  7. New Interview and Observation Measures of the Broader Autism Phenotype : Description of Strategy and Reliability Findings for the Interview Measures

    Parr, Jeremy R.; De Jonge, Maretha V.; Wallace, Simon; Pickles, Andrew; Rutter, Michael L.; Le Couteur, Ann S.; van Engeland, Herman; Wittemeyer, Kerstin; Mcconachie, Helen; Roge, Bernadette; Mantoulan, Carine; Pedersen, Lennart; Isager, Torben; Poustka, Fritz; Bolte, Sven; Bolton, Patrick; Weisblatt, Emma; Green, Jonathan; Papanikolaou, Katerina; Baird, Gillian; Bailey, Anthony J.


    Clinical genetic studies confirm the broader autism phenotype (BAP) in some relatives of individuals with autism, but there are few standardized assessment measures. We developed three BAP measures (informant interview, self-report interview, and impression of interviewee observational scale) and de

  8. Validity and reliability of a tool for determining appropriateness of days of stay: an observational study in the orthopedic intensive rehabilitation facilities in Italy.

    Aida Bianco

    Full Text Available OBJECTIVES: To test the validity and reliability of a tool specifically developed for the evaluation of appropriateness in rehabilitation facilities and to assess the prevalence of appropriateness of the days of stay. METHODS: The tool underwent a process of cross-cultural translation, content validity, and test-retest validity. Two hospital-based rehabilitation wards providing intensive rehabilitation care located in the Region of Calabria, Southern Italy, were randomly selected. A review of medical records on a random sample of patients aged 18 or more was performed. RESULTS: The process of validation resulted in modifying some of the criteria used for the evaluation of appropriateness. Test-retest reliability showed that the agreement and the k statistic for the assessment of the appropriateness of days of stay were 93.4% and 0.82, respectively. A total of 371 patient days was reviewed, and 22.9% of the days of stay in the sample were judged to be inappropriate. The most frequently selected appropriateness criterion was the evaluation of patients by rehabilitation professionals for at least 3 hours on the index day (40.8%); moreover, the most frequent primary reason accounting for the inappropriate days of stay was social and/or family environment issues (34.1%). CONCLUSIONS: The findings showed that the tool used is reliable and has adequate validity to measure the extent of appropriateness of days of stay in rehabilitation facilities and that the prevalence of inappropriateness is contained in the investigated settings. Further research is needed to expand appropriateness evaluation to other rehabilitation settings, and to investigate more thoroughly internal and external causes of inappropriate use of rehabilitation services.
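
    The test-retest figures above (93.4% agreement, k = 0.82) are a raw agreement proportion and Cohen's kappa, which discounts chance agreement. A minimal sketch of that computation is given below; the ratings are synthetic placeholders, not the study's data.

```python
# Hedged sketch: percent agreement and Cohen's kappa for duplicate
# "appropriate"/"inappropriate" ratings of days of stay. Synthetic ratings only.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n                    # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))  # chance agreement
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

first_pass  = ["appropriate"] * 75 + ["inappropriate"] * 25
second_pass = ["appropriate"] * 70 + ["inappropriate"] * 25 + ["appropriate"] * 5

agreement, kappa = cohens_kappa(first_pass, second_pass)
print(f"agreement = {agreement:.1%}, kappa = {kappa:.2f}")             # 90.0%, 0.73
```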

  9. Creation of Spatially and Temporally Consistent Discharge Records in Global Basins through the Assimilation of SWOT Observations

    Fisher, Colby; Pan, Ming; Wood, Eric


    The Surface Water and Ocean Topography (SWOT) mission is designed to provide global estimates of water surface elevation, slope and discharge from space. This mission will provide increased spatial and temporal coverage compared to current altimeters. However, the temporal sampling is less frequent than current in-situ discharge observations. Thus, there is a need for methods that can utilize spatially and temporally inconsistent observations of discharge to reconstruct fields that are consistent in time and space. Using the Inverse Streamflow Routing (ISR) model of Pan and Wood [2013], discharge records are derived for a set of large river basins using data assimilation with a fixed-interval Kalman smoother. ISR utilizes observed (or future SWOT-retrieved) discharge values at discrete (gauge) locations to generate spatially and temporally distributed fields of runoff by inverting a linear routing model. These runoff fields are then routed to produce river discharge estimates throughout the basin. Previous work has shown that the ISR assimilation method can be used to effectively reproduce the spatial and temporal dynamics of discharge within the Ohio River basin; however, this performance was strongly impacted by the spatial and temporal availability of discharge observations (particularly for the case of assimilating theoretical SWOT observations). In this study, we further investigate the sensitivity of the ISR model to data availability by applying it to a number of other basins with different geometries and crossing patterns for the future SWOT orbit. For each basin, three synthetic experiments were carried out to evaluate assimilating future SWOT observations: (1) assimilating in-situ gauges only, (2) using in-situ gauges and SWOT-retrieved "gauges", and (3) using SWOT-retrieved "gauges" only. Results show that the model performance varies significantly when using temporally and spatially sparse data, which will be
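
    The fixed-interval Kalman smoother referred to above is the standard Rauch–Tung–Striebel (RTS) smoother for a linear-Gaussian state-space model. The sketch below is a generic, minimal implementation of that smoother, not the authors' ISR code: the matrices A, H, Q, R and the toy observation pattern (one gauge value every tenth time step, mimicking sparse SWOT-like sampling) are all placeholders.

```python
# Hedged sketch: a generic fixed-interval (RTS) Kalman smoother for
# x_k = A x_{k-1} + w_k,  y_k = H x_k + v_k.  Not the authors' ISR code.
import numpy as np

def rts_smoother(y, A, H, Q, R, x0, P0):
    n, m = len(y), len(x0)
    xp, Pp = np.zeros((n, m)), np.zeros((n, m, m))     # predicted state / covariance
    xf, Pf = np.zeros((n, m)), np.zeros((n, m, m))     # filtered state / covariance
    x, P = x0, P0
    for k in range(n):                                 # forward Kalman filter
        x, P = A @ x, A @ P @ A.T + Q
        xp[k], Pp[k] = x, P
        if not np.any(np.isnan(y[k])):                 # update only where an observation exists
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x, P = x + K @ (y[k] - H @ x), P - K @ H @ P
        xf[k], Pf[k] = x, P
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(n - 2, -1, -1):                     # backward (smoothing) pass
        C = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs, Ps

# Toy usage: a scalar discharge-like state observed only every 10th step.
rng = np.random.default_rng(0)
truth = 50.0 + np.cumsum(rng.normal(size=200))
y = np.full((200, 1), np.nan)
y[::10, 0] = truth[::10] + rng.normal(scale=2.0, size=20)
smoothed, _ = rts_smoother(y, np.eye(1), np.eye(1), np.eye(1), 4 * np.eye(1),
                           np.zeros(1), 100 * np.eye(1))
```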

  10. The Individualized Classroom Assessment Scoring System (inCLASS): Preliminary Reliability and Validity of a System for Observing Preschoolers' Competence in Classroom Interactions.

    Downer, Jason T; Booren, Leslie M; Lima, Olivia K; Luckner, Amy E; Pianta, Robert C


    This paper introduces the Individualized Classroom Assessment Scoring System (inCLASS), an observation tool that targets children's interactions in preschool classrooms with teachers, peers, and tasks. In particular, initial evidence is reported of the extent to which the inCLASS meets the following psychometric criteria: inter-rater reliability, normal distributions and adequate range, construct validity, and criterion-related validity. These initial findings suggest that the inCLASS has the potential to provide an authentic, contextualized assessment of young children's classroom behaviors. Future directions for research with the inCLASS are discussed.

  11. Temporal trends in West Antarctic surface mass balance: do large scale modes of climate contribute to observed records?

    Carpenter, M.; Rupper, S.; Williams, J.; Burgener, L. K.; Koenig, L.; Forster, R. R.; Koutnik, M. R.; Skinner, R.; Miege, C.; Brucker, L.


    Western Antarctica has been warming significantly at a rate of 0.17 ± 0.06 degrees C per decade from 1957 to 2006, with the strongest warming in the winter and spring months. Annual accumulation rates in the central WAIS have been decreasing over the same time period, in spite of rising temperatures. This is somewhat unexpected, as saturation vapor pressure increases with increasing temperature. One possible explanation of this observation could be related to synoptic-scale modes of climate, principally the Southern Annular Mode (SAM) and the El Niño Southern Oscillation (ENSO). These modes of climate are known to modify the track and strength of storms seasonally, but the true extent of the influence of these modes on accumulation in the central WAIS is not well known. This is due, in part, to sparse instrumental weather data, which makes it difficult to understand the spatial and temporal variability of the central WAIS surface mass balance (SMB). Firn cores provide an excellent temporal SMB record that can fill this data gap, but are spatially limited. The spatial limitation of individual cores can be remedied by creating a network of firn cores over a region, which overcomes small-scale variability and provides a regional representation of SMB over the temporal length of the firn-core records. The 2011 Satellite Era Accumulation Traverse (SEAT) adds nine new firn cores (20 m deep, spanning 2010–1981) to existing cores within the same region of the central WAIS to improve the spatial network of regional SMB measurements. SMB is reconstructed from the firn cores and compared to simulated accumulation from five climate models and reanalysis datasets. The combination of firn-core and simulated records is used to investigate whether SAM and ENSO significantly influence SMB in the central WAIS. The new suite of cores shows a statistically significant negative trend in accumulation during the past three decades, which is consistent with results from the previous cores.
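
    "Statistically significant negative trend" above refers to a linear trend test on the annual accumulation series. A minimal sketch of such a test is shown below; the numbers are synthetic placeholders, not the SEAT-2011 core values, and the study's actual significance test may differ.

```python
# Hedged sketch: linear trend and p-value for an annual accumulation record.
# Values are synthetic placeholders, not the SEAT-2011 firn-core data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
years = np.arange(1981, 2011)                                                 # span of the 20 m cores
accum = 0.40 - 0.002 * (years - years[0]) + rng.normal(0, 0.03, years.size)   # m w.e. per year

fit = linregress(years, accum)
print(f"trend = {10 * fit.slope:+.3f} m w.e. per decade, p = {fit.pvalue:.3f}")
# A p-value below the chosen level (commonly 0.05) would be reported as a
# statistically significant trend.
```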

  12. Reliability and Reproducibility of Advanced ECG Parameters in Month-to-Month and Year-to-Year Recordings in Healthy Subjects

    Starc, Vito; Abughazaleh, Ahmed S.; Schlegel, Todd T.


    Advanced resting ECG parameters such as the spatial mean QRS-T angle and the QT variability index (QTVI) have important diagnostic and prognostic utility, but their reliability and reproducibility (R&R) are not well characterized. We hypothesized that the spatial QRS-T angle would have relatively higher R&R than parameters such as QTVI that are more responsive to transient changes in the autonomic nervous system. The R&R of several conventional and advanced ECG parameters were studied via intraclass correlation coefficients (ICCs) and coefficients of variation (CVs) in: (1) 15 supine healthy subjects from month-to-month; (2) 27 supine healthy subjects from year-to-year; and (3) 25 subjects after transition from the supine to the seated posture. As hypothesized, for the spatial mean QRS-T angle and many conventional ECG parameters, ICCs were higher and CVs lower than for QTVI, suggesting that the former parameters are more reliable and reproducible.
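
    The R&R statistics named above are the intraclass correlation coefficient and the coefficient of variation of repeated measurements. The sketch below computes a one-way ICC(1,1) and a mean within-subject CV for a synthetic subjects-by-visits matrix; the study may have used a different ICC form, and the numbers are placeholders.

```python
# Hedged sketch: one-way ICC(1,1) and mean within-subject CV for repeated ECG
# parameter measurements (subjects x visits). Synthetic data, illustrative only.
import numpy as np

def icc_1_1(x):
    """x: (n_subjects, k_repeats); one-way random-effects ICC(1,1)."""
    n, k = x.shape
    subj_means = x.mean(axis=1)
    msb = k * np.sum((subj_means - x.mean()) ** 2) / (n - 1)         # between-subject MS
    msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))     # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

def mean_cv(x):
    """Mean within-subject coefficient of variation (SD / mean)."""
    return np.mean(x.std(axis=1, ddof=1) / x.mean(axis=1))

rng = np.random.default_rng(2)
true_angle = rng.normal(40, 10, size=(15, 1)).clip(min=15)           # 15 subjects
qrs_t_angle = true_angle + rng.normal(0, 3, size=(15, 2))            # 2 monthly visits
print(f"ICC(1,1) = {icc_1_1(qrs_t_angle):.2f}, mean CV = {mean_cv(qrs_t_angle):.1%}")
```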

  13. Assessment of the welfare of dairy cattle using animal-based measurements: direct observations and investigation of farm records.

    Whay, H R; Main, D C J; Green, L E; Webster, A J F


    A protocol was developed by consultation with experts on the welfare of cattle to use direct observations of cattle and an examination of farm records to assess welfare. Fifty-three dairy farms in England were visited and assessed during the winter of 2000/01. The findings were compiled and the results of the welfare measurements were examined by 50 experts who indicated at what level they considered that improvement was required. More than 75 per cent of them considered that 32 of the 53 farms needed to take action to reduce the incidence of mastitis, and that at least 42 of the farms needed to take action to reduce the prevalence of lameness, overgrown claws, swollen and ulcerated hocks, and injuries from the environment.

  14. [Reliability of the results of the ultrasonic hemodynamic recording (Doppler effect) in the diagnosis of cerebral ischemia of carotid origin].

    Perrin, G; Goutelle, A; Pierluca, P; Chacornac, R; Allegre, G E


    The Doppler ultrasound diagnosis of carotid artery stenosis (asymmetrical systolic and diastolic flows; elevated resistance index: ratio of flow pulse amplitude to systolic and diastolic values; flow reversal in the ophthalmic artery) is compared, in 52 patients, to the clinical, angiographic (40 patients) and surgical findings and to the peroperative measurement of intra-arterial pressure and flow (30 patients). Its reliability is demonstrated as a guide for angiographic exploration and for postoperative follow-up, but it is restricted to great vessels (the cervical carotid artery) and is unable to detect ulcerated plaques without stenosis.

  15. Seismogenic Coupling at Convergent Margins - Geophysical Observations from the South American Subduction Zone and the Alpine Rock Record

    Oncken, O.


    Convergent continental margins are the Earth's principal locus of important earthquake hazards, with nearly all interplate megathrust earthquakes (M>8) occurring in the seismogenic coupling zone between the converging plates. Despite the key importance of this zone, the processes that shape it are poorly understood. This is underscored by a number of novel observations attributed to processes in the interface zone that are attracting increasing attention: silent slip events, non-volcanic tremors, afterslip, locked patches embedded in a creeping environment, etc. We here compare the rock record from a field study with recent results from two major geophysical experiments (ANCORP and TIPTEQ) that have imaged the South Chilean subduction zone at the site of the largest historically recorded earthquake (Valdivia, 1960; Mw = 9.5) and the plate boundary in Northern Chile, where a major seismic event is expected in the near future (Iquique segment). The reflection seismic data exhibit well-defined changes of reflectivity and Vp/Vs ratio along the plate interface that can be correlated with different parts of the coupling zone as well as with changes during the seismic cycle. Observations suggest an important role of the hydraulic system. The rock record from the exhumed Early Tertiary seismogenic coupling zone of the European Alps provides indications for the mechanisms and processes responsible for the geophysical images. Fabric formation and metamorphism in a largely preserved subduction channel chiefly record the deformation conditions of the pre-collisional setting along the plate interface. We identify an unstable slip domain from pseudotachylytes occurring in the temperature range between 200-300°C. This zone coincides with a domain of intense veining in the subduction mélange with mineral growth into open cavities, indicating fast, possibly seismic, rupture. Evidence for transient near-lithostatic fluid pressure as well as brittle fractures competing with mylonitic shear

  16. MEMS reliability

    Hartzell, Allyson L; Shea, Herbert R


    This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.

  17. Development and reliability of the explicit professional oral communication observation tool to quantify the use of non-technical skills in healthcare.

    Kemper, Peter F; van Noord, Inge; de Bruijne, Martine; Knol, Dirk L; Wagner, Cordula; van Dyck, Cathy


    A lack of non-technical skills is increasingly recognised as an important underlying cause of adverse events in healthcare. The nature and number of things professionals communicate to each other can be perceived as a product of their use of non-technical skills. This paper describes the development and reliability of an instrument to measure and quantify the use of non-technical skills by direct observations of explicit professional oral communication (EPOC) in the clinical situation. In an iterative process we translated, tested and refined an existing checklist from the aviation industry, called self, human interaction, aircraft, procedures and environment, in the context of healthcare, notably emergency departments (ED) and intensive care units (ICU). The EPOC comprises six dimensions: assertiveness; working with others; task-oriented leadership; people-oriented leadership; situational awareness; and planning and anticipation. Each dimension is specified in several concrete items reflecting verbal behaviours. The EPOC was evaluated in four ED and six ICU. In the ED and ICU, respectively, 378 and 1144 individual and 51 and 68 contemporaneous observations of individual staff members were conducted. All EPOC dimensions occur frequently, apart from assertiveness, which was hardly observed. Intraclass correlations for the overall EPOC score ranged between 0.85 and 0.91 and for underlying EPOC dimensions between 0.53 and 0.95. The EPOC is a new instrument for evaluating the use of non-technical skills in healthcare, which is reliable in two highly different settings. By quantifying professional behaviour the instrument facilitates measurement of behavioural change over time. The results suggest that EPOC can also be translated to other settings.

  18. Aura Microwave Limb Sounder Observations of Dynamics and Transport During the Record-Breaking 2009 Arctic Stratospheric Major Warming

    Manney, Gloria L.; Schwartz, Michael J.; Krueger, Kirstin; Santee, Michelle L.; Pawson, Steven; Lee, Jae N.; Daffer, William H.; Fuller, Ryan A.; Livesey, Nathaniel J.


    A major stratospheric sudden warming (SSW) in January 2009 was the strongest and most prolonged on record. Aura Microwave Limb Sounder (MLS) observations are used to provide an overview of dynamics and transport during the 2009 SSW, and to compare with the intense, long-lasting SSW in January 2006. The Arctic polar vortex split during the 2009 SSW, whereas the 2006 SSW was a vortex displacement event. Winds reversed to easterly more rapidly and reverted to westerly more slowly in 2009 than in 2006. More mixing of trace gases out of the vortex during the decay of the vortex fragments, and less before the fulfillment of major SSW criteria, was seen in 2009 than in 2006; persistent well-defined fragments of vortex and anticyclone air were more prevalent in 2009. The 2009 SSW had a more profound impact on the lower stratosphere than any previously observed SSW, with no significant recovery of the vortex in that region. The stratopause breakdown and subsequent reformation at very high altitude, accompanied by enhanced descent into a rapidly strengthening upper stratospheric vortex, were similar in 2009 and 2006. Many differences between 2006 and 2009 appear to be related to the different character of the SSWs in the two years.

  19. Observations of Near-Surface Scattering with a Dense Profile of Shots Recorded by an Underground Array

    Pavlis, G. L.; Atterholt, J.; Bowden, D. C.; Caton, R.; Gribler, G.; Liberty, L. M.; Mandic, V.; Meyers, P.; Prestegard, T.; Tsai, V. C.


    We operated a combined passive-active array experiment at the Sanford Underground Research Facility (SURF) in the Black Hills of South Dakota. SURF is located at the former Homestake Mine, which is the deepest underground mine in North America. The passive array has 24 broadband stations with 9 surface stations and 15 underground sites deployed in mine drifts to depths up to 1478 m. We conducted a series of three active-source experiments: (1) a land streamer and weight drop system produced over 4300 source points at 4 m intervals on 5 profiles in and near the city of Lead, SD; (2) a set of nine-component surveys conducted at the 1250 and 1478 m levels of the mine; and (3) an underground land streamer survey on the 518 m level. The surface shot data were well recorded whenever the shot point was within approximately 1500 m of most receivers. The results were compromised by a failure of a precision, absolute timing system we assembled for recording shot times. We developed a two-stage, multichannel cross-correlation method to graphically edit and provide precise timing for these shots that assumes the travel time to each receiver is constant within a specified averaging distance. The resulting redundancy was exploited in a least squares inversion similar to surface-consistent static estimations. The data show compelling evidence for extreme variation in source coupling. We observe laterally continuous areas where the dominant direct wave signal recorded underground shifts between P and S propagation modes on scale lengths of the order of 100 m. We hypothesize that the along-line variations are an indication of near-surface scattering created by a combination of heterogeneity in the weathered layer and human-generated heterogeneity from mining. The land streamer data provide estimates of near-surface Vp from first-break picks and Vs from a joint inversion of Rayleigh wave dispersion curves and H/V ratios. For this meeting we expect to combine the data from the near
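
    The core operation in the two-stage timing repair described above is estimating a relative lag between traces by cross-correlation. The sketch below shows that basic operation in generic form; it is not the authors' code, and the wavelet, sampling interval and shift are invented for illustration.

```python
# Hedged sketch: lag estimation between two traces by cross-correlation,
# the building block of a multichannel shot-time correction. Illustrative only.
import numpy as np

def xcorr_lag(a, b, dt):
    """Lag (s) by which trace b must be delayed to best align with trace a."""
    c = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return (np.argmax(c) - (len(b) - 1)) * dt

dt = 0.002                                        # assumed 2 ms sampling
t = np.arange(0, 1, dt)
reference = np.exp(-((t - 0.30) / 0.02) ** 2)     # synthetic wavelet at 0.30 s
delayed = np.roll(reference, 25)                  # same wavelet, 25 samples later
print(f"estimated lag = {1000 * xcorr_lag(delayed, reference, dt):.1f} ms")  # ~50 ms
```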

  20. How Reliable Is Structure from Motion (SfM) over Time and between Observers? A Case Study Using Coral Reef Bommies

    Vincent Raoult


    Full Text Available Recent efforts to monitor the health of coral reefs have highlighted the benefits of using structure from motion-based assessments, and despite increasing use of this technique in ecology and geomorphology, no study has attempted to quantify the precision of this technique over time and across different observers. This study determined whether 3D models of an ecologically relevant reef structure, the coral bommie, could be constructed using structure from motion and be reliably used to measure bommie volume and surface area between different observers and over time. We also determined whether the number of images used to construct a model had an impact on the final measurements. Three-dimensional models were constructed of over twenty coral bommies from Heron Island, a coral cay at the southern end of the Great Barrier Reef. This study did not detect any significant observer effect, and there were no significant differences in measurements over four sampling days. The mean measurement error across all bommies and between observers was 15 ± 2% for volume measurements and 12 ± 1% for surface area measurements. There was no relationship between the number of pictures taken for a reconstruction and the measurements from that model; however, more photographs were necessary to reconstruct complete coral bommies larger than 1 m3. These results suggest that structure from motion is a viable tool for ongoing monitoring of ecologically significant coral reefs, especially to establish effects of disturbances, provided the measurement error is considered.

  1. Software reliability

    Bendell, A


    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.

  2. Arctic warming, moisture increase and circulation changes observed in the Ny-Ålesund homogenized radiosonde record

    Maturilli, Marion; Kayser, Markus


    Compared to global warming, the feedback mechanisms of Arctic Amplification lead to an increase of surface temperature in the Arctic by a factor of two. Yet, the vertical structure of Arctic warming and its resulting radiative feedbacks are poorly understood. Here, we focus on the analysis of the atmospheric column above Ny-Ålesund (78.9° N, 11.9° E), Svalbard. At Ny-Ålesund, radiosondes have been launched on a daily basis since 1993 in support of synoptic observations. The radiosonde measurements obtained from 1993 to 2014 have been homogenized, accounting for instrumentation discontinuities and known errors in the manufacturer-provided profiles. From the homogenized data record, a first upper-air climatology of wind, humidity and temperature above Ny-Ålesund is presented, forming the background for the analysis of changes detected during the 22-year period. Particularly during the winter season, a strong increase in atmospheric humidity and temperature is observed, with a significant warming of the free troposphere in January and February of up to 3 K per decade. This winter warming is even more pronounced in the boundary layer below 1 km, presumably amplified by local conditions including e.g. orographic effects or the boundary layer capping inversion. The largest contribution to the increasing atmospheric water vapour column also originates from the lowermost 2 km of the atmosphere, where specific humidity inversions are frequently observed. Yet, no increase in the water vapour contribution by humidity inversions is detected. Instead, we find an increase in the humidity content of the large-scale background humidity profiles to be responsible for the observed increase in winter integrated water vapour. The observed difference in the frequency occurrence of wind directions in the free troposphere between the first and second half of the 22-year period implies that the large-scale synoptic flow over Svalbard has changed over the years. During the winter season, the

  3. Sunspot numbers based on historic records in the 1610s: Early telescopic observations by Simon Marius and others

    Neuhäuser, R.; Neuhäuser, D. L.


    , Tanner, Perovius, Argoli, and Wely are not mentioned as observers for 1611, 1612, 1618, 1620, and 1621 in Hoyt & Schatten. Marius and Schmidnerus are among the earliest datable telescopic sunspot observers (1611 Aug 3, Julian), namely after Harriot, the two Fabricius (father and son), Scheiner, and Cysat. Sunspot records by Malapert from 1618 to 1621 show that the last low-latitude spot was seen in Dec 1620, while the first high-latitude spots were noticed in June and Oct 1620, so that the Schwabe cycle turnover (minimum) took place around that time, which is also consistent with the sunspot trend mentioned by Marius and with naked-eye spots and likely true aurorae. We consider discrepancies in the Hoyt & Schatten (1998) systematics, we compile the active day fractions for the 1610s, and we critically discuss very recent publications on Marius, which also cover the subsequent Maunder Minimum. Our work should be seen as a call to go back to the historical sources.

  4. A large web-based observer reliability study of early ischaemic signs on computed tomography. The Acute Cerebral CT Evaluation of Stroke Study (ACCESS).

    Joanna M Wardlaw

    Full Text Available BACKGROUND: Early signs of ischaemic stroke on computerised tomography (CT) scanning are subtle, but CT is the most widely available diagnostic test for stroke. Scoring methods that code for the extent of brain ischaemia may improve stroke diagnosis and quantification of the impact of ischaemia. METHODOLOGY AND PRINCIPAL FINDINGS: We showed CT scans from patients with acute ischaemic stroke (n = 32, with different patient characteristics and ischaemia signs) to doctors in stroke-related specialties world-wide over the web. CT scans were shown twice, randomly and blindly. Observers entered their scan readings, including early ischaemic signs by three scoring methods, into the web database. We compared observers' scorings to a reference standard neuroradiologist using area under receiver operator characteristic curve (AUC) analysis, Cronbach's alpha and logistic regression to determine the effect of scales, patient, scan and observer variables on detection of early ischaemic changes. Amongst 258 readers representing 33 nationalities and six specialties, the AUCs comparing readers with the reference standard detection of ischaemic signs were similar for all scales and both occasions. Being a neuroradiologist, slower scan reading, more pronounced ischaemic signs and later time to CT all improved detection of early ischaemic signs and agreement on the rating scales. Scan quality, stroke severity and number of years of training did not affect agreement. CONCLUSIONS: Large-scale observer reliability studies are possible using web-based tools and inform routine practice. Slower scan reading and use of CT infarct rating scales improve detection of acute ischaemic signs and should be encouraged to improve stroke diagnosis.
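
    The AUC analysis referred to above compares each observer's ratings against the reference standard. A minimal, generic way to compute an AUC is via the rank-sum (Mann–Whitney) identity, sketched below with invented ratings; it is not the ACCESS analysis code and ignores tied scores.

```python
# Hedged sketch: AUC via the rank-sum identity, i.e. the probability that a
# randomly chosen positive case receives a higher score than a negative one.
# The scores and reference labels below are illustrative, not ACCESS data.
import numpy as np

def auc(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    ranks = scores.argsort().argsort() + 1.0          # 1-based ranks (ties not handled)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

reader_confidence = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]  # observer's ischaemia rating
reference_signs   = [1,   1,   0,    1,   0,    0,   1,   0]    # neuroradiologist standard
print(f"AUC = {auc(reader_confidence, reference_signs):.2f}")   # 0.75 for this toy data
```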

  5. A Large Web-Based Observer Reliability Study of Early Ischaemic Signs on Computed Tomography. The Acute Cerebral CT Evaluation of Stroke Study (ACCESS)

    Wardlaw, Joanna M.; von Kummer, Rüdiger; Farrall, Andrew J.; Chappell, Francesca M.; Hill, Michael; Perry, David


    Background Early signs of ischaemic stroke on computerised tomography (CT) scanning are subtle but CT is the most widely available diagnostic test for stroke. Scoring methods that code for the extent of brain ischaemia may improve stroke diagnosis and quantification of the impact of ischaemia. Methodology and Principal Findings We showed CT scans from patients with acute ischaemic stroke (n = 32, with different patient characteristics and ischaemia signs) to doctors in stroke-related specialties world-wide over the web. CT scans were shown twice, randomly and blindly. Observers entered their scan readings, including early ischaemic signs by three scoring methods, into the web database. We compared observers' scorings to a reference standard neuroradiologist using area under receiver operator characteristic curve (AUC) analysis, Cronbach's alpha and logistic regression to determine the effect of scales, patient, scan and observer variables on detection of early ischaemic changes. Amongst 258 readers representing 33 nationalities and six specialties, the AUCs comparing readers with the reference standard detection of ischaemic signs were similar for all scales and both occasions. Being a neuroradiologist, slower scan reading, more pronounced ischaemic signs and later time to CT all improved detection of early ischaemic signs and agreement on the rating scales. Scan quality, stroke severity and number of years of training did not affect agreement. Conclusions Large-scale observer reliability studies are possible using web-based tools and inform routine practice. Slower scan reading and use of CT infarct rating scales improve detection of acute ischaemic signs and should be encouraged to improve stroke diagnosis. PMID:21209901

  6. Records via probability theory

    Ahsanullah, Mohammad


    Many statisticians, actuarial mathematicians, reliability engineers, meteorologists, hydrologists, economists, and business and sport analysts deal with records, which play important roles in various fields of statistics and its applications. This book enables readers to check their level of understanding of the theory of record values. We give the basic formulae which are most important in the theory and present many examples which illustrate the theoretical statements. For a beginner in record statistics, as well as for graduate students, only basic knowledge of the subject is needed to study the book; a more advanced reader can use the book to polish his or her knowledge. An updated bibliography, which will help readers enrich their theoretical knowledge and widen their experience of dealing with ordered observations, is also given in the book.

  7. A ten-year global record of absorbing aerosols above clouds from OMI's near-UV observations

    Jethva, Hiren; Torres, Omar; Ahn, Changwoo


    Aerosol-cloud interaction continues to be one of the leading uncertain components of climate models, primarily due to the lack of adequate knowledge of the complex microphysical and radiative processes associated with the aerosol-cloud system. Situations in which aerosols and clouds are found in the same atmospheric column, for instance when light-absorbing aerosols such as biomass-burning carbonaceous particles or wind-blown dust overlay low-level cloud decks, are commonly found over several regions of the world. Contrary to the cloud-free scenario over a dark surface, for which aerosols are known to produce a net cooling effect (negative radiative forcing) on climate, the overlapping situation of absorbing aerosols over cloud can potentially exert a significant level of atmospheric absorption and produce a positive radiative forcing at the top of the atmosphere. The magnitude of the direct radiative effects of aerosols above cloud depends directly on the aerosol loading, the microphysical-optical properties of the aerosol layer and the underlying cloud deck, and the geometric cloud fraction. We help address this problem by introducing a novel product of the optical depth of absorbing aerosols above clouds retrieved from near-UV observations made by the Ozone Monitoring Instrument (OMI) on board NASA's Aura platform. The presence of absorbing aerosols above cloud reduces the upwelling radiation reflected by the cloud and produces a strong 'color ratio' effect in the near-UV region, which can be unambiguously detected in the OMI measurements. Based on this effect, the OMACA algorithm retrieves the optical depths of aerosols and clouds simultaneously under a prescribed state of the atmosphere. The algorithm architecture and results from a ten-year global record, including a global climatology of frequency of occurrence and above-cloud aerosol optical depth, and a discussion of related future field campaigns are presented.

  8. Preservation of benthic foraminifera and reliability of deep-sea temperature records: Importance of sedimentation rates, lithology, and the need to examine test wall structure

    Sexton, Philip F.; Wilson, Paul A.


    Preservation of planktic foraminiferal calcite has received widespread attention in recent years, but the taphonomy of benthic foraminiferal calcite and its influence on the deep-sea palaeotemperature record have gone comparatively unreported. Numerical modeling indicates that the carbonate recrystallization histories of deep-sea sections are dominated by events in their early burial history, meaning that the degree of exchange between sediments and pore fluids during the early postburial phase holds the key to determining the palaeotemperature significance of diagenetic alteration of benthic foraminifera. Postburial sedimentation rate and lithology are likely to be important determinants of the paleoceanographic significance of this sediment-pore fluid interaction. Here we report an investigation of the impact of extreme change in sedimentation rate (a prolonged and widespread Upper Cretaceous hiatus in the North Atlantic Ocean) on the preservation and δ18O of benthic foraminifera of Middle Cretaceous age (nannofossil zone NC10, uppermost Albian/lowermost Cenomanian, ˜99 Ma ago) from multiple drill sites. At sites where this hiatus immediately overlies NC10, benthic foraminifera appear to display at least moderate preservation of the whole test. However, on closer inspection, these tests are shown to be extremely poorly preserved internally and yield δ18O values substantially higher than those from contemporaneous better preserved benthic foraminifera at sites without an immediately overlying hiatus. These high δ18O values are interpreted to indicate alteration close to the seafloor in cooler waters during the Late Cretaceous hiatus. Intersite differences in lithology modulate the diagenetic impact of this extreme change in sedimentation rate. Our results highlight the importance of thorough examination of benthic foraminiferal wall structures and lend support to the view that sedimentation rate and lithology are key factors controlling the paleoceanographic

  9. Reliability Engineering

    Lazzaroni, Massimo


    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, puts the reliability concept in evidence from the experimental point of view. In the ICT context, the failure rate for a given system can be

  10. Computing continuous record of discharge with quantified uncertainty using index velocity observations: A probabilistic machine learning approach

    Farahmand, Touraj; Hamilton, Stuart


    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs). In general, the index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate and recommended when more than one discharge can be measured for a specific stage, such as in backwater and unsteady flow conditions caused by, but not limited to, the following: stream confluences, streams flowing into lakes or reservoirs, tide-affected streams, regulated streamflows (dams or control structures), or streams affected by meteorological forcing, such as strong prevailing winds. In existing index velocity modeling techniques, two models (ratings) are required: an index velocity model and a stage-area model. The outputs from each of these models, mean channel velocity (Vm) and cross-sectional area (A), are then multiplied together to compute a discharge. Mean channel velocity (Vm) can generally be determined by a multivariate parametric regression model, such as linear regression in the simplest case. The main challenges in the existing index velocity modeling techniques are: (1) preprocessing and QA/QC of continuous index velocity data and synchronizing them with discharge measurements; (2) a nonlinear relationship between mean velocity and index velocity, which is not uncommon at monitoring locations; (3) model exploration and analysis in order to find the optimal regression model predictor(s) and model type (linear vs. nonlinear and, if nonlinear, the number of parameters); (4) model changes caused by dynamical changes in the environment (geomorphic, biological) over time; (5) deployment of the final model into the Data Management Systems (DMS) for real-time discharge calculation; and (6) objective estimation of uncertainty caused by field measurement errors, structural uncertainty, parameter uncertainty, and continuous sensor data
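
    The two-rating structure described above (an index-velocity rating for Vm and a stage-area rating for A, multiplied to give Q = Vm·A) can be illustrated with a minimal sketch. The calibration values, the linear form of the index-velocity rating and the quadratic stage-area rating below are all assumptions for illustration; operational ratings and their uncertainty treatment are site-specific.

```python
# Hedged sketch of the basic index velocity method: fit Vm = a + b*Vi and a
# stage-area rating from calibration gaugings, then compute Q = Vm * A
# continuously from the ADVM index velocity Vi and the stage h. Illustrative only.
import numpy as np

# Calibration measurements (invented): index velocity Vi (m/s), measured mean
# channel velocity Vm (m/s), stage h (m), measured cross-sectional area A (m^2).
vi_cal = np.array([0.20, 0.35, 0.50, 0.72, 0.90, 1.10])
vm_cal = np.array([0.18, 0.30, 0.44, 0.63, 0.80, 0.97])
h_cal  = np.array([1.2, 1.5, 1.8, 2.2, 2.6, 3.0])
a_cal  = np.array([24.0, 31.0, 38.5, 49.0, 60.0, 71.5])

b, a = np.polyfit(vi_cal, vm_cal, 1)          # index-velocity rating: Vm = a + b*Vi
area_poly = np.polyfit(h_cal, a_cal, 2)       # stage-area rating (assumed quadratic)

def discharge(vi, h):
    vm = a + b * vi                           # mean channel velocity from index velocity
    area = np.polyval(area_poly, h)           # cross-sectional area from stage
    return vm * area                          # Q = Vm * A  (m^3/s)

print(f"Q = {discharge(0.85, 2.5):.1f} m^3/s")  # one point of the continuous record
```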

  11. Mesozoic Coleopteran Faunas from Argentina: Geological Context, Diversity, Taphonomic Observations, and Comparison with Other Fossil Insect Records

    María Belén Lara


    Full Text Available The order Coleoptera is the most diversified group of the Class Insecta and is the largest group of the Animal Kingdom. This contribution reviews the Mesozoic insects, and especially the coleopteran records, from Argentina, based on bibliographical and unpublished materials (86 described species, 526 collected specimens). The material came from different geological units ranging from the late Middle Triassic to the Late Triassic (Bermejo, Cuyo, and Malargüe basins) to the Middle-Late Jurassic and Early Cretaceous (Deseado Massif, Cañadón Asfalto, and San Luís Basin). The coleopteran record is composed of 29 described species with 262 collected specimens (isolated elytra), mainly represented by Triassic species and with only four specimens recorded in Jurassic units, all of them currently unpublished. These fossil coleopterans provide fundamental information about the evolution of insects in the Southern Hemisphere and confirm the Triassic Argentinean insect deposits to be among the most important in the world.

  12. Inter- and intra-specific diurnal habitat selection of zooplankton during the spring bloom observed by Video Plankton Recorder

    Sainmont, Julie; Gislason, Astthor; Heuschele, Jan


    Recorder (VPR), a tool that allows mapping of vertical zooplankton distributions with a far greater spatial resolution than conventional zooplankton nets. The study took place over a full day–night cycle in Disko Bay, Greenland, during the peak of the phytoplankton spring bloom. The sampling revealed...

  13. Usage of documented pre-hospital observations in secondary care: a questionnaire study and retrospective comparison of records.

    Knutsen, Geir O; Fredriksen, Knut


    The patient handover is important for the safe transition from the pre-hospital setting to secondary care. The loss of critical information about the pre-hospital phase may impact upon the clinical course of the patient. University Hospital Emergency Care registrars answered a questionnaire about how they perceive clinical documentation from the ambulance services. We also reviewed patient records retrospectively, to investigate to what extent eight selected parameters were transferred correctly to hospital records by clinicians. Only parameters outside the normal range were selected. The registrars preferred a verbal handover with hand-written pre-hospital reports as the combined source of clinical information. Scanned report forms were infrequently used. Information from other doctors was perceived as more important than the information from ambulance crews. Less than half of the selected parameters in pre-hospital notes were transferred to hospital records, even for parameters regarded as important by the registrars. Abnormal vital signs were not transferred as often as mechanism of injury, medication administered and immobilisation of trauma patients. Data on pre-hospital abnormal vital signs are frequently not transferred to the hospital admission notes. This information loss may lead to suboptimal care.

  14. Socially-induced placebo analgesia: a comparison of a pre-recorded versus live face-to-face observation

    Hunter, T.; Siess, F.; Colloca, L.


    Background Recently, it has been shown that live, face-to-face social observation induces marked placebo analgesia. Despite the phenomenal growth of video sharing platforms, the potential analgesic effects of video based social observation are largely unknown. This study compared video based and live social observation induced placebo analgesia and whether there was a similar relationship between analgesic responses and empathy traits for both conditions. Methods Here we compared placebo analgesia in four groups: social observation through a video (SOV Group), social observation in person (SOP Group), verbal suggestion alone (VS Group) and a natural history group (NH Group). The SOV and SOP groups underwent a placebo treatment and painful stimuli following respectively a video based and live observation of a demonstrator showing analgesic effects when the painful stimuli were paired to a green light but not a red light. The VS group received painful stimuli after they had been verbally instructed to expect less pain after the green light. The NH group received painful stimuli, but was told nothing about the meaning of the lights. Individual pain reports and empathy traits were measured. Results We found that video based observation induced substantial placebo analgesic responses that were of similar magnitude to live observation. Notably, the analgesic scores were strongly correlated with empathetic concern in the live observation group but not in the video replay group. Conclusions These findings add evidence that placebo analgesia can be induced by social observation and that empathy interacts with these effects in a context-dependent manner. PMID:24347563

  15. Observation

    Patell, Hilla


    In order to achieve the goal of observation, preparation of the adult, the observer, is necessary. This preparation, says Hilla Patell, requires us to "have an appreciation of the significance of the child's spontaneous activities and a more thorough understanding of the child's needs." She discusses the growth of both the desire to…

  16. Observation

    Kripalani, Lakshmi A.


    The adult who is inexperienced in the art of observation may, even with the best intentions, react to a child's behavior in a way that hinders instead of helping the child's development. Kripalani outlines the need for training and practice in observation in order to "understand the needs of the children understand how to remove…

  17. Socially induced placebo analgesia: a comparison of a pre-recorded versus live face-to-face observation.

    Hunter, T; Siess, F; Colloca, L


    Recently, it has been shown that live, face-to-face social observation induces marked placebo analgesia. Despite the phenomenal growth of video sharing platforms, the potential analgesic effects of video-based social observation are largely unknown. This study compared video-based and live social observation induced placebo analgesia and whether there was a similar relationship between analgesic responses and empathy traits for both conditions. Here, we compared placebo analgesia in four groups: social observation through a video (SOV group), social observation in person (SOP group), verbal suggestion alone (VS group) and a natural history group (NH group). The SOV and SOP groups underwent a placebo treatment and painful stimuli following respectively a video-based and live observation of a demonstrator showing analgesic effects when the painful stimuli were paired to a green light but not a red light. The VS group received painful stimuli after they had been verbally instructed to expect less pain after the green light. The NH group received painful stimuli, but was told nothing about the meaning of the lights. Individual pain reports and empathy traits were measured. We found that video-based observation induced substantial placebo analgesic responses that were of similar magnitude to live observation. Notably, the analgesic scores were strongly correlated with empathetic concern in the live observation group but not in the video replay group. These findings add evidence that placebo analgesia can be induced by social observation and that empathy interacts with these effects in a context-dependent manner. © 2013 European Pain Federation - EFIC®

  18. Microelectronics Reliability


    Testing for reliability prediction of devices exhibiting multiple failure mechanisms is addressed; also presented was an integrated accelerating and measuring … (Table 2: T, V, F matrix versus measured FIT).

  19. Assessment of upper-limb capacity, performance, and developmental disregard in children with cerebral palsy: validity and reliability of the revised Video-Observation Aarts and Aarts module: Determine Developmental Disregard (VOAA-DDD-R)

    Houwink, A.; Geerdink, Y.A.; Steenbergen, B.; Geurts, A.C.H.; Aarts, P.B.M.


    AIM: To investigate the validity and reliability of the revised Video-Observation Aarts and Aarts module: Determine Developmental Disregard (VOAA-DDD-R). METHOD: Upper-limb capacity and performance were assessed in children with unilateral spastic cerebral palsy (CP) by measuring overall duration of

  20. Assessment of upper-limb capacity, performance, and developmental disregard in children with cerebral palsy: validity and reliability of the revised Video-Observation Aarts and Aarts module: Determine Developmental Disregard (VOAA-DDD-R)

    Houwink, A.; Geerdink, Y.A.; Steenbergen, B.; Geurts, A.C.H.; Aarts, P.B.M.


    Aim To investigate the validity and reliability of the revised Video-Observation Aarts and Aarts module: Determine Developmental Disregard (VOAA-DDD-R). Method Upper-limb capacity and performance were assessed in children with unilateral spastic cerebral palsy (CP) by measuring overall duration of a

  1. The thermodynamic state of the Arctic atmosphere observed by AIRS: comparisons during the record minimum sea ice extents of 2007 and 2012

    Devasthale, A.; Sedlar, J.; T. Koenigk; E. J. Fetzer


    The record sea ice minimum (SIM) extents observed during the summers of 2007 and 2012 in the Arctic are stark evidence of accelerated sea ice loss during the last decade. Improving our understanding of the Arctic atmosphere and accurate quantification of its characteristics becomes ever more crucial, not least to improve predictions of such extreme events in the future. In this context, the Atmospheric Infrared Sounder (AIRS) instrument onboard NASA's Aqua satellite provides crucial insights ...

  2. The thermodynamic state of the Arctic atmosphere observed by AIRS: comparisons during the record minimum sea-ice extents of 2007 and 2012

    Devasthale, A.; T. Koenigk; Sedlar, J.; E. J. Fetzer


    The record sea-ice minimum (SIM) extents observed during the summers of 2007 and 2012 in the Arctic are stark evidence of accelerated sea ice loss during the last decade. Improving our understanding of the Arctic atmosphere and accurate quantification of its characteristics becomes ever more crucial, not least to improve predictions of such extreme events in the future. In this context, the Atmospheric Infrared Sounder (AIRS) instrument onboard NASA's Aqua satellite provides crucial insights ...

  3. Sunspot numbers based on historic records in the 1610s - early telescopic observations by Simon Marius and others

    Neuhaeuser, Ralph


    Hoyt & Schatten (1998) claim that Simon Marius would have observed the sun from 1617 Jun 7 to 1618 Dec 31 (Gregorian calendar) on all days, except three short gaps in 1618, but would never have detected a sunspot -- based on a quotation from Marius in Wolf (1857), but misinterpreted by Hoyt & Schatten. Marius himself specified in early 1619 that "for one and a half year ... rather few or more often no spots could be detected ... which was never observed before" (Marius 1619). The generic statement by Marius can be interpreted such that the active day fraction was below 0.5 (but not zero) from fall 1617 to spring 1619 and that it was 1 before fall 1617 (since August 1611). Hoyt & Schatten cite Zinner (1952), who referred to Zinner (1942), where observing dates by Marius since 1611 are given, but which were not used by Hoyt & Schatten. We present all relevant texts from Marius where he clearly stated that he observed many spots in different form on and since 1611 Aug 3 (Julian) = Aug 13 (Greg.) (on...

  4. The cross-shore distribution of plankton and particles southwest of Iceland observed with a Video Plankton Recorder

    Gislason, Astthor; Logemann, Kai; Marteinsdottir, Gudrun


    The high resolution distribution of plankton and particles along a transect extending from the coast and across the shelf southwest of Iceland was studied in relation to hydrographic features and chlorophyll a fluorescence in late May 2010-2013 with a Video Plankton Recorder. The different groups of plankton and particles showed distinctive distributional patterns. Decaying organic matter (marine snow) was a very significant component of the system. Calanus finmarchicus stayed generally shallower than egg-carrying Pseudocalanus spp. Diel variability in the depth distribution of C. finmarchicus was not evident. Ctenophores, jellies and fish larvae were most abundant above ~50 m depth. Ctenophores were relatively abundant across the whole transect, while jellies and fish larvae were mainly seen on the landward half of the transect. The data on the distribution of copepods (mainly C. finmarchicus) were combined with the results of a numerical circulation model (CODE), thus obtaining an estimate of fluxes of copepods in the area. The results show that C. finmarchicus may be transported by currents both eastwards and westwards along the south coast, while retention on the bank is also possible. Based on the results of the synthesis of the distributional data and the CODE model, it is hypothesized that the populations off the south coast are at least partly self-sustained in the region.

  5. Optimising the use of observational electronic health record data: Current issues, evolving opportunities, strategies and scope for collaboration.

    Liaw, Siaw-Teng; Powell-Davies, Gawaine; Pearce, Christopher; Britt, Helena; McGlynn, Lisa; Harris, Mark F


    With increasing computerisation in general practice, national primary care networks are mooted as sources of data for health services and population health research and planning. Existing data collection programs - MedicinesInsight, Improvement Foundation, Bettering the Evaluation and Care of Health (BEACH) - vary in purpose, governance, methodologies and tools. General practitioners (GPs) have significant roles as collectors, managers and users of electronic health record (EHR) data. They need to understand the challenges to their clinical and managerial roles and responsibilities. The aim of this article is to examine the primary and secondary use of EHR data, identify challenges, discuss solutions and explore directions. Representatives from existing programs, Medicare Locals, Local Health Districts and research networks held workshops on the scope, challenges and approaches to the quality and use of EHR data. Challenges included data quality, interoperability, fragmented governance, proprietary software, transparency, sustainability, competing ethical and privacy perspectives, and cognitive load on patients and clinicians. Proposed solutions included effective change management; transparent governance and management of intellectual property, data quality, security, ethical access, and privacy; common data models, metadata and tools; and patient/community engagement. Collaboration and common approaches to tools, platforms and governance are needed. Processes and structures must be transparent and acceptable to GPs.

  6. Keeping the Records Straight.

    Clift, Phil; Keynes, Milton


    Guidelines are given regarding keeping and using educational records for exceptional children in Great Britain. Procedures related to anecdotal records, observation inventories, and rating scales are delineated. (CL)

  7. The REporting of Studies Conducted Using Observational Routinely-Collected Health Data (RECORD) Statement: Methods for Arriving at Consensus and Developing Reporting Guidelines.

    Stuart G Nicholls

    Full Text Available Routinely collected health data, collected for administrative and clinical purposes without specific a priori research questions, are increasingly used for observational, comparative effectiveness, health services research, and clinical trials. The rapid evolution and availability of routinely collected data for research has brought to light specific issues not addressed by existing reporting guidelines. The aim of the present project was to determine the priorities of stakeholders in order to guide the development of the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement. Two modified electronic Delphi surveys were sent to stakeholders. The first determined themes deemed important to include in the RECORD statement, and was analyzed using qualitative methods. The second determined quantitative prioritization of the themes based on categorization of manuscript headings. The surveys were followed by a meeting of the RECORD working committee, and re-engagement with stakeholders via an online commentary period. The qualitative survey (76 responses of 123 surveys sent) generated 10 overarching themes and 13 themes derived from existing STROBE categories. Highest-rated overall items for inclusion were: disease/exposure identification algorithms; characteristics of the population included in databases; and characteristics of the data. In the quantitative survey (71 responses of 135 sent), the importance assigned to each of the compiled themes varied depending on the manuscript section to which they were assigned. Following the working committee meeting, online ranking by stakeholders provided feedback and resulted in revision of the final checklist. The RECORD statement incorporated the suggestions provided by a large, diverse group of stakeholders to create a reporting checklist specific to observational research using routinely collected health data. Our findings point to unique aspects of studies conducted

  8. Experienced versus Inexperienced Interexaminer Reliability on Location and Classification of Myofascial Trigger Point Palpation to Diagnose Lateral Epicondylalgia: An Observational Cross-Sectional Study

    Raquel Mora-Relucio


    Full Text Available The purpose was to evaluate the interexaminer reliability of experienced and inexperienced examiners on the location and classification of myofascial trigger points (MTrPs) in two epicondylar muscles and the association between the MTrPs found and the diagnosis of lateral epicondylalgia (LE). Fifty-two pianists (some of whom suffered LE) voluntarily participated in the study. Three physiotherapists (one inexperienced in myofascial pain) examined, located, and marked MTrPs in the extensor carpi radialis brevis (ECRB) and extensor digitorum communis (EDC) muscles. Forearms were photographed and analyzed to establish the degree of agreement on MTrP diagnosis. Data showed 81.73% and 77.88% agreement on MTrP classification and 85.58% and 72.12% on MTrP location between the expert evaluators for the ECRB and EDC, respectively. The agreement on MTrP classification between experienced and inexperienced examiners was 54.81% and 51.92% for the ECRB and 50.00% and 55.77% for the EDC. Agreement on MTrP location was 54.81% and 60.58% for the ECRB and 48.08% and 48.08% for the EDC. A strong association was found between the presence of relevant MTrPs, LE diagnosis, and forearm pain when the examiners were experts. The analysis of location and classification of MTrPs in the epicondylar muscles through physical examination by experienced evaluators is reliable, reproducible, and suitable for diagnosing LE.

  9. Quantifying the Observability of CO2 Flux Uncertainty in Atmospheric CO2 Records Using Products from NASA's Carbon Monitoring Flux Pilot Project

    Ott, Lesley; Pawson, Steven; Collatz, Jim; Watson, Gregg; Menemenlis, Dimitris; Brix, Holger; Rousseaux, Cecile; Bowman, Kevin; Liu, Junjie; Eldering, Annmarie; Gunson, Michael; Kawa, Stephan R.


    NASA's Carbon Monitoring System (CMS) Flux Pilot Project (FPP) was designed to better understand contemporary carbon fluxes by bringing together state-of-the-art models with remote sensing datasets. Here we report on simulations using NASA's Goddard Earth Observing System Model, version 5 (GEOS-5), which was used to evaluate the consistency of two different sets of observationally constrained land and ocean fluxes with atmospheric CO2 records. Despite the strong data constraint, the average difference in annual terrestrial biosphere flux between the two land models (NASA Ames CASA and CASA-GFED) is 1.7 Pg C for 2009-2010. Ocean models (NOBM and ECCO2-Darwin) differ by 35 in their global estimates of carbon flux, with particularly strong disagreement in high latitudes. Based upon combinations of terrestrial and ocean fluxes, GEOS-5 reasonably simulated the seasonal cycle observed at northern hemisphere surface sites and by the Greenhouse gases Observing SATellite (GOSAT), while the model struggled to simulate the seasonal cycle at southern hemisphere surface locations. Though GEOS-5 was able to reasonably reproduce the patterns of XCO2 observed by GOSAT, it struggled to reproduce these aspects of the AIRS observations. Despite large differences between land and ocean flux estimates, the resulting differences in atmospheric mixing ratio were small, typically less than 5 ppmv at the surface and 3 ppmv in the XCO2 column. A statistical analysis based on the variability of observations shows that flux differences of these magnitudes are difficult to distinguish from natural variability, regardless of measurement platform.

  10. Grid reliability

    Saiz, P; Rocha, R; Andreeva, J


    We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO specific, therefore it would be very easy to deploy them for any other VO. Our main goal is to improve the reliability of the GRID. The main idea is to discover the different problems that have happened as soon as possible, and inform those responsible. Since we study the jobs and transfers issued by real users, we see the same problems that users see. As a matter of fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, in case of a job failure, resubmitting the job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...

  11. Using Water Vapor Isotope Observations from above the Greenland Ice Sheet to improve the Interpretation of Ice Core Water Stable Isotope Records

    Steen-Larsen, H. C.; Masson-Delmotte, V.; Risi, C. M.; Yoshimura, K.; Werner, M.; Butzin, M.; Brun, E.; Landais, A.; Bonne, J. L.; Dahl-Jensen, D.


    Water stable isotope data from Greenland ice cores provide key paleoclimatic information. For the purpose of improving the climatic interpretation of ice core records, the isotopic composition (δ18O and δD) of near-surface water vapor at several height levels (up to 13 m), of precipitation, and of snow in the first 0.5 cm surface layer has been monitored during three summers (2010-2012) at NEEM, NW Greenland. We compare the observed water vapor isotopic composition with outputs from three isotope-enabled general circulation models: LMDZiso, isoGSM and ECHAM-wiso. This allows us to benchmark the models and address the effects of model resolution, transport, isotope parameterization, and the representation of significant source region contributions. For all models, we find that the simulated δD values are significantly biased towards too-enriched values, a bias only partly explained by air temperature. The simulated amplitude of d-excess variations is ~50% smaller than observed, and the simulated average summer level is ~10‰ lower than in observations. Using back trajectories, we observe water vapor of Arctic origin to have a high d-excess fingerprint. This fingerprint is not present in the GCMiso simulations, indicating a problem in accurately simulating the Arctic hydrological cycle. The bias in the simulated δD and d-excess of water vapor is similar to the already-documented bias in the simulated δD and d-excess of Greenland ice core records. This suggests that improving the simulation of the water vapor isotopic composition might also improve the simulation of the ice core isotope record. During periods between precipitation events, our data demonstrate parallel changes of δ18O and d-excess in surface snow and near-surface vapor. The changes in δ18O of the vapor are similar to or larger than those of the snow δ18O. It is estimated using the CROCUS snow model that 6 to 20% of the surface snow mass is

  12. Binaural electric-acoustic interactions recorded from the inferior colliculus of Guinea pigs: the effect of masking observed in the central nucleus of the inferior colliculus.

    Noh, Heil; Lee, Dong-Hee


    To investigate the electric-acoustic interactions within the inferior colliculus (IC) of guinea pigs and to observe how central masking appears in invasive neural recordings of the IC. A platinum-iridium wire was inserted into the scala tympani through a cochleostomy, to a depth no greater than 1 mm, for intracochlear stimulation with electric pulse trains. A 5 mm × 100 µm, single-shank, thin-film, penetrating recording probe was inserted perpendicularly to the surface of the IC in the coronal plane, at an angle of 30-40° off the parasagittal plane, to a depth of 2.0-2.5 mm. The peripheral and central masking effects were compared using electric pulse trains to the left ear and acoustic noise to the left ear (ipsilateral) and to the right ear (contralateral). Binaural acoustic stimuli were presented with different time delays and compared with combined electric and acoustic stimuli. The averaged evoked potentials and total spike numbers were measured using thin-film electrodes inserted into the central nucleus of the IC. Ipsilateral noise had more obvious effects on the electric response than did contralateral noise. Contralateral noise slightly decreased the response amplitude to the electric pulse train stimuli. Immediately after the onset of acoustic noise, the response pattern changed transiently with shorter response intervals. The effects of contralateral noise were evident at the beginning of the continuous noise. The total spike number decreased when the binaural stimuli reached the IC almost simultaneously. These results suggest that central masking is quite different from peripheral masking and occurs within the binaural auditory system, and this study showed that the effect of masking could be observed in IC recordings. These effects are more evident and consistent with the psychophysical data from spike number analyses than with the previously reported gross potential data.

  13. Conditional Reliability Coefficients for Test Scores.

    Nicewander, W Alan


    The most widely used, general index of measurement precision for psychological and educational test scores is the reliability coefficient: the ratio of the true variance of a test score to its true-plus-error variance. In item response theory (IRT) models for test scores, the information function is the central, conditional index of measurement precision. In this inquiry, conditional reliability coefficients for a variety of score types are derived as simple transformations of information functions. It is shown, for example, that the conditional reliability coefficient for an ordinary, number-correct score, X, is equal to ρ(X,X'|θ) = I(X,θ)/[I(X,θ)+1], where θ is a latent variable measured by an observed test score X; ρ(X,X'|θ) is the conditional reliability of X at a fixed value of θ; and I(X,θ) is the score information function. This is a surprisingly simple relationship between the two basic indices of measurement precision from IRT and classical test theory (CTT). The relationship holds for item scores as well as test scores based on sums of item scores, and it holds for dichotomous as well as polytomous items, or a mix of both item types. Also, conditional reliabilities are derived for computerized adaptive test scores, and for θ-estimates used as alternatives to number-correct scores. These conditional reliabilities are all related to information in a manner similar or identical to the one given above for the number-correct (NC) score. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
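
    As a rough illustration of the relation ρ(X,X'|θ) = I(X,θ)/[I(X,θ)+1], the sketch below evaluates it under a Rasch (1PL) model, for which the number-correct score is a sufficient statistic and its information equals the test information Σ P_i(θ)Q_i(θ). The item difficulties are invented; the snippet is not drawn from the article.

```python
# Hypothetical sketch: conditional reliability of a number-correct score from the
# score information function, rho(X,X'|theta) = I(X,theta) / (I(X,theta) + 1).
# Assumes a Rasch (1PL) model; item difficulties below are made up.
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def score_information(theta, difficulties):
    """Number-correct score information under the Rasch model: sum of P*Q."""
    return sum(p * (1 - p) for p in (rasch_prob(theta, b) for b in difficulties))

def conditional_reliability(theta, difficulties):
    info = score_information(theta, difficulties)
    return info / (info + 1.0)

difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5] * 8   # a 40-item test, difficulties assumed
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(conditional_reliability(theta, difficulties), 3))
```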

  14. Is quantitative electromyography reliable?

    Cecere, F; Ruf, S; Pancherz, H


    The reliability of quantitative electromyography (EMG) of the masticatory muscles was investigated in 14 subjects without any signs or symptoms of temporomandibular disorders. Integrated EMG activity from the anterior temporalis and masseter muscles was recorded bilaterally by means of bipolar surface electrodes during chewing and biting activities. In the first experiment, the influence of electrode relocation was investigated; no influence of electrode relocation on the recorded EMG signal could be detected. In a second experiment, three sessions of EMG recordings during five different chewing and biting activities were performed: in the morning (I); 1 hour later without intermediate removal of the electrodes (II); and in the afternoon, using new electrodes (III). The method errors for the different time intervals (I-II and I-III errors) were calculated for each muscle and each function. Depending on the time interval between the EMG recordings, the muscles considered, and the function performed, the individual errors ranged from 5% to 63%. The method error increased significantly with the length of the time interval between recordings, and the error for the masseter (mean 27.2%) was higher than for the temporalis (mean 20.0%). The largest function error was found during maximal biting in intercuspal position (mean 23.1%). Based on these findings, quantitative electromyography of the masticatory muscles seems to have limited value in diagnostics and in the evaluation of individual treatment results.
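
    The abstract does not state which formula was used for the method error; one common choice for duplicate recordings is Dahlberg's formula, sketched below with made-up integrated EMG values and expressed as a percentage of the mean level.

```python
# Hypothetical sketch: a method error for duplicate EMG recordings using Dahlberg's
# formula, ME = sqrt(sum(d_i^2) / (2n)), expressed as a percentage of the mean level.
# Whether the study used exactly this formulation is not stated; the data are made up.
import math

def dahlberg_method_error(first, second):
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

def relative_method_error(first, second):
    me = dahlberg_method_error(first, second)
    mean_level = sum(first + second) / (len(first) + len(second))
    return 100.0 * me / mean_level

session_1 = [112.0, 98.5, 120.3, 105.7, 99.2]   # integrated EMG, arbitrary units
session_2 = [118.4, 91.0, 131.6, 101.2, 108.9]
print(round(relative_method_error(session_1, session_2), 1), "%")
```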

  15. Testing different scenarios of emissions from global fossil fuel production using a multi-decadal record of simulated and observed ethane data

    Chung, L. B.; Butenhoff, C. L.; Rice, A. L.; Kahlil, A.


    Ethane is emitted with methane and other trace gases in the production of oil and natural gas during drilling, venting, and flaring, and from infrastructure leaks during storage and distribution. Fugitive emissions from oil and gas production are one of the largest anthropogenic sources of ethane and methane and contribute significantly to global trends in their atmospheric burdens. The climate advantage of replacing coal with natural gas in energy portfolios depends critically on methane leakage rates. Because gas fluxes vary widely across production fields and distribution networks, efforts to estimate national and global rates of emissions using bottom-up accounting methods face significant challenges. Recent studies of firn and surface observations show a marked decline in global atmospheric ethane in the 1980s and 1990s, which has been interpreted as a decline in fossil fuel ethane emissions. However, this conclusion is contradicted by some bottom-up emissions inventories and by global inversions of isotopic methane, which find that fossil fuel emissions were flat or increased during this time while biomass burning emissions declined. To investigate the temporal record of ethane emissions further, we use four decades (1982-2015) of surface air observations of ethane from three sampling networks to test competing ethane emissions scenarios evaluated with the three-dimensional atmospheric chemical-transport model GEOS-Chem. Because ethane's main sources (fossil fuel production, biomass and biofuel burning) have different spatial footprints, we hypothesize that temporal trends for each source, if they exist, will leave unique signatures in the latitudinal distribution of ethane over time. We use GEOS-Chem to predict these source-specific latitudinal trend signatures and assess the ability of the observational ethane record to eliminate different emissions scenarios, providing insight into the recent history of fugitive emissions from fossil fuel production.

  16. The thermodynamic state of the Arctic atmosphere observed by AIRS: comparisons during the record minimum sea-ice extents of 2007 and 2012

    A. Devasthale


    Full Text Available The record sea-ice minimum (SIM) extents observed during the summers of 2007 and 2012 in the Arctic are stark evidence of accelerated sea ice loss during the last decade. Improving our understanding of the Arctic atmosphere and accurate quantification of its characteristics becomes ever more crucial, not least to improve predictions of such extreme events in the future. In this context, the Atmospheric Infrared Sounder (AIRS) instrument onboard NASA's Aqua satellite provides crucial insights due to its ability to provide 3-D information on atmospheric thermodynamics.

    Here, we facilitate comparisons of the evolution of the thermodynamic state of the Arctic atmosphere during these two SIM events using a decade-long AIRS observational record (2003–2012). It is shown that the meteorological conditions during 2012 were not extreme, but three factors in preconditioning from winter through early summer probably played an important role in accelerating sea-ice melt. First, the marginal sea-ice zones along the central Eurasian and North Atlantic sectors remained warm throughout winter and early spring in 2012, preventing thicker ice build-up. Second, the circulation pattern favoured efficient sea-ice transport out of the Arctic in the Atlantic sector during late spring and early summer in 2012 compared to 2007. Third, additional warming over the Canadian Archipelago and southeast Beaufort Sea from May onward further contributed to accelerated sea-ice melt. All these factors may have led the already thin and declining sea-ice cover to pass below the previous sea-ice extent minimum of 2007. In sharp contrast to 2007, negative surface temperature anomalies and increased cloudiness were observed over the East Siberian and Chukchi Seas in the summer of 2012. The results suggest that satellite-based monitoring of atmospheric preconditioning could be a critical source of information in predicting extreme sea-ice melting events in the Arctic.

  17. The thermodynamic state of the Arctic atmosphere observed by AIRS: comparisons during the record minimum sea ice extents of 2007 and 2012

    A. Devasthale


    Full Text Available The record sea ice minimum (SIM) extents observed during the summers of 2007 and 2012 in the Arctic are stark evidence of accelerated sea ice loss during the last decade. Improving our understanding of the Arctic atmosphere and accurate quantification of its characteristics becomes ever more crucial, not least to improve predictions of such extreme events in the future. In this context, the Atmospheric Infrared Sounder (AIRS) instrument onboard NASA's Aqua satellite provides crucial insights due to its ability to provide 3-D information on atmospheric thermodynamics. Here, we facilitate comparisons of the evolution of the thermodynamic state of the Arctic atmosphere during these two SIM events using a decade-long AIRS observational record (2003–2012). It is shown that the meteorological conditions during 2012 were not extreme, but three factors of preconditioning from winter through early summer played an important role in accelerating sea ice melt. First, the marginal sea ice zones along the central Eurasian and North Atlantic sectors remained warm throughout winter and early spring in 2012, preventing thicker ice build-up. Second, the circulation pattern favoured efficient sea ice transport out of the Arctic in the Atlantic sector during late spring and early summer in 2012 compared to 2007. Third, additional warming over the Canadian archipelago and southeast Beaufort Sea from May onward further contributed to accelerated sea ice melt. All these factors may have led the already thin and declining sea ice cover to pass below the previous sea ice extent minimum of 2007. In sharp contrast to 2007, negative surface temperature anomalies and increased cloudiness were observed over the East Siberian and Chukchi seas in the summer of 2012. The results suggest that satellite-based monitoring of atmospheric preconditioning could be a critical source of information in predicting extreme sea ice melting events in the Arctic.

  18. Earliest recorded ground-based decameter wavelength observations of Saturn's lightning during the giant E-storm detected by Cassini spacecraft in early 2006

    Konovalenko, A. A.; Kalinichenko, N. N.; Rucker, H. O.; Lecacheux, A.; Fischer, G.; Zarka, P.; Zakharenko, V. V.; Mylostna, K. Y.; Grießmeier, J.-M.; Abranin, E. P.; Falkovich, I. S.; Sidorchuk, K. M.; Kurth, W. S.; Kaiser, M. L.; Gurnett, D. A.


    We report the history of the first recorded ground-based radio detection of Saturn's lightning, using the Ukrainian UTR-2 radio telescope at frequencies from 20 to 25 MHz. The observations were performed between 29 January and 3 February 2006, during which lightning activity (an E-storm) on Saturn was detected by the radio experiment onboard the Cassini spacecraft. The minimum detectable flux density (1σ level) at UTR-2 reached 40 Jy (1 Jy = 10⁻²⁶ W m⁻² Hz⁻¹) for narrowband observations (Δf = 10 kHz) and 4 Jy for broadband observations (Δf = 1 MHz), for an effective telescope area of ≈100,000 m² and an integration time of 20 ms. Selection criteria, including comparison of simultaneous ON/OFF-source observations, were applied to distinguish detections of lightning-associated radio pulses from interference. This allowed us to identify about 70 events with a signal-to-noise ratio greater than 5. Measured flux densities (between 50 and 700 Jy) and burst durations (between 60 and 220 ms) are in good agreement with extrapolation of previous Cassini measurements to a ground-based observer. This first detection demonstrates the possibility of Solar System planetary lightning studies using large present and future ground-based radio instruments. The developed observation methods and identification criteria are also being implemented on the UTR-2 radio telescope for the investigation of subsequent Saturn storms. Together with recently published UTR-2 measurements of activity after the 2006 storm reported here, the results have significant implications for detectable planetary radio emission in our Solar System and beyond.

  19. Source locations of teleseismic P, SV, and SH waves observed in microseisms recorded by a large aperture seismic array in China

    Liu, Qiaoxia; Koper, Keith D.; Burlacu, Relu; Ni, Sidao; Wang, Fuyun; Zou, Changqiao; Wei, Yunhao; Gal, Martin; Reading, Anya M.


    Transversely polarized seismic waves are routinely observed in ambient seismic energy across a wide range of periods; however, their origin is poorly understood because the corresponding source regions are either undefined or weakly constrained, and nearly all models of microseism generation incorporate a vertically oriented single force as the excitation mechanism. To better understand the origin of transversely polarized energy in the ambient seismic wavefield, we make the first systematic attempt to locate the source regions of teleseismic SH waves observed in microseismic (2.5-20 s) noise. We focus on body waves instead of surface waves because the source regions can be constrained in both azimuth and distance using conventional array techniques. To locate microseismic sources of SH waves (as well as SV and P waves), we continuously backproject the vertical, radial, and transverse components of the ambient seismic wavefield recorded by a large-aperture array deployed in China during 2013-2014. As expected, persistent P wave sources are observed in the North Atlantic, North Pacific, and Indian Oceans, mainly at periods of 2.5-10 s, in regions with the strong ocean wave interactions needed to produce secondary microseisms. SV waves are commonly observed to originate from locations indistinguishable from the P wave sources, but with smaller signal-to-noise ratios. We also observe SH waves, with about half or less of the signal-to-noise ratio of SV waves. SH source regions are definitively located in deep-water portions of the Pacific, away from the sloping continental shelves that are thought to be important for the generation of microseismic Love waves, but near regions that routinely generate teleseismic P waves. The excitation mechanism for the observed SH waves may therefore be related to the interaction of P waves with small-wavelength bathymetric features, such as seamounts and basins, through some sort of scattering process.

  20. Insight into the function of the obturator internus muscle in humans: observations with development and validation of an electromyography recording technique.

    Hodges, Paul W; McLean, Linda; Hodder, Joanne


    There are no direct recordings of obturator internus muscle activity in humans because of difficult access for electromyography (EMG) electrodes. Functions attributed to this muscle are based on speculation and include hip external rotation/abduction, and a role in stabilization as an "adjustable ligament" of the hip. Here we present (1) a technique to insert intramuscular EMG electrodes into obturator internus, plus (2) the results of an investigation of obturator internus activity relative to that of nearby hip muscles during voluntary hip efforts in two hip positions and a weight-bearing task. Fine-wire electrodes were inserted with ultrasound guidance into obturator internus, gluteus maximus, piriformis and quadratus femoris in ten participants. Participants performed ramped and maximal isometric hip efforts (open kinetic chain) into flexion/extension, abduction/adduction, and internal/external rotation, and hip rotation to end range in standing. Analysis of the relationship between activity of the obturator internus and the other hip muscles provided evidence of limited contamination of the recordings with crosstalk. Obturator internus EMG amplitude was greatest during hip extension, then external rotation, then abduction, with minimal to no activation in other directions. Obturator internus EMG was more commonly the first muscle active during abduction and external rotation than the other muscles. This study describes a viable and valid technique to record obturator internus EMG and provides the first evidence of its activation during simple functions. The observed specificity of activation to certain force directions questions the hypothesis of a general role in hip stabilisation regardless of force direction.

  1. Population specific and up to date cardiovascular risk charts can be efficiently obtained with record linkage of routine and observational data.

    Faeh, David; Braun, Julia; Rufibach, Kaspar; Puhan, Milo A; Marques-Vidal, Pedro; Bopp, Matthias


    Only a few countries have cohorts enabling specific and up-to-date cardiovascular disease (CVD) risk estimation. Individual risk assessment based on study samples that differ too much from the target population could jeopardize the benefit of risk charts in general practice. Our aim was to provide up-to-date and valid CVD risk estimation for a Swiss population using a novel record linkage approach. Anonymous record linkage was used to follow up (for mortality, until 2008) 9,853 men and women aged 25-74 years who participated in the Swiss MONICA (MONItoring of trends and determinants in CVD) study of 1983-92. The linkage success rate was 97.8%; loss to follow-up for 1990-2000 was 4.7%. Based on the ESC SCORE methodology (Weibull regression), we used age, sex, blood pressure, smoking, and cholesterol to generate three models. We compared 1) the original SCORE model with 2) a recalibrated and 3) a new model using the Brier score (BS) and cross-validation. Based on the cross-validated BS, the new model (BS = 14107×10⁻⁶) was somewhat more appropriate for risk estimation than the original (BS = 14190×10⁻⁶) and the recalibrated (BS = 14172×10⁻⁶) model. Particularly at younger ages, derived absolute risks were consistently lower than those from the original and the recalibrated model, which was mainly due to a smaller impact of total cholesterol. Using record linkage of observational and routine data is an efficient procedure to obtain valid and up-to-date CVD risk estimates for a specific population.
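
    For orientation, the Brier score used to compare the models is simply the mean squared difference between predicted risks and observed binary outcomes. The sketch below is illustrative only; the predictions and outcomes are invented and none of the study's models are reproduced.

```python
# Hypothetical sketch: comparing risk models by Brier score, BS = mean((p_i - y_i)^2),
# where p_i is a predicted CVD death risk and y_i the observed outcome (0/1).
# Predictions and outcomes are made up for illustration.

def brier_score(predicted, observed):
    return sum((p - y) ** 2 for p, y in zip(predicted, observed)) / len(observed)

outcomes   = [0, 0, 1, 0, 1, 0, 0, 0]
model_orig = [0.02, 0.05, 0.20, 0.01, 0.35, 0.08, 0.03, 0.04]
model_new  = [0.01, 0.04, 0.30, 0.01, 0.40, 0.06, 0.02, 0.03]

for name, preds in (("original", model_orig), ("new", model_new)):
    print(name, round(brier_score(preds, outcomes), 4))   # lower is better
```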

  2. Frontiers of reliability

    Basu, Asit P; Basu, Sujit K


    This volume presents recent results in reliability theory by leading experts from around the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  3. Evaluation of Inter-annual Variability and Trends of Cloud Liquid Water Path in Climate Models Using A Multi-decadal Record of Passive Microwave Observations

    Manaster, Andrew

    Long term satellite records of cloud changes have only been available for the past several decades and have just recently been used to diagnose cloud-climate feedbacks. However, due to issues with satellite drift, calibration, and other artifacts, the validity of these cloud changes has been called into question. It is therefore pertinent that we look for other observational datasets that can help to diagnose changes in variables relevant to cloud-radiation feedbacks. One such dataset is the Multisensor Advanced Climatology of Liquid Water Path (MAC-LWP), which blends cloud liquid water path (LWP) observations from 12 different passive microwave sensors over the past 27 years. In this study, observed LWP trends from the MAC-LWP dataset are compared to LWP trends from 16 models in the Coupled Model Intercomparison Project 5 (CMIP5) in order to assess how well the models capture these trends and thus related radiative forcing variables (e.g., cloud radiative forcing). (Abstract shortened by ProQuest.).

  4. A new perspective on the Fukushima releases brought by newly available air concentration observations (Tsuruta et al, 2014) and reliable meteorological fields

    Saunier, Olivier; Mathieu, Anne; Sekiyama, Thomas; Kajino, Mizuo; Adachi, Kouji; Bocquet, Marc; Igarashi, Yasuhito; Didier, Damien


    In the case of a nuclear power plant accident, an assessment of the temporal evolution of the amount of radionuclides released (the source term) is required to evaluate the impacts on human health and the environment. It is with this in mind that IRSN has developed an operational tool based on inverse modeling techniques to evaluate the source term of a radioactive release. If the release is sufficiently large, as for the Fukushima accident, dose rate observations are primarily used to assess the source term (Saunier et al. 2013), and air concentration measurements can also be used when available. For minor release events, air concentration measurements are used. Five years after the Fukushima accident, many estimations of the source term based on the use of observations in the environment have been published. There is not yet consensus on the magnitude of the release rates, mainly due to the high uncertainties in the meteorological fields used to assess the source term. Within the framework of a cooperation between IRSN and the Meteorological Research Institute (MRI) of the Japan Meteorological Agency (JMA), meteorological fields with higher spatial resolution (3 km) have been used (Sekiyama et al. 2013) to improve the simulation of the atmospheric dispersion from the Fukushima accident. In addition, a new dataset of 137Cs atmospheric concentrations obtained from the sampling tapes of the Suspended Particle Matter (SPM) monitoring network by the method of Tsuruta et al. (2014) is available. These data are very useful since several plumes, unknown until now, could be identified in addition to the two major plumes of March 15 and March 21. The inverse modeling method has therefore been applied to assess a new source term using the Tsuruta air concentration measurements, dose rate measurements and the meteorological fields provided by MRI. The simulations performed using this new inverted source term help enhance our knowledge of the Fukushima accident. Several release events are better

  5. Delta-Reliability

    Eugster, P.; Guerraoui, R.; Kouznetsov, P.


    This paper presents a new, non-binary measure of the reliability of broadcast algorithms, called Delta-Reliability. This measure quantifies the reliability of practical broadcast algorithms that, on the one hand, were devised with some form of reliability in mind, but, on the other hand, are not considered reliable according to the "traditional" notion of broadcast reliability [HT94]. Our specification of Delta-Reliability suggests a further step towards bridging the gap between theory and...

  6. Reliability computation from reliability block diagrams

    Chelson, P. O.; Eckstein, E. Y.


    A computer program computes system reliability for a very general class of reliability block diagrams. Four factors are considered in calculating the probability of system success: active block redundancy, standby block redundancy, partial redundancy, and the presence of equivalent blocks in the diagram.
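
    A minimal sketch of two of the four factors named above (active redundancy and partial redundancy), assuming independent blocks; the block reliabilities and the example diagram are invented, and the original program's algorithm is not described in the abstract.

```python
# Hypothetical sketch: combining block reliabilities for simple block-diagram structures.
from math import comb

def series(reliabilities):
    """All blocks must work."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(reliabilities):
    """Active redundancy: at least one block must work."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q

def k_of_n(k, n, r):
    """Partial redundancy: at least k of n identical blocks must work."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Example: two redundant pumps (0.95 each) in series with a 2-of-3 sensor set (0.9 each)
system = series([parallel([0.95, 0.95]), k_of_n(2, 3, 0.9)])
print(round(system, 4))
```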

  7. Jens Esmark's Christiania (Oslo) meteorological observations 1816-1838: the first long-term continuous temperature record from the Norwegian capital homogenized and analysed

    Hestmark, Geir; Nordli, Øyvind


    In 2010 we rediscovered the complete set of meteorological observation protocols made by Jens Esmark (1762-1839) during his years of residence in the Norwegian capital of Oslo (then Christiania). From 1 January 1816 to 25 January 1839, Esmark recorded air temperature with state-of-the-art thermometers at his house in Øvre Voldgate in the morning, early afternoon and late evening. He also noted air pressure, cloud cover, precipitation and wind directions, and experimented with rain gauges and hygrometers. From 1818 to the end of 1838 he provided weather tables twice a month to the official newspaper Den Norske Rigstidende, and thus acquired a semi-official status as the first Norwegian state meteorologist. This paper evaluates the quality of Esmark's temperature observations and presents new metadata, a new homogenization and an analysis of monthly means. Three significant shifts in the measurement series were detected, and suitable corrections are proposed. The air temperature in Oslo during this period is shown to exhibit a slow rise from 1816 towards 1825, followed by a slight fall towards 1838.

  8. Relationships between columnar aerosol optical properties and surface particulate matter observations in north-central Spain from long-term records (2003–2011)

    Y. S. Bennouna


    Full Text Available This work examines the relationships between Aerosol Optical Depth (AOD) and Particulate Matter (PMX) parameters, based on long records (2003–2011) from two nearby sites of the AERONET and EMEP networks in the north-central area of Spain. The climatological annual cycles of PM10 and PM2.5 present a bimodality which might be partly due to desert dust intrusions, a pattern which does not appear in the annual cycle of the AOD. In the case of the AOD, this bimodality is likely masked by the poorer sampling of sunphotometer data compared with PMX (67% of days against 90%), a fact that stresses the necessity of long-term observations. In the monthly series, significant interannual variations are observed and most extrema coincide; however, the bimodal shape remains relatively stable for PMX. Significant and consistent trends were found for both datasets, likely associated with a decrease in the desert dust apportionment until 2009. PM10 and AOD daily data are moderately correlated (0.56), a correlation that improves for monthly means (0.70). In the case of strong desert dust events the day-to-day correlation is not systematic, and therefore an extensive analysis of PMX, the fine-PM ratio, AOD and the associated Ångström exponent (α) is carried out.

  9. Distribution Equipment Reliability Data; Tillfoerlitlighetsdata paa komponent nivaa foer eldistributionsnaet

    Ying He (Vattenfall Research and Development AB, Stockholm (SE))


    In the risk analysis of a power system, the risk that the system fails to supply power is calculated from knowledge of the reliability data of individual system components. Meaningful risk analysis requires reasonable and acceptable data; the quality of the data is of fundamental importance for the analysis. However, valid data are expensive to collect, and component reliability performance statistics are not easy to obtain. This report documents the distribution equipment reliability data developed by the project 'Component Reliability Data for Risk Analysis of Distribution Systems' within the Elforsk R&D program 'Risk Analysis 06-10'. The project analyzed a large sample of distribution outages recorded by more than a hundred power utilities in Sweden during 2004-2005, and derived equipment reliability data nationwide. Detailed summaries of these data are presented in the appendices of the report. Component reliability was also investigated at a number of power utilities, including Vattenfall Eldistribution AB, Goeteborg Energi Naet AB, E.ON Elnaet Sverige AB, Fortum Distribution, and Linde Energi AB, and reliability data were derived for the individual utilities. The detailed data lists and failure statistics are summarized in the appendices for each participating company. The data provided in this report are based on a large sample of field outage records and can therefore be used as generic data in system risk analysis and reliability studies. In order to provide more references and complementary data, the equipment reliability surveys conducted by IEEE were studied in the project. The most significant results obtained by the IEEE surveys are provided in the report, and a summary of the reliability data surveyed by IEEE is presented in the appendix of the report. These data are suggested for use in the absence of better data. The reliability data estimates were derived for sustained failure rates
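
    Generic failure rates of this kind are usually estimated as failures per component-year of exposure. The sketch below shows that arithmetic with invented numbers; it is not taken from the report's data.

```python
# Hypothetical sketch: a sustained failure rate from pooled outage records,
# lambda = number of failures / total component exposure (component-years),
# plus a simple expected annual outage time from an average repair duration.
# All numbers below are illustrative.

def failure_rate_per_year(n_failures, n_components, years_observed):
    return n_failures / (n_components * years_observed)

def annual_unavailability_hours(rate_per_year, mean_repair_hours):
    return rate_per_year * mean_repair_hours

lam = failure_rate_per_year(n_failures=310, n_components=12_000, years_observed=2)
print(round(lam, 4), "failures per component-year")
print(round(annual_unavailability_hours(lam, mean_repair_hours=6.5), 3), "h/year expected outage time")
```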

  10. Vessel Activity Record

    National Oceanic and Atmospheric Administration, Department of Commerce — The Vessel Activity Record is a bi-weekly spreadsheet that shows the status of fishing vessels. It records whether fishing vessels are fishing without an observer...

  11. VLSI Reliability in Europe

    Verweij, Jan F.


    Several issues regarding VLSI reliability research in Europe are discussed. Organizations involved in stimulating reliability activities by exchanging information or supporting research programs are described. Within one such program, ESPRIT, a technical interest group on IC reliability was

  12. Reliability of Circumplex Axes

    Micha Strack


    Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples), grouped by self-report, other-report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%, respectively. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificity but otherwise worked effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  13. New World Pouzolzia and Boehmeria (Urticaceae): a new species and new generic record for Paraguay, Pouzolzia amambaiensis, and additional observations on already described species of both genera

    Wilmot-Dear, Christine Melanie; Friis, Ib


    The paper supplements a revision of the New World species of Boehmeria and Pouzolzia published by the authors in 1996. Pouzolzia amambaiensis sp. nov. is described from recent material from Paraguay near the border with Brazil and represents a new generic record for Paraguay. Also recorded...

  14. Reliability Generalization: "Lapsus Linguae"

    Smith, Julie M.


    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  15. Records Management

    U.S. Environmental Protection Agency — All Federal Agencies are required to prescribe an appropriate records maintenance program so that complete records are filed or otherwise preserved, records can be...

  16. Lower incidence of recorded cardiovascular outcomes in patients with type 2 diabetes using insulin aspart vs. those on human regular insulin: observational evidence from general practices.

    Rathmann, W; Kostev, K


    Insulin aspart lowers postprandial glucose more effectively than regular human insulin, which may have favourable cardiovascular effects. The aim was to collect and compare the incidence of recorded macro- and microvascular events in patients with type 2 diabetes treated with insulin aspart or regular human insulin in general practices. Computerized data from 3154 aspart and 3154 regular insulin users throughout Germany (Disease Analyzer, January 2000 to July 2011) were analysed after matching for age (60 ± 10 years), sex (men: 57%), health insurance (private: 5.8%) and diabetes treatment period in the practice (2.2 ± 2.5 years). Hazard ratios (HR; Cox regression) for macro- or microvascular outcomes (follow-up: 3.5 years) were further adjusted for diabetologist care, practice region, hypertension, hyperlipidaemia, co-medication (basal insulin, oral antidiabetics, antihypertensives, lipid-lowering agents and antithrombotic drugs), previous treatment with rapid-acting insulins, hypoglycaemia and the Charlson co-morbidity score. Furthermore, adjustment was carried out for baseline microvascular complications when analysing macrovascular outcomes and vice versa. Overall, the risk of combined macrovascular outcomes was 15% lower for insulin aspart users (p = 0.01). For insulin aspart there was also a decreased risk of incident stroke [HR: 0.58; 95% confidence interval (CI): 0.45-0.74], myocardial infarction (HR: 0.69; 95% CI: 0.54-0.88) and peripheral vascular disease (HR: 0.80; 95% CI: 0.69-0.93). For microvascular complications (retinopathy, neuropathy and nephropathy), no significant differences were observed (HR: 0.96; 95% CI: 0.87-1.06). Use of the rapid-acting insulin analogue aspart was associated with a reduced incidence of macrovascular outcomes in type 2 diabetes in general practices. It is important to confirm this finding in a randomized controlled trial. © 2012 Blackwell Publishing Ltd.

  17. Biomarker records in penguin droppings and observed changes in penguin communities and their response to the ENSO in the Western Antarctic

    ZHANG HaiSheng; LU DouDing; YU PeiSong; ZHANG WeiGuo; LU Bing; Hans-Ulrich PETER; Walter VETTER


    Lipid biomarkers in the AD2 penguin droppings-amended soil core from Ardley Island, Western Antarctic, were dated using 210Pb. Changes in the fatty acid ratio nC18:2/nC18:0 from the penguin droppings reflect climate changes coincident with ENSO events during 1931-2006. The occurrence of minimum values at depths of 2-3 and 6-7 cm is consistent with the end of the ENSO events of 1958 and 1983, respectively, reflecting a lag in the climatic signatures of the biomarker records in the AD2 penguin droppings-amended soil. This study also reveals that changes in the relative concentration of the n-alkane nC23, the ratios nC23/nC17 and ΣnC21-/ΣnC22+, and carbon preferential index (CPI) values collectively indicate variations in soil microorganisms and lower plants, which are closely related to climate changes. The ratio of the bacterial fatty acids iC15:0/aC15:0 reflects the increasing significance of microorganism activities during the two periods at the ends of ENSO events. The decrease in the CPIA value and increase in ΣnC21-/ΣnC22+ indicate that low molecular weight fatty acids are derived from microorganisms, and their insignificant correlation with Pr/Ph suggests that microorganisms play an important role in the relatively simple Antarctic ecosystem and are closely linked to climatic conditions. In addition, the observed penguin community indicates that the penguin population can largely reflect the impacts of global climate change on the ecosystem.

  18. Policy implications of using a household consumption and expenditures survey versus an observed-weighed food record survey to design a food fortification program.

    Lividini, Keith; Fiedler, John L; Bermudez, Odilia I


    Observed-Weighed Food Record Surveys (OWFR) are regarded as the most precise dietary assessment methodology, despite their recognized shortcomings, which include limited availability, high cost, small samples with uncertain external validity that rarely include all household members, Hawthorne effects, and the use of only 1 or 2 days to identify "usual intake." Although Household Consumption and Expenditures Surveys (HCES) also have significant limitations, they are increasingly being used to inform nutrition policy. To investigate differences in fortification simulations based on OWFR and HCES from Bangladesh, the pre- and postfortification nutrient intake levels from the two surveys were compared. The total population-based rank orderings of oil, wheat flour, and sugar coverage were identical for the two surveys. OWFR found differences in women's and children's coverage rates and average quantities consumed for all three foods that were not detected by HCES. Guided by the Food Fortification Formulator, we found that these differences did not result in differences in recommended fortification levels. Differences were found, however, in estimated impacts: although both surveys found that oil fortification would be effective in reducing the prevalence of inadequate vitamin A intake in both subpopulations, only OWFR also found that sugar and wheat flour fortification would significantly reduce inadequate vitamin A intake among children. Despite the less precise measure of food consumption from HCES, the two surveys provide similar guidance for designing a fortification program. The external validity of these findings is limited. With relatively minor modifications, the precision of HCES in dietary assessment and the use of HCES in fortification programming could be strengthened.

  19. Large surface meltwater discharge from the Kangerlussuaq sector of the Greenland ice sheet during the record-warm year 2010 explained by detailed energy balance observations

    D. van As


    Full Text Available This study uses data from six on-ice weather stations, calibrated MODIS-derived albedo and proglacial river gauging measurements to drive and validate an energy balance model. We aim to quantify the record-setting positive temperature anomaly in 2010 and its effect on mass balance and runoff from the Kangerlussuaq sector of the Greenland ice sheet. In 2010, the average temperature was 4.9 °C (2.7 standard deviations) above the 1974–2010 average in Kangerlussuaq. High temperatures were also observed over the ice sheet, with the magnitude of the positive anomaly increasing with altitude, particularly in August. Simultaneously, surface albedo was anomalously low in 2010, predominantly in the upper ablation zone. The low albedo was caused by high ablation, which in turn benefited from high temperatures and low winter snowfall. Surface energy balance calculations show that the largest melt excess (∼170%) occurred in the upper ablation zone (above 1000 m), where higher temperatures and lower albedo contributed equally to the melt anomaly. At lower elevations the melt excess can be attributed to high atmospheric temperatures alone. In total, we calculate that 6.6 ± 1.0 km3 of surface meltwater ran off the ice sheet in the Kangerlussuaq catchment in 2010, exceeding the reference year 2009 (based on atmospheric temperature measurements) by ∼150%. During future warm episodes we can expect a melt response of at least the same magnitude, unless a larger wintertime snow accumulation delays and moderates the melt-albedo feedback. Due to the hypsometry of the ice sheet, which yields an increasing surface area with elevation, meltwater runoff will be further amplified by increases in melt forcings such as atmospheric heat.

  20. Maximum phonation time: variability and reliability.

    Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W


    The objective of the study was to determine the reliability of the maximum phonation time as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times than healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
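
    The reported gains from averaging more trials or days are consistent with the Spearman-Brown prophecy formula, ρ_k = kρ/(1 + (k-1)ρ); the abstract does not state that this formula was used, so the check below is purely illustrative.

```python
# Hypothetical sketch: Spearman-Brown prophecy applied to the reliabilities reported above.
# Shown for illustration only; the study's own computation is not described in the abstract.

def spearman_brown(rho_single, k):
    return k * rho_single / (1.0 + (k - 1) * rho_single)

print(round(spearman_brown(0.939, 5), 3))   # one day, five trials   -> ~0.987 (reported 0.987)
print(round(spearman_brown(0.836, 2), 3))   # five trials, two days  -> ~0.911 (reported 0.911)
print(round(spearman_brown(0.836, 3), 3))   # five trials, three days -> ~0.939 (reported 0.935)
```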

  1. Assuring reliability program effectiveness.

    Ball, L. W.


    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  2. The Accelerator Reliability Forum

    Lüdeke, Andreas; Giachino, R


    A high reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.

  3. An investigation of the reliability of Rapid Upper Limb Assessment (RULA) as a method of assessment of children's computing posture.

    Dockrell, Sara; O'Grady, Eleanor; Bennett, Kathleen; Mullarkey, Clare; Mc Connell, Rachel; Ruddy, Rachel; Twomey, Seamus; Flannery, Colleen


    Rapid Upper Limb Assessment (RULA) is a quick observation method of posture analysis. RULA has been used to assess children's computer-related posture, but the reliability of RULA on a paediatric population has not been established. The purpose of this study was to investigate the inter-rater and intra-rater reliability of the use of RULA with children. Video recordings of 24 school children were independently viewed by six trained raters who assessed their postures using RULA, on two separate occasions. RULA demonstrated higher intra-rater reliability than inter-rater reliability although both were moderate to good. RULA was more reliable when used for assessing the older children (8-12 years) than with the younger children (4-7 years). RULA may prove useful as part of an ergonomic assessment, but its level of reliability warrants caution for its sole use when assessing children, and in particular, younger children.

  4. Embedded-structure template for electronic records affects patient note quality and management for emergency head injury patients: An observational pre and post comparison quality improvement study.

    Sonoo, Tomohiro; Iwai, Satoshi; Inokuchi, Ryota; Gunshin, Masataka; Kitsuta, Yoichi; Nakajima, Susumu


    Along with article-based checklists, structured template recording systems have been reported to produce more accurate clinical records, but their contribution to improving the quality of patient care has been controversial. An emergency department (ED) must manage many patients in a short time; therefore, such a template might be especially useful, but few ED-based studies have examined such systems. A structured template produced according to widely used head injury guidelines was used by ED residents for head injury patients. The study compared the 6-month periods before and after launching the system. The quality of the patient notes and the factors recorded in the patient notes to support head computed tomography (CT) decisions were evaluated by medical students blinded to patient information. There were 188 and 177 subject patients in the respective periods. The number of patient notes categorized as "CT indication cannot be determined" was significantly lower in the postintervention term (18% → 9.0%), which represents an improvement in patient note quality. No difference was found in the rates of CT performance or of CT being skipped without a clearly recorded CT indication in the patient notes. The structured template functioned as a checklist to support residents in writing more appropriately recorded patient notes for ED head injury patients. Such a template, customized to each clinical condition, can facilitate standardized patient management and improve patient safety in the ED.

  5. Observational Review and Analysis of Concussion: a Method for Conducting a Standardized Video Analysis of Concussion in Rugby League

    Gardner, Andrew J; Levi, Christopher R; Iverson, Grant L


    .... The aim of this study is to evaluate whether independent raters reliably agreed on the injury characterization when using a standardized observational instrument to record video footage of National Rugby League (NRL...

  6. Enlightenment on Computer Network Reliability From Transportation Network Reliability

    Hu Wenjun; Zhou Xizhao


    Referring to the transportation network reliability problem, five new computer network reliability definitions are proposed and discussed: computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability and computer network potential reliability. Finally, strategies are suggested to enhance network reliability.

  7. Human Reliability Program Overview

    Bodin, Michael


    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  8. Power electronics reliability analysis.

    Smith, Mark A.; Atcitty, Stanley


    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
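
    A minimal sketch of the fault-tree approach mentioned above: component failure probabilities combined through OR/AND gates under an independence assumption. The gates, components and probabilities are invented and do not reproduce the report's fictitious device.

```python
# Hypothetical sketch: deriving system unreliability from component failure probabilities
# with a two-level fault tree (OR gate over causes, AND gate over redundant parts).

def or_gate(failure_probs):
    """Top event occurs if any input fails (assuming independence)."""
    p_ok = 1.0
    for p in failure_probs:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

def and_gate(failure_probs):
    """Top event occurs only if all inputs fail (redundancy)."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

# Example: converter fails if the control board fails, OR both redundant fans fail,
# OR any one of three switching modules fails. All probabilities are made up.
p_system_failure = or_gate([
    0.02,                         # control board
    and_gate([0.10, 0.10]),       # redundant cooling fans
    or_gate([0.01, 0.01, 0.01]),  # switching modules (any failure brings the system down)
])
print(round(p_system_failure, 4))
print("system reliability:", round(1.0 - p_system_failure, 4))
```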

  9. Methodology for Naturalistic Observation of Therapist Behavior in Group Psychotherapy.

    Weiss, Leslie Bloch

    This paper presents a research method derived from the functional analysis of behavior currently common among operant behavior therapists. Naturalistic observation, the method used, encompasses behavioral-level description of events, systematic observation and recording by means of codes, assessment of inter-judge reliability, as well as targeting…

  10. Reliable Design Versus Trust

    Berg, Melanie; LaBel, Kenneth A.


    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests FPGA internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  11. Large surface meltwater discharge from the Kangerlussuaq sector of the Greenland ice sheet during the record-warm year 2010 explained by detailed energy balance observations

    van As, D.; Hubbard, A.L.; Hasholt, B.; Mikkelsen, A.B.; van den Broeke, M.R.; Fausto, R.S.


    This study uses data from six on-ice weather stations, calibrated MODIS-derived albedo and proglacial river gauging measurements to drive and validate an energy balance model. We aim to quantify the record-setting positive temperature anomaly in 2010 and its effect on mass balance and runoff from

  12. Chapter 15: Reliability of Wind Turbines

    Sheng, Shuangwen; O' Connor, Ryan


    The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals for wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.
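
    As a sketch of the life data analysis workflow described above, the snippet below fits a two-parameter Weibull to component failure times by median-rank regression and evaluates reliability at a forecast horizon. The failure times are invented; the chapter's case studies are not reproduced.

```python
# Hypothetical sketch: fitting a two-parameter Weibull to failure times from maintenance
# records using median-rank regression, then forecasting reliability at a horizon.
import math

def weibull_median_rank_fit(failure_times):
    t = sorted(failure_times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    # median rank for the (i+1)-th ordered failure: (i+1-0.3)/(n+0.4)
    ys = [math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    beta = slope                        # shape parameter
    eta = math.exp(-intercept / beta)   # scale parameter (characteristic life)
    return beta, eta

def weibull_reliability(t, beta, eta):
    return math.exp(-(t / eta) ** beta)

gearbox_failures_hours = [8200, 11400, 15300, 17650, 21200, 26800]  # assumed data
beta, eta = weibull_median_rank_fit(gearbox_failures_hours)
print(round(beta, 2), round(eta), round(weibull_reliability(20000, beta, eta), 3))
```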

  13. Viking Lander reliability program

    Pilny, M. J.


    The Viking Lander reliability program is reviewed with attention given to the development of the reliability program requirements, reliability program management, documents evaluation, failure modes evaluation, production variation control, failure reporting and correction, and the parts program. Lander hardware failures which have occurred during the mission are listed.

  14. Reliability and agreement in student ratings of the class environment.

    Nelson, Peter M; Christ, Theodore J


    The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions).
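
    For readers unfamiliar with the first of these indices, the following Python sketch computes coefficient alpha from a generic respondents-by-items rating matrix; the data are randomly generated for illustration and are not from the REACT study, and the class-level reliability and agreement indices discussed in the abstract require additional multilevel machinery not shown here.

      # Minimal sketch: Cronbach's coefficient alpha for a ratings matrix.
      import numpy as np

      def cronbach_alpha(ratings):
          """Coefficient alpha for a (respondents x items) matrix."""
          ratings = np.asarray(ratings, dtype=float)
          k = ratings.shape[1]                          # number of items
          item_vars = ratings.var(axis=0, ddof=1)       # per-item variances
          total_var = ratings.sum(axis=1).var(ddof=1)   # variance of total scores
          return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

      # Hypothetical ratings: 20 students x 5 items on a 1-5 scale.
      rng = np.random.default_rng(0)
      base = rng.integers(1, 6, size=(20, 1))
      items = np.clip(base + rng.integers(-1, 2, size=(20, 5)), 1, 5)
      print(f"alpha = {cronbach_alpha(items):.2f}")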

  15. Wind turbine reliability database update.

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.


    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  16. Record Statistics and Dynamics

    Sibani, Paolo; Jensen, Henrik J.


    The term record statistics covers the statistical properties of records within an ordered series of numerical data obtained from observations or measurements. A record within such a series is simply a value larger (or smaller) than all preceding values. The mathematical properties of records strongly... fluctuations of, e.g., the energy are able to push the system past some sort of 'edge of stability', inducing irreversible configurational changes, whose statistics then closely follows the statistics of record fluctuations...
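
    The definition of a record quoted above is easy to simulate; the short Python sketch below counts upper records in i.i.d. random series and compares the average count with the classical expectation, the harmonic number H_n (approximately ln n), purely as an illustration of the definition rather than of the paper's glassy-dynamics results.

      # Minimal sketch: counting upper records in an i.i.d. series.
      import numpy as np

      def count_records(series):
          """Count entries that exceed every preceding entry (upper records)."""
          running_max, records = -np.inf, 0
          for x in series:
              if x > running_max:
                  records += 1
                  running_max = x
          return records

      rng = np.random.default_rng(1)
      n, trials = 1000, 2000
      observed = np.mean([count_records(rng.standard_normal(n)) for _ in range(trials)])
      expected = np.sum(1.0 / np.arange(1, n + 1))   # H_n, roughly ln(n) + 0.577

      print(f"mean number of records over {trials} series of length {n}: {observed:.2f}")
      print(f"expected (harmonic number H_n): {expected:.2f}")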

  17. Observation, observation, observation

    Denise Pumain


    After having deplored for a long time the rarity, the slow pace of preparation and the lack of access to what seemed to be mountainous terrains, big data has been created, emerging in fluid masses, wave after wave, to become a new sphere of knowledge, useful for human and social sciences. Because many of the systems that record them contain a geo-positioning device - from the GPS, the mobile phone, the surveillance cameras, sensors hidden in vehicles or scattered in the environment to the vir...

  18. Bank Record Processing


    Barnett Banks of Florida, Inc. operates 150 banking offices in 80 Florida cities. Banking offices have computerized systems for processing deposits or withdrawals in checking/savings accounts, and for handling commercial and installment loan transactions. In developing a network engineering design for the terminals used in record processing, an affiliate, Barnett Computing Company, used COSMIC's STATCOM program. This program provided a reliable network design tool and avoided the cost of developing new software.

  19. Reliability of pre- and intraoperative tests for biliary lithiasis

    Escallon, A. Jr.; Rosales, W.; Aldrete, J.S.


    The records of 242 patients, operated consecutively for biliary lithiasis, were analyzed to determine the reliability of oral cholecystography (OCG), ultrasonography (US), and HIDA in detecting biliary calculi. Preoperative interpretations were correlated to operative findings. OCG obtained in 138 patients was accurate in 92%. US obtained in 150 was correct in 89%. The accuracy of HIDA was 92% in acute and 78% in chronic cholecystitis. Intraoperative cholangiography (IOC) done in 173 patients indicated the need for exploratory choledochotomy in 24; 21 had choledocholithiasis. These observations suggest that OCG and US are very accurate, but not infallible, in detecting cholelithiasis. US should be done first; when doubt persists, the addition of OCG allows the preoperative diagnosis of gallstones in 97% of the cases. HIDA is highly accurate but not infallible in detecting acute calculous cholecystitis. IOC is very reliable in detecting choledocholithiasis; thus, its routine use is justifiable.

  20. Reliability of pre- and intraoperative tests for biliary lithiasis.

    Escallon, A; Rosales, W; Aldrete, J S


    The records of 242 patients, operated consecutively for biliary lithiasis, were analyzed to determine the reliability of oral cholecystography (OCG), ultrasonography (US), and HIDA in detecting biliary calculi. Preoperative interpretations were correlated to operative findings. OCG obtained in 138 patients was accurate in 92%. US obtained in 150 was correct in 89%. The accuracy of HIDA was 92% in acute and 78% in chronic cholecystitis. Intraoperative cholangiography (IOC) done in 173 patients indicated the need for exploratory choledochotomy in 24; 21 had choledocholithiasis. These observations suggest that OCG and US are very accurate, but not infallible, in detecting cholelithiasis. US should be done first; when doubt persists, the addition of OCG allows the preoperative diagnosis of gallstones in 97% of the cases. HIDA is highly accurate but not infallible in detecting acute calculous cholecystitis. IOC is very reliable in detecting choledocholithiasis; thus, its routine use is justifiable. PMID:3888131

  1. Reliability and safety engineering

    Verma, Ajit Kumar; Karanki, Durga Rao


    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  2. Measurement System Reliability Assessment

    Kłos Ryszard


    Decision-making in problem situations is based on up-to-date and reliable information. A great deal of information is subject to rapid change, hence it may be outdated or manipulated and lead to erroneous decisions. It is crucial to be able to assess the information obtained. In order to ensure its reliability, it is best to obtain it through one's own measurement process. In such a case, assessing the reliability of the measurement system is crucial. The article describes a general approach to assessing the reliability of measurement systems.

  3. Reliable knowledge discovery

    Dai, Honghua; Smirnov, Evgueni


    Reliable Knowledge Discovery focuses on theory, methods, and techniques for RKDD, a new sub-field of KDD. It studies the theory and methods to assure the reliability and trustworthiness of discovered knowledge and to maintain the stability and consistency of knowledge discovery processes. RKDD has a broad spectrum of applications, especially in critical domains like medicine, finance, and military. Reliable Knowledge Discovery also presents methods and techniques for designing robust knowledge-discovery processes. Approaches to assessing the reliability of the discovered knowledge are introduc

  4. Reliability of fluid systems

    Kopáček Jaroslav


    This paper focuses on the importance of assessing reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is treated as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, together with their application in calculations for serial, parallel and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
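
    As a hedged companion to the serial, parallel and backed-up system calculations mentioned in the abstract (the element reliabilities below are invented, not the paper's pneumatic circuit data), a minimal Python sketch:

      # Minimal sketch: reliability of series and parallel (redundant) arrangements.
      import math

      def series(reliabilities):
          """Series system: it fails if any element fails."""
          return math.prod(reliabilities)

      def parallel(reliabilities):
          """Parallel (redundant) system: it fails only if all elements fail."""
          return 1.0 - math.prod(1.0 - r for r in reliabilities)

      # Hypothetical pneumatic circuit: compressor -> filter -> valve,
      # with the valve duplicated in parallel as a simple form of back-up.
      r_compressor, r_filter, r_valve = 0.98, 0.995, 0.97
      r_valve_pair = parallel([r_valve, r_valve])
      r_system = series([r_compressor, r_filter, r_valve_pair])
      print(f"backed-up valve: {r_valve_pair:.4f}, system: {r_system:.4f}")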

  5. Circuit design for reliability

    Cao, Yu; Wirth, Gilson


    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective in enhancing the reliability of various circuit units. The authors provide readers with techniques for state-of-the-art and future technologies, covering technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides a comprehensive review of various reliability mechanisms at sub-45nm nodes; describes practical modeling and characterization techniques for reliability; includes a thorough presentation of robust design techniques for major VLSI design units; promotes physical understanding with first-principle simulations.

  6. Robert Recorde

    Williams, Jack


    The 16th-century intellectual Robert Recorde is chiefly remembered for introducing the equals sign into algebra, yet the greater significance and broader scope of his work are often overlooked. This book presents an authoritative and in-depth analysis of the man, his achievements and his historical importance. This scholarly yet accessible work examines the latest evidence on all aspects of Recorde's life, throwing new light on a character deserving of greater recognition. Topics and features: presents a concise chronology of Recorde's life; examines his published works; describes Recorde's pro

  7. Strong-motion observations recorded in strategic public buildings during the 24 August 2016 Mw 6.0 Amatrice (central Italy) earthquake

    Chiara Ladina


    The Marche Region, in collaboration with INGV, has promoted a project to monitor strategic public buildings with permanent accelerometers installed at the base of the structures. Public structures play a primary role in maintaining the functionality of a local community. Information about the vibratory characteristics of the building and subsoil, in addition to the instrumental seismic history describing the shaking at the base of the structure, is collected for each building. The real-time acquisition of seismic data makes accelerometric time histories available soon after the occurrence of an earthquake. The event of 24 August 2016 in central Italy was an opportunity to test the functionality of the implemented system. In this work, the parameters obtained from strong-motion data recorded at the base of the structures were analyzed, and the values obtained were used in empirical relationships to estimate macroseismic intensity values and damage indices.

  8. LED system reliability

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.


    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific ex

  9. Principles of Bridge Reliability

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated...

  10. Improving machinery reliability

    Bloch, Heinz P


    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  11. Hawaii Electric System Reliability

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  12. Hawaii electric system reliability.

    Silva Monroy, Cesar Augusto; Loose, Verne William


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  13. Interrater reliability of the needle examination in lumbosacral radiculopathy.

    Kendall, Richard; Werner, Robert A


    Low back pain and lumbar radiculopathy are among the most common painful disorders affecting the adult population. This study hypothesizes that there is good correlation between the diagnostic impression of an unblinded electromyographer, using clinical and electromyographic information, and an independent electromyographer, who uses the needle examination only to assess for lumbar radiculopathy. This is a prospective, single-blinded, observational pilot study. The needle examination was electronically recorded, reproduced, and shown to a second examiner, blinded to all clinical data. Diagnostic impressions from both examiners were recorded and evaluated for agreement. Six recorded cases were reviewed by 66 blinded examiners. Overall diagnostic agreement was 46.9% (60.5% faculty level, 28.5% resident level). Logistic regression shows a strong association between training level and agreement on diagnostic impression (odds ratio, 1.9; 95% confidence interval, 1.12-3.22; P = 0.019). This study shows that there is fair interrater reliability between faculty-level examiners and poor reliability among resident-level examiners when the needle examination is used to evaluate patients with lumbar radiculopathy.

  14. Relative accuracy and availability of an Irish National Database of dispensed medication as a source of medication history information: observational study and retrospective record analysis.

    Grimes, T


    WHAT IS KNOWN AND OBJECTIVE: The medication reconciliation process begins by identifying which medicines a patient used before presentation to hospital. This is time-consuming, labour intensive and may involve interruption of clinicians. We sought to identify the availability and accuracy of data held in a national dispensing database, relative to other sources of medication history information. METHODS: For patients admitted to two acute hospitals in Ireland, a Gold Standard Pre-Admission Medication List (GSPAML) was identified and corroborated with the patient or carer. The GSPAML was compared for accuracy and availability to PAMLs from other sources, including the Health Service Executive Primary Care Reimbursement Scheme (HSE-PCRS) dispensing database. RESULTS: Some 1111 medications were assessed for 97 patients, who had a median age of 74 years (range 18-92 years), a median of four co-morbidities (range 1-9) and used a median of 10 medications (range 3-25); half (52%) were male. The HSE-PCRS PAML was the most accurate source compared to lists provided by the general practitioner, community pharmacist or cited in previous hospital documentation: the list agreed for 74% of the medications the patients actually used, representing complete agreement for all medications in 17% of patients. It was equally contemporaneous to other sources, but was less reliable for male than female patients, those using increasing numbers of medications and those using one or more items that were not reimbursable by the HSE. WHAT IS NEW AND CONCLUSION: The HSE-PCRS database is a relatively accurate, available and contemporaneous source of medication history information and could support acute hospital medication reconciliation.

  15. Chapter 9: Reliability

    Algora, Carlos; Espinet-Gonzalez, Pilar; Vazquez, Manuel; Bosco, Nick; Miller, David; Kurtz, Sarah; Rubio, Francisca; McConnell, Robert


    This chapter describes the accumulated knowledge on CPV reliability, covering its fundamentals and qualification. It explains the reliability of solar cells, modules (including optics) and plants. The chapter discusses the relevant statistical distributions, namely exponential, normal and Weibull. The treatment of solar cell reliability covers the issues in accelerated aging tests of CPV solar cells, the types of failure, and failures in real-time operation. The chapter explores accelerated life tests, namely qualitative life tests (mainly HALT) and quantitative accelerated life tests (QALT). It examines other well-proven PV cells and/or semiconductor devices with extensive field experience, which share similar semiconductor materials, manufacturing techniques or operating conditions, namely III-V space solar cells and light-emitting diodes (LEDs). It addresses each of the identified reliability issues and presents the current state-of-the-art knowledge for their testing and evaluation. Finally, the chapter summarizes the CPV qualification and reliability standards.

  16. Estimation of 1-D velocity models beneath strong-motion observation sites in the Kathmandu Valley using strong-motion records from moderate-sized earthquakes

    Bijukchhen, Subeg M.; Takai, Nobuo; Shigefuji, Michiko; Ichiyanagi, Masayoshi; Sasatani, Tsutomu; Sugimura, Yokito


    The Himalayan collision zone experiences many seismic activities with large earthquakes occurring at certain time intervals. The damming of the proto-Bagmati River as a result of rapid mountain-building processes created a lake in the Kathmandu Valley that eventually dried out, leaving thick unconsolidated lacustrine deposits. Previous studies have shown that the sediments are 600 m thick in the center. A location in a seismically active region, and the possible amplification of seismic waves due to thick sediments, have made Kathmandu Valley seismically vulnerable. It has suffered devastation due to earthquakes several times in the past. The development of the Kathmandu Valley into the largest urban agglomerate in Nepal has exposed a large population to seismic hazards. This vulnerability was apparent during the Gorkha Earthquake (Mw7.8) on April 25, 2015, when the main shock and ensuing aftershocks claimed more than 1700 lives and nearly 13% of buildings inside the valley were completely damaged. Preparing safe and up-to-date building codes to reduce seismic risk requires a thorough study of ground motion amplification. Characterizing subsurface velocity structure is a step toward achieving that goal. We used the records from an array of strong-motion accelerometers installed by Hokkaido University and Tribhuvan University to construct 1-D velocity models of station sites by forward modeling of low-frequency S-waves. Filtered records (0.1-0.5 Hz) from one of the accelerometers installed at a rock site during a moderate-sized (mb4.9) earthquake on August 30, 2013, and three moderate-sized (Mw5.1, Mw5.1, and Mw5.5) aftershocks of the 2015 Gorkha Earthquake were used as input motion for modeling of low-frequency S-waves. We consulted available geological maps, cross-sections, and borehole data as the basis for initial models for the sediment sites. This study shows that the basin has an undulating topography and sediment sites have deposits of varying thicknesses

  17. Validity of recalled v. recorded birth weight: a systematic review and meta-analysis

    Shenkin, S. D.; Zhang, M.G.; De, G.; Mathur, S.; Mina, T.H.; Reynolds, R. M.


    Low birth weight is associated with adverse health outcomes. If birth weight records are not available, studies may use recalled birth weight. It is unclear whether this is reliable. We performed a systematic review and meta-analysis of studies comparing recalled with recorded birth weights. We followed the Meta-Analyses of Observational Studies in Epidemiology (MOOSE) statement and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched MEDLINE, EM...

  18. Four Forensic Entomology Case Studies: Records and Behavioral Observations on Seldom Reported Cadaver Fauna With Notes on Relevant Previous Occurrences and Ecology.

    Lindgren, Natalie K; Sisson, Melissa S; Archambeault, Alan D; Rahlwes, Brent C; Willett, James R; Bucheli, Sibyl R


    A yearlong survey of insect taxa associated with human decomposition was conducted at the Southeast Texas Applied Forensic Science (STAFS) facility located in the Center for Biological Field Studies of Sam Houston State University in Huntsville, TX. During this study, four insect-cadaver interactions were observed that represent previously poorly documented yet forensically significant interactions: Syrphidae maggots colonized a corpse in an aquatic situation; Psychodidae adults mated and oviposited on an algal film that was present on a corpse that had been recently removed from water; several Panorpidae were the first insects to feed upon a freshly placed corpse in the autumn; and a noctuid caterpillar was found chewing and ingesting dried human skin. Baseline knowledge of insect-cadaver interactions is the foundation of forensic entomology, and unique observations have the potential to expand our understanding of decomposition ecology.

  19. Reliability-Centric High-Level Synthesis

    Tosun, S; Arvas, E; Kandemir, M; Xie, Yuan


    Importance of addressing soft errors in both safety critical applications and commercial consumer products is increasing, mainly due to ever shrinking geometries, higher-density circuits, and employment of power-saving techniques such as voltage scaling and component shut-down. As a result, it is becoming necessary to treat reliability as a first-class citizen in system design. In particular, reliability decisions taken early in system design can have significant benefits in terms of design quality. Motivated by this observation, this paper presents a reliability-centric high-level synthesis approach that addresses the soft error problem. The proposed approach tries to maximize reliability of the design while observing the bounds on area and performance, and makes use of our reliability characterization of hardware components such as adders and multipliers. We implemented the proposed approach, performed experiments with several designs, and compared the results with those obtained by a prior proposal.

  20. Photovoltaic system reliability

    Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States)]; Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)]; and others


    This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.

  1. Structural Reliability Methods

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...
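
    To make the failure probability and reliability index concepts concrete (this is a generic textbook-style example, not taken from the book), a minimal Python sketch for a linear limit state g = R - S with independent normal resistance R and load S:

      # Minimal sketch: reliability index and failure probability for g = R - S.
      import math
      from scipy.stats import norm

      mu_R, sigma_R = 350.0, 35.0    # illustrative resistance statistics
      mu_S, sigma_S = 200.0, 40.0    # illustrative load statistics

      mu_g = mu_R - mu_S                      # mean safety margin
      sigma_g = math.hypot(sigma_R, sigma_S)  # standard deviation of the margin

      beta = mu_g / sigma_g                   # (Cornell) reliability index
      p_f = norm.cdf(-beta)                   # failure probability P(g < 0)
      print(f"beta = {beta:.2f}, P_f = {p_f:.2e}")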

  2. Phenological Records

    National Oceanic and Atmospheric Administration, Department of Commerce — Phenology is the scientific study of periodic biological phenomena, such as flowering, breeding, and migration, in relation to climatic conditions. The few records...

  3. Reliability of lightning resistant overhead distribution lines

    Tolbert, L.M.; Cleveland, J.T.; Degenhardt, L.J.


    An assessment of the 32-year historical reliability of the 13.8 kV electrical distribution system at the Oak Ridge National Laboratory (ORNL) in Tennessee has yielded several conclusions useful in the planning of industrial power systems. The system configuration at ORNL has remained essentially unchanged over the last 32 years, which allows a meaningful comparison of reliability trends for the plant's eight overhead distribution lines, two of which were built in the 1960s with lightning-resistant construction techniques. Meticulous records indicating the cause, duration, and location of 135 electric outages in the plant's distribution system have allowed a reliability assessment to be performed. The assessment clearly shows how differences in voltage construction class, length, age, and maximum elevation above a reference elevation influence the reliability of overhead power distribution lines. Comparisons are also made between the ORNL historical data and predicted failure rates from ANSI and IEEE industry surveys.

  4. Air Quality Applications Based on Space Observations: The Role of the 11 Years OMI Data Record and the Potentials for TROPOMI

    Levelt, P.; Veefkind, J. P.; Kleipool, Q.; Eskes, H.; A, R. V. D.; Mijling, B.; Tamminen, J.; Joiner, J.; Bhartia, P. K.


    In the last three decades, the capabilities for measuring atmospheric composition from space have grown tremendously with ESA's ENVISAT and NASA's EOS-Aura satellite programmes. The potential to operationally monitor atmospheric composition, as the meteorological community does for the physical parameters, is now within reach. At the same time, the importance for society of operational environmental monitoring, related to the ozone layer, air quality and climate change, became apparent. The Ozone Monitoring Instrument (OMI), launched on board NASA's EOS-Aura spacecraft on July 15, 2004, provides unique contributions to air quality monitoring from space. The combination of urban-scale resolution (13 x 24 km2 in nadir) and daily global coverage proved to be key features for the air quality community. The OMI data are currently used for improving air quality forecasts, for inverting high-resolution emission maps, for UV forecasts and for volcanic plume warning systems for aviation. Due to its 11 years of continuous operation, OMI now provides the longest NO2 record from space, which is essential to understand changes in emissions globally. In 2016, the Tropospheric Monitoring Instrument (TROPOMI) will be launched on board ESA's Sentinel-5 Precursor satellite. TROPOMI will have a spatial resolution of 7x7 km2 in nadir, a more than six-fold improvement over OMI. The high spatial resolution serves two goals: (1) emission sources can be detected with even better accuracy, and (2) the number of cloud-free ground pixels will increase substantially. TROPOMI also adds spectral bands that allow for better cloud corrections, as well as the retrieval of carbon monoxide and methane. TROPOMI will be an important satellite mission for the Copernicus atmosphere service. TROPOMI will play a key role in the Air Quality Constellation, being the polar instrument that can link the three GEO UVN instruments, Sentinel-4, TEMPO and GEMS. Thus, TROPOMI can serve as a

  5. Reliable Electronic Equipment

    N. A. Nayak


    The reliability aspects of electronic equipment are discussed. To obtain optimum results, close cooperation between the components engineer, the design engineer and the production engineer is suggested.

  6. Reliability prediction techniques

    Whittaker, B.; Worthington, B.; Lord, J.F.; Pinkard, D.


    The paper demonstrates the feasibility of applying reliability assessment techniques to mining equipment. A number of techniques are identified and described and examples of their use in assessing mining equipment are given. These techniques include: reliability prediction; failure analysis; design audit; maintainability; availability; and life cycle costing. Specific conclusions regarding the usefulness of each technique are outlined. The choice of techniques depends upon both the type of equipment being assessed and its stage of development, with numerical prediction best suited for electronic equipment, and fault analysis and design audit suited to mechanical equipment. Reliability assessments involve much detailed and time-consuming work, but it has been demonstrated that the resulting reliability improvements lead to savings in service costs which more than offset the cost of the evaluation.

  7. The rating reliability calculator

    Solomon David J


    Background: Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago, Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods: The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program uploads them to the server to calculate the reliability and other statistics describing the ratings. Results: When the program is run, it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program estimates the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion: This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to obtain complete rating data. I would welcome other researchers revising and enhancing the program.
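
    The Spearman-Brown step mentioned in the results is simple enough to show directly; the Python sketch below is an independent re-statement of that formula (it is not the utility's PHP source and does not implement Ebel's incomplete-data algorithm), with an invented single-rating reliability for illustration.

      # Minimal sketch: Spearman-Brown prophecy formula.
      def spearman_brown(single_rating_reliability, n_ratings):
          """Predicted reliability of the mean of n ratings."""
          r = single_rating_reliability
          return (n_ratings * r) / (1.0 + (n_ratings - 1.0) * r)

      # If one judge's rating has reliability 0.45, the average of 4 judges has:
      print(f"{spearman_brown(0.45, 4):.2f}")   # roughly 0.77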

  8. Reliability of power connections

    BRAUNOVIC Milenko


    Despite the use of various preventive maintenance measures, there are still a number of problem areas that can adversely affect system reliability. Also, economic constraints have pushed the designs of power connections closer to the limits allowed by the existing standards. The major parameters influencing the reliability and life of Al-Al and Al-Cu connections are identified. The effectiveness of various palliative measures is determined, and the misconceptions about their effectiveness are dealt with in detail.

  9. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  10. Sensitivity Analysis of Component Reliability



    In a system, every component occupies a unique position and has its own failure characteristics. When a component's reliability is changed, its effect on system reliability is not equal to that of other components. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. All of this helps in analysing or improving system reliability.
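
    A hedged numerical illustration of the idea that equal changes in component reliability have unequal effects on system reliability (this is a generic example, not the paper's matrix formulation): the sketch below differentiates the reliability of a small series-parallel system with respect to each component's reliability.

      # Minimal sketch: component reliability sensitivity dR_sys/dr_i.
      def system_reliability(r):
          """Example: component 0 in series with the parallel pair (1, 2)."""
          return r[0] * (1.0 - (1.0 - r[1]) * (1.0 - r[2]))

      def sensitivity(r, i, h=1e-6):
          """Central-difference estimate of dR_sys/dr_i."""
          hi, lo = list(r), list(r)
          hi[i] += h
          lo[i] -= h
          return (system_reliability(hi) - system_reliability(lo)) / (2.0 * h)

      r = [0.95, 0.90, 0.80]
      for i in range(len(r)):
          print(f"component {i}: sensitivity = {sensitivity(r, i):.3f}")

    Here the series component dominates (sensitivity near 0.98), while each redundant component contributes far less (about 0.19 and 0.10), which is exactly the kind of ranking the sensitivity measure is meant to expose.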

  11. Reliability and Validity Assessment of a Linear Position Transducer

    Manuel V. Garnacho-Castaño


    The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and the T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly for AV and AP) make this device a useful tool for monitoring resistance training.
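
    The systematic bias and random error figures quoted above are Bland-Altman-style quantities; as a hedged illustration (simulated numbers, not the study's data), the Python sketch below computes them for paired measurements from a reference device and a device under validation.

      # Minimal sketch: systematic bias and random error between two devices.
      import numpy as np

      rng = np.random.default_rng(2)
      true_power = rng.uniform(300, 900, size=30)                # hypothetical peak power, W
      reference = true_power + rng.normal(0, 25, size=30)        # criterion system
      candidate = true_power + 20 + rng.normal(0, 30, size=30)   # device under validation

      diff = candidate - reference
      bias = diff.mean()                       # systematic bias
      random_error = 1.96 * diff.std(ddof=1)   # random error (half-width of limits of agreement)
      print(f"bias = {bias:.1f} W, random error = +/-{random_error:.1f} W")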

  12. Boolean networks with reliable dynamics

    Peixoto, Tiago P


    We investigated the properties of Boolean networks that follow a given reliable trajectory in state space. A reliable trajectory is defined as a sequence of states which is independent of the order in which the nodes are updated. We explored numerically the topology, the update functions, and the state space structure of these networks, which we constructed using a minimum number of links and the simplest update functions. We found that the clustering coefficient is larger than in random networks, and that the probability distribution of three-node motifs is similar to that found in gene regulation networks. Among the update functions, only a subset of all possible functions occur, and they can be classified according to their probability. More homogeneous functions occur more often, leading to a dominance of canalyzing functions. Finally, we studied the entire state space of the networks. We observed that with increasing systems size, fixed points become more dominant, moving the networks close to the frozen...


    Tamargazin, O. A.; National Aviation University; Vlasenko, P. O.; National Aviation University


    The airline's operational structure for Reliability Program implementation — engineering division, reliability division, reliability control division, aircraft maintenance division, quality assurance division — is considered. The structure of the airline's Reliability Program is shown. The use of the Reliability Program to reduce aircraft maintenance costs is proposed.

  14. Ultra reliability at NASA

    Shapiro, Andrew A.


    Ultra reliable systems are critical to NASA, particularly as consideration is being given to extended lunar missions and manned missions to Mars. NASA has formulated a program designed to improve the reliability of NASA systems. The long-term goal of the NASA ultra reliability effort is to ultimately improve NASA systems by an order of magnitude. The approach outlined in this presentation involves the steps used in developing a strategic plan to achieve the long-term objective of ultra reliability. Consideration is given to: complex systems, hardware (including aircraft, aerospace craft and launch vehicles), software, human interactions, long-life missions, infrastructure development, and cross-cutting technologies. Several NASA-wide workshops have been held, identifying issues for reliability improvement and providing mitigation strategies for these issues. In addition to representation from all of the NASA centers, experts from government (NASA and non-NASA), universities and industry participated. Highlights of a strategic plan, which is being developed using the results from these workshops, will be presented.

  15. Photovoltaic module reliability workshop

    Mrig, L. (ed.)


    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986-1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  16. Student Records

    Fields, Cheryl


    Another topic involving privacy has attracted considerable attention in recent months--the "student unit record" issue. The U.S. Department of Education concluded in March that it would be feasible to help address lawmakers' concerns about accountability in higher education by constructing a database capable of tracking students from institution…

  17. Record dynamics

    Robe, Dominic M.; Boettcher, Stefan; Sibani, Paolo


    ...de facto irreversible and become increasingly harder to achieve. Thus, a progression of record-sized dynamical barriers is traversed in the approach to equilibration. Accordingly, the statistics of the events is closely described by a log-Poisson process. Originally developed for relaxation in spin glasses...

  18. Reliability Centered Maintenance - Methodologies

    Kammerer, Catherine C.


    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  19. Gearbox Reliability Collaborative Update (Presentation)

    Sheng, S.; Keller, J.; Glinsky, C.


    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  20. System Reliability Analysis: Foundations.


    Performance formulas for systems subject to preventive maintenance are given. ... Network reliability in this case is P{s can communicate with the terminal t} = h(p) ... For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash

  1. Reliability of power output during eccentric sprint cycling.

    Brughelli, Matt; Van Leemputte, Marc


    The purpose of this study was to determine the reliability of power outputs during maximal intensity eccentric cycling over short durations (i.e., eccentric sprint cycling) on a "motor-driven" isokinetic ergometer. Fourteen physically active male subjects performed isokinetic eccentric cycling sprints at 40, 60, 80, 100, and 120 revolutions per minute (rpm) on 4 separate occasions (T1-T4). Each sprint lasted for 6 seconds, and absolute measures of mean power (MP) and peak power (PP) per revolution were recorded. Significant increases in MP and PP were observed between T1 and subsequent trials, but no significant differences were identified between T2, T3, and T4. The coefficient of variation (CV) and intraclass correlation coefficient (ICC) were calculated to reflect within-subject and between-session reliability of MP and PP at each cadence. The CV improved to below 10% for cadences of 60, 80, 100, and 120 rpm between T3 and T4, and the majority of ICC values improved to above 0.90. The remaining ICC values remained in the moderate range between T3 and T4 (i.e., 0.82-0.89). Coefficient of variation and ICC values for the 40 rpm cadence remained at unacceptable levels throughout the 4 trials and thus should be avoided in future investigations. The results of this study indicate that reliable power outputs may be obtained after 2 familiarization sessions during eccentric sprint cycling at cadences ranging from 60 to 120 rpm.
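
    For readers who want the within-subject coefficient of variation made explicit, the Python sketch below computes a between-session CV from paired trial data; the numbers are simulated and the exact CV formula used in the study may differ, so treat this only as one common convention.

      # Minimal sketch: between-session coefficient of variation (CV).
      import numpy as np

      rng = np.random.default_rng(3)
      trial_3 = rng.normal(650, 80, size=14)              # hypothetical mean power (W), trial T3
      trial_4 = trial_3 + rng.normal(0, 30, size=14)      # retest at trial T4

      pairs = np.stack([trial_3, trial_4], axis=1)
      within_sd = pairs.std(axis=1, ddof=1)               # per-subject SD across the two trials
      cv_percent = 100.0 * np.mean(within_sd / pairs.mean(axis=1))
      print(f"mean within-subject CV = {cv_percent:.1f}%")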

  2. HAMR Thermal Reliability via Inverse Electromagnetic Design

    Bhargava, Samarth


    Heat-Assisted Magnetic Recording (HAMR) has promise to allow for data writing in hard disks beyond 1 Tb/in2 areal density, by temporarily heating the area of a single datum to its Curie temperature while simultaneously applying a magnetic field from a conventional electromagnet. However, the metallic optical antenna or near-field transducer (NFT) used to apply the nano-scale heating to the media may self-heat by several hundreds of degrees. With the NFT reaching such extreme temperatures, demonstrations of HAMR technology observe write-head lifetimes that are orders of magnitude less than those required for a commercial product. Hence, thermal reliability of the NFT is of utmost importance. In this paper, we first derive fundamental limits on the self-heating of the NFT to drive design choices for low-temperature operation. Next, we employ Inverse Electromagnetic Design software, which provides deterministic gradient-based optimization of electromagnetic structures with thousands of degrees of freedom using t...

  3. Comparative reliability of cheiloscopy and palatoscopy in human identification

    Sharma Preeti


    Background: Establishing a person's identity in postmortem scenarios can be a very difficult process. Dental records, fingerprint and DNA comparisons are probably the most common techniques used in this context, allowing fast and reliable identification processes. However, under certain circumstances they cannot always be used; sometimes it is necessary to apply different and less well-known techniques. In forensic identification, lip prints and palatal rugae patterns can lead us to important information and help in a person's identification. This study aims to ascertain the use of lip prints and palatal rugae patterns in identification and sex differentiation. Materials and Methods: A total of 100 subjects, 50 males and 50 females, were selected from among the students of Subharti Dental College, Meerut. The materials used to record lip prints were lipstick, bond paper, cellophane tape, a brush for applying the lipstick, and a magnifying lens. To study palatal rugae, alginate impressions were taken and the dental casts analyzed for their various patterns. Results: Statistical analysis (applying the Z-test for proportions) showed significant differences for type I, I′, IV and V lip patterns (P < 0.05) between males and females, while no significant difference was observed for the palatal rugae patterns (P > 0.05). Conclusion: This study not only showed that palatal rugae and lip prints are unique to an individual, but also that lip prints are more reliable for recognition of the sex of an individual.

  4. Reliability in individual monitoring service.

    Mod Ali, N


    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and the issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, the reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading of the reporting program through a web-based e-SSDL marks a major improvement in Nuclear Malaysia's IMS reliability as a whole. The system is a vital step in providing a user-friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring by the IMS and thus enhances the status of the radiation protection framework of the country.

  5. ATLAS Recordings

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    Web archives of ATLAS plenary sessions, workshops, meetings, and tutorials from 2005 until this past month are available via the University of Michigan portal here. Most recent additions include the Trigger-Aware Analysis Tutorial by Monika Wielers on March 23 and the ROOT Workshop held at CERN on March 26-27. Viewing requires a standard web browser with the RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Feedback welcome: our group is making arrangements now to record plenary sessions, tutorials, and other important ATLAS events for 2007. Your suggestions for potential recordings, as well as your feedback on existing archives, are always welcome. Please contact us. Thank you. Enjoy the lectures!

  6. Expert system aids reliability

    Johnson, A.T. [Tennessee Gas Pipeline, Houston, TX (United States)]


    Quality and reliability are key requirements in the energy transmission industry. Tennessee Gas Co., a division of El Paso Energy, has applied Gensym's G2, an object-oriented expert system programming language, as a standard tool for maintaining and improving quality and reliability in pipeline operation. Tennessee created a small team of gas controllers and engineers to develop a Proactive Controller's Assistant (ProCA) that provides recommendations for operating the pipeline more efficiently, reliably and safely. The controllers' pipeline operating knowledge is recreated in G2 in the form of rules and procedures in ProCA. Two G2 programmers supporting the Gas Control Room add information to the ProCA knowledge base daily. The result is a dynamic, constantly improving system that supports not only the pipeline controllers in their operations, but also the measurement and communications departments' requests for special studies. The Proactive Controller's Assistant development focuses on the following areas: alarm management; pipeline efficiency; reliability; fuel efficiency; and controller development.

  7. Reliability based structural design

    Vrouwenvelder, A.C.W.M.


    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A ke

  8. Reliability based structural design

    Vrouwenvelder, A.C.W.M.


    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A ke

  9. The value of reliability

    Fosgerau, Mogens; Karlström, Anders


    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...

  10. Parametric Mass Reliability Study

    Holt, James P.


    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.

  11. Avionics Design for Reliability


    Consultant, P.O. Box 181, Hazelwood, Missouri 63042, U.S.A. Contents: List of Speakers; Introduction and Overview - Reliability Under... ...essential, all the more so since in this case the reliability selection procedure is rather inefficient. The distribution of failures follows...

  12. Wind Energy - How Reliable.


    The reliability of a wind energy system depends on the size of the propeller and the size of the back-up energy storage. Design of the optimum system ... speed incidents which generate a significant part of the wind energy. A nomogram is presented, based on some continuous wind speed measurements.

  13. The reliability horizon

    Visser, M


    The "reliability horizon" for semi-classical quantum gravity quantifies the extent to which we should trust semi-classical quantum gravity, and gives a handle on just where the "Planck regime" resides. The key obstruction to pushing semi-classical quantum gravity into the Planck regime is often the existence of large metric fluctuations, rather than a large back-reaction.

  14. High reliability organizations

    Gallis, R.; Zwetsloot, G.I.J.M.


    High Reliability Organizations (HRO’s) are organizations that constantly face serious and complex (safety) risks yet succeed in realising an excellent safety performance. In such situations acceptable levels of safety cannot be achieved by traditional safety management only. HRO’s manage safety

  15. When and why are reliable organizations favored?

    Ethiraj, Sendil; Yi, Sangyoon

    In the 1980s, organization theory witnessed a decade-long debate about the incentives and consequences of organizational change. Though the fountainhead of this debate was the observation that reliable organizations are the “consequence” rather than the “cause” of selection forces, much ... in this assertion. Principally, we show that whether reliable organizations are favored depends on the nature of the environment. When environments are complex, reliability is selected out; in more complex environments, variability is more valued by selection forces. Further, we also examine the consequences ...

  16. Decision theory in structural reliability

    Thomas, J. M.; Hanagud, S.; Hawk, J. D.


    Some fundamentals of reliability analysis as applicable to aerospace structures are reviewed, and the concept of a test option is introduced. A decision methodology, based on statistical decision theory, is developed for determining the most cost-effective design factor and method of testing for a given structural assembly. The method is applied to several Saturn V and Space Shuttle structural assemblies as examples. It is observed that the cost and weight features of the design have a significant effect on the optimum decision.

  17. The reliability of commonly used arthroscopic classifications of ligamentum teres pathology.

    Devitt, Brian M; Smith, Bjorn; Stapf, Robert; Jo, Suenghwan; O'Donnell, John M


    The importance of the ligamentum teres (LT) in the hip is increasingly being recognized. However, the incidence of LT tears in the literature is extremely variable. Although classification systems exist, their reliability in classifying LT pathology arthroscopically has not been well defined. The aims were, first, to determine the inter- and intra-observer reliability of two existing classification systems for the diagnosis of LT pathology at hip arthroscopy and, second, to identify key pathological findings currently not included. Four experienced hip arthroscopists reviewed 40 standardized arthroscopic videos. Arthroscopic findings of the LT were classified using the Gray and Villar (G&V) and descriptive classification (DC). Reviewers were asked to record other relevant pathology encountered. Inter- and intra-observer reliability was defined using Fleiss' kappa and Cohen's kappa statistics. Both classifications demonstrated fair inter-observer reliability. The intra-observer reliability for G&V was moderate-to-substantial and for DC was slight-to-moderate. An absolute agreement rate of 10% (G&V) and 37.5% (DC) was found. Differentiation between normal, and partial or low-grade, tears was a common source of disagreement. The prevalence of LT pathology was 90%. Synovitis was the most common diagnostic finding that was not included in either classification system used in this study. Arthroscopic classification of LT pathology using the G&V and the DC demonstrated only fair inter-observer reliability. The major discrepancy in interpretation was between normal, and partial or low-grade, tears. The presence of synovitis was not included in either classification but was considered an important arthroscopic finding. Thorough arthroscopic scrutiny reveals that the prevalence of LT pathology is higher than previously reported.
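
    For readers who want to reproduce this kind of agreement analysis, the following is a minimal Python sketch of the two statistics named above: Cohen's kappa for intra-observer agreement between two readings and Fleiss' kappa for agreement among several raters. The category labels and toy data are illustrative only and are not the study's data.

```python
"""Minimal sketch (not the authors' code) of Cohen's kappa and Fleiss' kappa."""

from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two paired ratings of the same items."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    p1, p2 = Counter(r1), Counter(r2)
    pe = sum((p1[c] / n) * (p2[c] / n) for c in cats)      # chance agreement
    return (po - pe) / (1 - pe)

def fleiss_kappa(counts):
    """Fleiss' kappa; counts[i][j] = number of raters assigning item i to category j."""
    N = len(counts)
    n = sum(counts[0])                                     # raters per item (constant)
    pj = [sum(row[j] for row in counts) / (N * n) for j in range(len(counts[0]))]
    Pi = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    Pbar, Pe = sum(Pi) / N, sum(p * p for p in pj)
    return (Pbar - Pe) / (1 - Pe)

# Illustrative example: 4 raters grading 3 videos into 3 hypothetical LT grades.
print(cohen_kappa(["normal", "partial", "complete"], ["normal", "partial", "partial"]))
print(fleiss_kappa([[4, 0, 0], [2, 2, 0], [1, 2, 1]]))
```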

  18. Reliability in the utility computing era: Towards reliable Fog computing

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.


    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, a non-trivial extension of the Cloud, is considered, and the reliability of the networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  19. Scaled CMOS Technology Reliability Users Guide

    White, Mark


    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is
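
    The two-population picture described above (random early failures with Weibull slope beta = 1 versus a wear-out population with an increasing failure rate) and the FIT-normalized soft-error rate can be sketched as follows. All parameter values are invented for illustration; this is not the report's model fit.

```python
"""Illustrative Weibull / failure-rate arithmetic only; values are made up."""

import math

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) for a Weibull(beta, eta) population."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def fit_per_gbit(failures, device_hours, gbit_per_device):
    """FIT = failures per 1e9 device-hours, normalized per Gb of memory."""
    return failures / (device_hours * gbit_per_device) * 1e9

# Early-life population: beta = 1 (constant rate); main population: beta > 1 (wear-out).
for t in (100.0, 1000.0, 10000.0):          # hours
    print(t, weibull_hazard(t, beta=1.0, eta=5e5), weibull_hazard(t, beta=2.5, eta=5e4))

# Example: 12 retention-time soft errors over 2e5 device-hours on 4-Gb parts.
print(fit_per_gbit(failures=12, device_hours=2e5, gbit_per_device=4.0))
```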

  20. The Impact of the Revised Sunspot Record on Solar Irradiance Reconstructions

    Kopp, G; Lean, J; Wu, C J


    Reliable historical records of total solar irradiance (TSI) are needed for climate change attribution and research to assess the extent to which long-term variations in the Sun's radiant energy incident on the Earth may exacerbate (or mitigate) the more dominant warming in recent centuries due to increasing concentrations of greenhouse gases. We investigate potential impacts of the new Sunspot Index and Long-term Solar Observations (SILSO) sunspot-number time series on model reconstructions of TSI. In contemporary TSI records, variations on time scales longer than about a day are dominated by the opposing effects of sunspot darkening and facular brightening. These two surface magnetic features, retrieved either from direct observations or from solar activity proxies, are combined in TSI models to reproduce the current TSI observational record. Indices that manifest solar-surface magnetic activity, in particular the sunspot-number record, then enable the reconstruction of historical TSI. Revisions to the sunsp...

  1. Preliminary Analysis of Mobile Observation Records for the Aftershocks in the Yushu M7.1 Earthquake

    田秀丰; 张璇; 姚凯; 张晓芳; 蒲举


    The Yushu M7.1 earthquake occurred on April 14, 2010 in Yushu, the Tibetan Autonomous Prefecture of Qinghai Province (E 96.7°, N 33.1°). After the earthquake, we set up seven strong-motion mobile observation instruments around the epicenter. As of October 15, 2010, we had captured 71 seismic events and a total of 213 acceleration records, with a maximum magnitude of M4.6. Among these records, 10 had a peak ground acceleration greater than 30 gal, and the maximum peak ground acceleration was 122 gal. The waveforms of the records were clear and complete, which makes up for the lack of fixed stations and local records near the earthquake zone. They not only offered quantitative evidence for analysis of the earthquake damage, but also provided important data for the study of the relationship among peak ground motion, holding time, the spectrum, and macro-seismic intensity. In this paper, we collected and preliminarily processed these records and obtained the velocity time histories, Fourier spectra, and power spectra, then analyzed and discussed the recording features and related issues. The results showed that (1) the maximum peak ground acceleration in this mobile observation was 122 gal, and the holding time of vibration was about 5 seconds, with a frequency of 7.2 Hz. This record came from an M3.6 aftershock, indicating that a small earthquake may also produce a high peak acceleration. This would appear to be the opposite phenomenon of a high vibration peak with low seismic intensity. In fact, there were many major projects located in the area, which had a small seismicity background. With the increasing coverage of observation stations

  2. The reliability of manual reporting of clinical events in an anesthesia information management system (AIMS).

    Simpao, Allan F; Pruitt, Eric Y; Cook-Sather, Scott D; Gurnaney, Harshad G; Rehman, Mohamed A


    Manual incident reports significantly under-report adverse clinical events when compared with automated recordings of intraoperative data. Our goal was to determine the reliability of AIMS and CQI reports of adverse clinical events that had been witnessed and recorded by research assistants. The AIMS and CQI records of 995 patients aged 2-12 years were analyzed to determine if anesthesia providers had properly documented the emesis events that were observed and recorded by research assistants who were present in the operating room at the time of induction. Research assistants recorded eight cases of emesis during induction that were confirmed with the attending anesthesiologist at the time of induction. AIMS yielded a sensitivity of 38 % (95 % confidence interval [CI] 8.5-75.5 %), while the sensitivity of CQI reporting was 13 % (95 % CI 0.3-52.7 %). The low sensitivities of the AIMS and CQI reports suggest that user-reported AIMS and CQI data do not reliably include significant clinical events.
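
    The quoted sensitivities and exact confidence intervals follow from simple binomial arithmetic. The sketch below reproduces the reported percentages from the stated counts (3 of 8 for AIMS, 1 of 8 for CQI) using Clopper-Pearson intervals; it is an illustration, not the authors' analysis code.

```python
"""Sensitivity with exact (Clopper-Pearson) 95% confidence intervals."""

from scipy.stats import beta

def sensitivity_with_ci(detected, total, alpha=0.05):
    p = detected / total
    lo = beta.ppf(alpha / 2, detected, total - detected + 1) if detected > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, detected + 1, total - detected) if detected < total else 1.0
    return p, lo, hi

# Counts taken from the abstract: 8 witnessed emesis events, 3 captured by AIMS, 1 by CQI.
for label, k in (("AIMS", 3), ("CQI", 1)):
    p, lo, hi = sensitivity_with_ci(k, 8)
    print(f"{label}: sensitivity {p:.0%} (95% CI {lo:.1%}-{hi:.1%})")
```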

  3. Human Reliability Program Workshop

    Landers, John; Rogers, Erin; Gerke, Gretchen


    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  4. Accelerator reliability workshop

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D


    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  5. Reliability and construction control

    Sherif S. AbdelSalam


    The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  6. Improving Power Converter Reliability

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon


    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high and low-voltage side of a half-bridge IGBT separately in every fundamental ... is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation ...

  7. Power electronics reliability.

    Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley


    The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

  8. Record club

    Record club


    Hello everyone. Here are the 24 new DVDs for July, available for a few days now, not forgetting the 5 pop music CDs. Discover the saga of the terrorist Carlos, the life of Gainsbourg and the adventures of Lucky Luke; get scared with Paranormal Activity and escape to Pandora in the skin of Avatar. All the new titles can be discovered directly at the club. For the complete list, as well as the rest of the Record Club collection, we invite you to visit our website: all the latest additions are in the "Discs of the Month" section. Reminder: the club is open on Mondays, Wednesdays and Fridays from 12:30 to 13:00 in Restaurant No. 2, Building 504. See you soon, dear Record Clubbers.

  9. Record Club

    Record Club

    2011-01-01 November Selections Just in time for the holiday season, we have added a number of new CDs and DVDs into the Club. You will find the full lists at; select the "Discs of the Month" button on the left panel of the web page and then Nov 2011. New films include all 5 episodes of Fast and Furious, many of the most famous films starring Jean-Paul Belmondo and those of Louis de Funes, and some more recent films such as The Lincoln Lawyer and, according to some critics, Woody Allen’s best film for years – Midnight in Paris. For the younger generation there is Cars 2 and Kung Fu Panda 2. New CDs include the latest releases by Adele, Coldplay and the Red Hot Chili Peppers. We have also added the new Duets II CD featuring Tony Bennett singing with some of today’s pop stars including Lady Gaga, Amy Winehouse and Willie Nelson. The Club is now open every Monday, Wednesday and Friday ...

  10. ATLAS Recordings

    Jeremy Herr; Homer A. Neal; Mitch McLachlan

    The University of Michigan Web Archives for the 2006 ATLAS Week Plenary Sessions, as well as the first of 2007, are now online. In addition, there are a wide variety of Software and Physics Tutorial sessions, recorded over the past couple of years, to choose from. All ATLAS-specific archives are accessible here. Viewing requires a standard web browser with the RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Shaping Collaboration 2006: The Michigan group is happy to announce a complete set of recordings from the Shaping Collaboration conference held last December at the CICG in Geneva. The event hosted a mix of Collaborative Tool experts and LHC Users, and featured presentations by the CERN Deputy Director General, Prof. Jos Engelen, the President of Internet2, and chief developers from VRVS/EVO, WLAP, and other tools...

  11. Record Club

    Record Club

    2011-01-01 Summer 2011 new releases. The CD and DVD rental club has just added a large number of discs for summer 2011. Among them, Le Discours d’un Roi (The King's Speech), winner of the 2011 Oscar for best picture, and Harry Potter and the Deathly Hallows (Part 1). No fewer than 48 new DVDs and 10 new CDs are available for rental, covering every genre. So do not hesitate to visit our site (see Disc Catalogue, Discs of the Month) for the complete list. The club is open every Monday, Wednesday and Friday from 12:30 to 13:00 in the Restaurant No. 2 building (cf. URL: See you very soon.

  12. Record Club

    Record Club

    2011-01-01 June Selections We have put a significant number of new CDs and DVDs into the Club. You will find the full lists at; select the «Discs of the Month» button on the left panel of the web page and then June 2011. New films include the latest Action, Suspense and Science Fiction film hits, general drama movies including the Oscar-winning The King’s Speech, comedies including both chapters of Bridget Jones’s Diary, seven films for children and a musical. Other highlights include the latest Harry Potter release and some movies from the past you may have missed including the first in the Terminator series. New CDs include the latest releases by Michel Sardou, Mylene Farmer, Jennifer Lopez, Zucchero and Britney Spears. There is also a hits collection from NRJ. Don’t forget that the Club is now open every Monday, Wednesday and Friday lunchtimes from 12h30 to 13h00 in Restaurant 2, Building 504. (C...

  13. ATLAS reliability analysis

    Bartsch, R.R.


    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
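
    As an illustration of the kind of combination performed in such an analysis, the sketch below multiplies independent per-component survival probabilities into a per-shot system reliability and evaluates a Weibull-style shot-dependent failure probability. Component names and numbers are invented; they are not the ATLAS values.

```python
"""Illustrative combination of component failure probabilities (not the ATLAS analysis)."""

import math

def system_reliability(component_fail_probs):
    """Series system: the bank works only if every element works."""
    r = 1.0
    for p in component_fail_probs:
        r *= (1.0 - p)
    return r

def weibull_shot_failure(shots, beta, eta):
    """Cumulative probability of failure after a given number of shots."""
    return 1.0 - math.exp(-((shots / eta) ** beta))

components = {"railgap": 0.004, "capacitor": 0.002, "transmission line": 0.001}
print("per-shot system reliability:", system_reliability(components.values()))
print("P(fail by shot 200):", weibull_shot_failure(200, beta=2.0, eta=1500.0))
```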

  14. Electronic medical record: Time to migrate?

    Rustagi, Neeti; Singh, Ritesh


    Gone are the days when records of patients were kept in paper format. With the majority of things going digital, it is inevitable that hospitals will adopt electronic medical records in the near future. They are simple, reliable and cost-effective in the long term.

  15. Electronic medical record: Time to migrate?

    Neeti Rustagi


    Gone are the days when records of patients were kept in paper format. With the majority of things going digital, it is inevitable that hospitals will adopt electronic medical records in the near future. They are simple, reliable and cost-effective in the long term.

  16. Cine recording ophthalmoscope

    Fitzgerald, J. W.


    Camera system provides accurate photographic recording during acceleration of centrifuge and permits immediate observation of dynamic changes in retinal circulation by a closed-circuit television loop. System consists of main camera, remote control unit, and strobe power supply unit, and is used for fluorescein studies and dynamometry sequences.

  17. A Missing Link in the Evolution of the Cumulative Recorder

    Asano, Toshio; Lattal, Kennon A.


    A recently recovered cumulative recorder provides a missing link in the evolution of the cumulative recorder from a modified kymograph to a reliably operating, scientifically and commercially successful instrument. The recorder, the only physical evidence of such an early precommercial cumulative recorder yet found, was sent to Keio University in…


  18. Record Club

    Record Club


    DVD James Bond – Series Complete To all Record Club Members, to start the new year, we have taken advantage of a special offer to add copies of all the James Bond movies to date, from the very first - Dr. No - to the latest - Quantum of Solace. No matter which of the successive 007s you prefer (Sean Connery, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan or Daniel Craig), they are all there. Or perhaps you have a favourite Bond Girl, or even perhaps a favourite villain. Take your pick. You can find the full selection listed on the club web site; use the panel on the left of the page “Discs of the Month” and select Jan 2010. We remind you that we are open on Mondays, Wednesdays and Fridays from 12:30 to 13:00 in Restaurant 2 (Bldg 504).

  19. Record breakers

    Antonella Del Rosso


    In the sixties, CERN’s Fellows were but a handful of about 50 young experimentalists present on site to complete their training. Today, their number has increased to a record-breaking 500. They come from many different fields and are spread across CERN’s different activity areas.   “Diversifying the Fellowship programme has been the key theme in recent years,” comments James Purvis, Head of the Recruitment, Programmes and Monitoring group in the HR Department. “In particular, the 2005 five-yearly review introduced the notion of ‘senior’ and ‘junior’ Fellowships, broadening the target audience to include those with Bachelor-level qualifications.” Diversification made CERN’s Fellowship programme attractive to a wider audience but the number of Fellows on site could not have increased so much without the support of EU-funded projects, which were instrumental in the growth of the programme. ...

  20. Continuous Reliability Enhancement for Wind (CREW) database :

    Hines, Valerie Ann-Peters; Ogilvie, Alistair B.; Bond, Cody R.


    To benchmark the current U.S. wind turbine fleet reliability performance and identify the major contributors to component-level failures and other downtime events, the Department of Energy funded the development of the Continuous Reliability Enhancement for Wind (CREW) database by Sandia National Laboratories. This report is the third annual Wind Plant Reliability Benchmark, to publicly report on CREW findings for the wind industry. The CREW database uses both high-resolution Supervisory Control and Data Acquisition (SCADA) data from operating plants and Strategic Power Systems ORAPWind (Operational Reliability Analysis Program for Wind) data, which consist of downtime and reserve event records and daily summaries of various time categories for each turbine. Together, these data are used as inputs into CREW's reliability modeling. The results presented here include: the primary CREW Benchmark statistics (operational availability, utilization, capacity factor, mean time between events, and mean downtime); time accounting from an availability perspective; time accounting in terms of the combination of wind speed and generation levels; power curve analysis; and top system and component contributors to unavailability.
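
    The benchmark statistics listed above are straightforward to compute from downtime-event records. The sketch below shows one hypothetical way to do so; the field layout, turbine counts and values are invented and do not reflect the CREW schema or data.

```python
"""Hypothetical benchmark-statistics arithmetic from simple downtime records."""

events = [  # (turbine, downtime_hours) over a reporting period
    ("T01", 12.0), ("T01", 3.5), ("T02", 26.0), ("T03", 1.0), ("T03", 8.0),
]
period_hours = 24 * 365                 # calendar hours per turbine
turbines = {t for t, _ in events}
total_hours = period_hours * len(turbines)
downtime = sum(h for _, h in events)
energy_mwh, rated_mw = 10500.0, 1.5     # illustrative production and rating

operational_availability = 1.0 - downtime / total_hours
mean_time_between_events = (total_hours - downtime) / len(events)
mean_downtime = downtime / len(events)
capacity_factor = energy_mwh / (rated_mw * total_hours)

print(f"availability {operational_availability:.3f}, "
      f"MTBE {mean_time_between_events:.0f} h, "
      f"mean downtime {mean_downtime:.1f} h, "
      f"capacity factor {capacity_factor:.2f}")
```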

  1. On the reliability of frequency components in systolic arterial pressure in patients with atrial fibrillation.

    Corino, Valentina D A; Lombardi, Federico; Mainardi, Luca T


    Atrial fibrillation (AF) is characterized by desynchronization of atrial electrical activity causing a consequent irregular ventricular response. In AF, the beat-to-beat variation of blood pressure is increased because of variations in filling time and contractility. However, only a few studies have analyzed short-term blood pressure variations in AF, and we have recently observed a harmonic low-frequency (LF) component in systolic arterial pressure (SAP) during AF. The aim of the present study is to propose a method to verify the reliability of the spectral component found in SAP series, based on the position of the poles of the autoregressive spectral decomposition in the z-plane. In particular, 1,000 random permutations of the series allowed the definition of an area in the z-plane where poles from a random process are likely to occur. Poles lying outside this area are considered reliable oscillations. We tested the method on 53 recordings obtained at rest from patients with persistent AF. An LF component was found in 51 and 43 recordings in the SAP and RR series, respectively. A high-frequency (HF) component was found in all the recordings for both SAP and RR series. Using the proposed test, the percentage of reliable components in the LF and HF bands was 80 and 38 in the SAP series, and 20 and 18 in the RR series. We concluded that, at variance with the RR ones, SAP LF components are likely to represent true physiological oscillations.
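
    The pole-based reliability test described above can be sketched as follows: fit an autoregressive model, locate its poles in the z-plane, and compare them against poles obtained from randomly permuted (structure-destroyed) copies of the series. The sketch simplifies the acceptance region to a pole-radius threshold and runs on synthetic data; it is not the authors' implementation.

```python
"""Simplified sketch of an AR-pole permutation test on a synthetic SAP-like series."""

import numpy as np

def ar_poles(x, order=8):
    """Fit an AR(order) model by Yule-Walker and return the poles in the z-plane."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)    # autocovariance
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                        # AR coefficients
    return np.roots(np.concatenate(([1.0], -a)))

def pole_radius_threshold(x, order=8, n_perm=1000, q=0.95, seed=0):
    """Shuffle the series to destroy temporal structure and record how close to the
    unit circle purely random poles get; poles beyond this radius are 'reliable'."""
    rng = np.random.default_rng(seed)
    radii = [np.abs(ar_poles(rng.permutation(x), order)).max() for _ in range(n_perm)]
    return np.quantile(radii, q)

# Toy series: a 0.1 Hz oscillation (LF band) sampled beat-to-beat plus noise.
t = np.arange(300)
sap = 120 + 2.0 * np.sin(2 * np.pi * 0.1 * t) + np.random.default_rng(1).normal(0, 1, t.size)
poles = ar_poles(sap)
thr = pole_radius_threshold(sap, n_perm=200)
print("reliable LF pole present:", bool((np.abs(poles) > thr).any()))
```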

  2. Observer Use of Standardized Observation Protocols in Consequential Observation Systems

    Bell, Courtney A.; Yi, Qi; Jones, Nathan D.; Lewis, Jennifer M.; McLeod, Monica; Liu, Shuangshuang


    Evidence from a handful of large-scale studies suggests that although observers can be trained to score reliably using observation protocols, there are concerns related to initial training and calibration activities designed to keep observers scoring accurately over time (e.g., Bell, et al, 2012; BMGF, 2012). Studies offer little insight into how…

  3. CR reliability testing

    Honeyman-Buck, Janice C.; Rill, Lynn; Frost, Meryll M.; Staab, Edward V.


    The purpose of this work was to develop a method for systematically testing the reliability of a CR system under realistic daily loads in a non-clinical environment prior to its clinical adoption. Once digital imaging replaces film, it will be very difficult to revert back should the digital system become unreliable. Prior to the beginning of the test, a formal evaluation was performed to set the benchmarks for performance and functionality. A formal protocol was established that included all the 62 imaging plates in the inventory for each 24-hour period in the study. Imaging plates were exposed using different combinations of collimation, orientation, and SID. Anthropomorphic phantoms were used to acquire images of different sizes. Each combination was chosen randomly to simulate the differences that could occur in clinical practice. The tests were performed over a wide range of times with batches of plates processed to simulate the temporal constraints required by the nature of portable radiographs taken in the Intensive Care Unit (ICU). Current patient demographics were used for the test studies so automatic routing algorithms could be tested. During the test, only three minor reliability problems occurred, two of which were not directly related to the CR unit. One plate was discovered to cause a segmentation error that essentially reduced the image to only black and white with no gray levels. This plate was removed from the inventory to be replaced. Another problem was a PACS routing problem that occurred when the DICOM server with which the CR was communicating had a problem with disk space. The final problem was a network printing failure to the laser cameras. Although the units passed the reliability test, problems with interfacing to workstations were discovered. The two issues that were identified were the interpretation of what constitutes a study for CR and the construction of the look-up table for a proper gray scale display.

  4. Ultimately Reliable Pyrotechnic Systems

    Scott, John H.; Hinkel, Todd


    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Opportunity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing

  5. Ferrite logic reliability study

    Baer, J. A.; Clark, C. B.


    Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

  6. Blade reliability collaborative :

    Ashwill, Thomas D.; Ogilvie, Alistair B.; Paquette, Joshua A.


    The Blade Reliability Collaborative (BRC) was started by the Wind Energy Technologies Department of Sandia National Laboratories and DOE in 2010 with the goal of gaining insight into planned and unplanned O&M issues associated with wind turbine blades. A significant part of BRC is the Blade Defect, Damage and Repair Survey task, which will gather data from blade manufacturers, service companies, operators and prior studies to determine details about the largest sources of blade unreliability. This report summarizes the initial findings from this work.

  7. Record dynamics

    Robe, Dominic M.; Boettcher, Stefan; Sibani, Paolo


    When quenched rapidly beyond their glass transition, colloidal suspensions fall out of equilibrium. The pace of their dynamics then slows down with the system age, i.e., with the time elapsed after the quench. This breaking of time translational invariance is associated with dynamical observables...

  8. The 2014 high record of Antarctic sea ice extent

    Massonnet, Francois; Guemas, Virginie; Fuckar, Neven; Doblas-Reyes, Francisco


    In September 2014, Antarctic sea ice extent exceeded the symbolic level of 20 million km² for the first time since 1978, when reliable satellite measurements became available. After the successive records of 2012 and 2013, sea ice extent in 2014 once again reinforced the positive trend observed since the late 1970s. We conduct here a dedicated study to elucidate the origins of a major, and perhaps the most intriguing, event that happened at our Poles recently. Observations, reanalyses and model results all point towards the important role of winds in modifying near-surface heat advection patterns around Antarctica. The role of pre-conditioning (summer conditions) is found to be of lesser importance. Finally, we find no evidence that anomalous freshwater forcing (from atmospheric or continental origin) could have explained the record extent of 2014.

  9. Record Club

    Record Club


      March  Selections By the time this appears, we will have added a number of new CDs and DVDs into the Club. You will find the full lists at; select the "Discs of the Month" button on the left panel of the web page and then Mar 2012. New films include recent releases such as Johnny English 2, Bad Teacher, Cowboys vs Aliens, and Super 8. We are also starting to acquire some of the classic films we missed when we initiated the DVD section of the club, such as appeared in a recent Best 100 Films published by a leading UK magazine; this month we have added Spielberg’s Jaws and Scorsese’s Goodfellas. If you have your own ideas on what we are missing, let us know. For children we have no less than 8 Tin-Tin DVDs. And if you like fast moving pop music, try the Beyonce concert DVD. New CDs include the latest releases from Paul McCartney, Rihanna and Amy Winehouse. There is a best of Mylene Farmer, a compilation from the NRJ 201...

  10. Measuring Fidelity and Adaptation: Reliability of an Instrument for School-Based Prevention Programs.

    Bishop, Dana C; Pankratz, Melinda M; Hansen, William B; Albritton, Jordan; Albritton, Lauren; Strack, Joann


    There is a need to standardize methods for assessing fidelity and adaptation. Such standardization would allow program implementation to be examined in a manner that will be useful for understanding the moderating role of fidelity in dissemination research. This article describes a method for collecting data about fidelity of implementation for school-based prevention programs, including measures of adherence, quality of delivery, dosage, participant engagement, and adaptation. We report about the reliability of these methods when applied by four observers who coded video recordings of teachers delivering All Stars, a middle school drug prevention program. Interrater agreement for scaled items was assessed for an instrument designed to evaluate program fidelity. Results indicated sound interrater reliability for items assessing adherence, dosage, quality of teaching, teacher understanding of concepts, and program adaptations. The interrater reliability for items assessing potential program effectiveness, classroom management, achievement of activity objectives, and adaptation valences was improved by dichotomizing the response options for these items. The item that assessed student engagement demonstrated only modest interrater reliability and was not improved through dichotomization. Several coder pairs were discordant on items that overall demonstrated good interrater reliability. Proposed modifications to the coding manual and protocol are discussed.

  11. Load Control System Reliability

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)


    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies; and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  12. Supply chain reliability modelling

    Eugen Zaitsev


    Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum plan of supplies by using an economic criterion and a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms to find the optimum plan of supplies were developed and formulated using an economic criterion and a model for evaluating the probability of failure-free operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of the goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations which should be taken into account during the supply planning process, together with the supplier's functional reliability, was presented.
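
    The reduction to a linear program mentioned above can be illustrated with a toy model: choose supply volumes across channels to meet demand at minimum cost while weighting each channel by its probability of failure-free delivery. Costs, capacities, reliabilities and the demand figure are invented for the example.

```python
"""Toy supply-planning linear program (illustrative, not the paper's model)."""

from scipy.optimize import linprog

cost        = [4.0, 5.5, 6.0]      # unit cost per channel
capacity    = [60.0, 80.0, 100.0]  # upper bound on volume per channel
reliability = [0.90, 0.97, 0.99]   # probability of failure-free delivery
demand      = 120.0

# minimize cost . x  s.t.  sum(reliability_i * x_i) >= demand, 0 <= x_i <= capacity_i
res = linprog(
    c=cost,
    A_ub=[[-r for r in reliability]],   # -(r . x) <= -demand
    b_ub=[-demand],
    bounds=list(zip([0.0] * 3, capacity)),
    method="highs",
)
print("volumes:", res.x, "total cost:", res.fun)
```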

  13. Tele-dietetics with food images as dietary intake record in nutrition assessment.

    Chung, Lousia Ming Yan; Chung, Joanne Wai Yee


    Tele-dietetics is not common in current practice. To promote it, food images as dietary intake records should be validated before application. Two-dimensional (2D) digital images are reliable and have been validated in previous studies. However, the depth is virtual with a 2D image, and ingredient types, sauce types, cooking methods, and the amount of oil added have not been researched for accurate food analysis. This study was designed to compare the reliability and accuracy of these parameters estimated using 2D and three-dimensional (3D) food images. Ten nutritionists evaluated 10 selected food items between January 2008 and June 2008. Ten 2D food images and ten 3D images of the same food items were captured for the observers' evaluation. The actual weights or volumes of the food items were measured as the gold standard for comparisons. Intraclass correlations (ICCs), percentage agreement, and one-sample t-tests were analyzed to compare the reliability and accuracy of each type of image. Both images showed high reliability among observers, with 3D images giving less variance (2D: ICC=0.916, F=17.001, p ...) ... type. 2D images provided better volume and oil estimation when compared with 3D images. The research findings confirmed the application of 2D and 3D food images as reliable and accurate dietary records for nutritionists to evaluate clients' dietary habits. This implied the feasibility of tele-dietetics in that one's nutrition status could be assessed over the Internet.

  14. OSS reliability measurement and assessment

    Yamada, Shigeru


    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.

  15. Reliability and validity in research.

    Roberts, Paula; Priest, Helena

    This article examines reliability and validity as ways to demonstrate the rigour and trustworthiness of quantitative and qualitative research. The authors discuss the basic principles of reliability and validity for readers who are new to research.

  16. Optimization by record dynamics

    Barettin, Daniele; Sibani, Paolo


    Large dynamical changes in thermalizing glassy systems are triggered by trajectories crossing record-sized barriers, a behavior revealing the presence of a hierarchical structure in configuration space. The observation is here turned into a novel local search optimization algorithm dubbed record dynamics optimization, or RDO. RDO uses the Metropolis rule to accept or reject candidate solutions depending on the value of a parameter akin to the temperature and minimizes the cost function of the problem at hand through cycles where its 'temperature' is raised and subsequently decreased in order to expediently generate record high (and low) values of the cost function. Below, RDO is introduced and then tested by searching for the ground state of the Edwards-Anderson spin-glass model, in two and three spatial dimensions. A popular and highly efficient optimization algorithm, parallel tempering (PT...
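
    A toy rendering of the idea, not the authors' RDO code: Metropolis acceptance with a 'temperature' that is cyclically raised and lowered so that the search repeatedly generates record barrier crossings and record-low costs. The cost function here is a small random-coupling spin ring rather than the Edwards-Anderson model.

```python
"""Toy Metropolis search with cyclic temperature, loosely inspired by the RDO idea."""

import math, random

random.seed(0)
N = 40
J = {(i, (i + 1) % N): random.choice((-1.0, 1.0)) for i in range(N)}  # toy couplings

def cost(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def rdo_like_search(cycles=30, steps_per_leg=500, t_low=0.05, t_high=2.0):
    s = [random.choice((-1, 1)) for _ in range(N)]
    e = cost(s)
    best, best_s = e, s[:]
    for _ in range(cycles):
        for temp in (t_high, t_low):              # raise, then lower, the 'temperature'
            for _ in range(steps_per_leg):
                i = random.randrange(N)
                s[i] = -s[i]                      # propose a single spin flip
                e_new = cost(s)
                if e_new <= e or random.random() < math.exp(-(e_new - e) / temp):
                    e = e_new                     # accept (Metropolis rule)
                else:
                    s[i] = -s[i]                  # reject: undo the flip
                if e < best:                      # a new record-low cost
                    best, best_s = e, s[:]
    return best, best_s

print("record-low cost found:", rdo_like_search()[0])
```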

  17. Reliability and Its Quantitative Measures

    Alexandru ISAIC-MANIU


    This article provides an introduction to software reliability issues through wide-ranging statistical indicators, which are designed on the basis of information collected from operation or testing (samples). The reliability issues are also developed for the case of the main reliability laws (exponential, normal, Weibull), which, once validated for a particular system, allow the calculation of some reliability indicators with a higher degree of accuracy and trustworthiness.
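
    As a concrete illustration of indicators designed from test or operating data, the sketch below gives point estimates for the exponential law and the standard chi-square lower confidence bound on MTBF for a time-terminated test. The failure count and accumulated test time are invented.

```python
"""Textbook reliability indicators from test data (illustrative values)."""

import math
from scipy.stats import chi2

failures, total_test_hours = 7, 52_000.0          # r failures over accumulated time T

lam_hat = failures / total_test_hours             # estimated failure rate (exponential law)
mtbf_hat = 1.0 / lam_hat
mtbf_lower_90 = 2 * total_test_hours / chi2.ppf(0.90, 2 * failures + 2)

print(f"lambda ~ {lam_hat:.2e} /h, MTBF ~ {mtbf_hat:.0f} h, "
      f"90% lower bound ~ {mtbf_lower_90:.0f} h")
print("R(1000 h) ~", math.exp(-lam_hat * 1000.0))
```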

  18. 2017 NREL Photovoltaic Reliability Workshop

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)


    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  19. Testing for PV Reliability (Presentation)

    Kurtz, S.; Bansal, S.


    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  20. Reliable Quantum Computers

    Preskill, J


    The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^{-6}, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)

  1. Effects of image enhancement on reliability of landmark identification in digital cephalometry

    M Oshagh


    Introduction: Although digital cephalometric radiography is gaining popularity in orthodontic practice, the most important source of error in its tracing is uncertainty in landmark identification. Therefore, efforts to improve accuracy in landmark identification were directed primarily toward improvement in image quality. One of the more useful techniques in this process involves digital image enhancement, which can increase the overall visual quality of the image, but this does not necessarily mean better identification of landmarks. The purpose of this study was to evaluate the effectiveness of digital image enhancement on the reliability of landmark identification. Materials and Methods: Fifteen common landmarks, including 10 skeletal and 5 soft-tissue landmarks, were selected on the cephalograms of 20 randomly selected patients, prepared in Natural Head Position (NHP). Two observers (orthodontists) identified landmarks on the 20 original photostimulable phosphor (PSP) digital cephalogram images and 20 enhanced digital images twice, with an intervening time interval of at least 4 weeks. The x and y coordinates were further analyzed to evaluate the pattern of recording differences in the horizontal and vertical directions. Reliability of landmark identification was analyzed by paired t test. Results: There was a significant difference between original and enhanced digital images in terms of the reliability of points Ar and N in the vertical and horizontal dimensions, and enhanced images were significantly more reliable than original images. Identification of A point, Pogonion and Pronasal points in the vertical dimension of enhanced images was significantly more reliable than in original ones. Reliability of Menton point identification in the horizontal dimension was significantly higher in enhanced images than in original ones. Conclusion: Direct digital image enhancement by altering brightness and contrast can increase the reliability of some landmark identification and this may lead to more

  2. Reliability of heart period and systolic arterial pressure variabilities in women with fibromyalgia syndrome.

    Andrade, Carolina Pieroni; Zamunér, Antonio Roberto; Forti, Meire; de França, Thalita Fonseca; da Silva, Ester


    The aim of this study is to define the absolute and relative reliability of spectral indices of cardiovascular autonomic control in the supine position in women with fibromyalgia syndrome (FMS). Twenty-three women with FMS (age 48 ± 7 years) took part in the study. ECG, finger blood pressure, and respiration were continuously recorded in all participants at rest at baseline 1 (BL1) and 15 days after BL1 (BL2). The power spectrum analysis provided two oscillatory components, low frequency (LF, 0.04-0.15 Hz) and high frequency (HF, 0.15-0.4 Hz), from the heart period (HP) variability, and the LF oscillatory component from SAP variability (LFSAP). Absolute and relative reliability were rated by the 95% limit of random variation and the intraclass correlation coefficient (ICC), respectively. No significant differences were observed between BL1 and BL2 for the spectral indices of HP and SAP variabilities. The 95% limit of random variation of these indices indicated that the values of repeated measurements were between 22% higher and 0.2% lower (more reliable parameter; average of HP variability) and 912.9% higher and 0.2% lower (less reliable parameter; LFSAP) than BL1. Conversely, the index of relative reliability (ICC) ranged from 0.23 to 0.70, indicating good reliability. The spectral indices of cardiovascular autonomic control in women with FMS seem to present good relative reliability. Therefore, these indices can be useful as parameters to quantify whether a variation was consistent and accurate in the retest, besides adding crucial information for clinical research and clinical evaluation of FMS patients.
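
    The two reliability measures used above can be sketched numerically: ICC(2,1) for relative reliability, and Bland-Altman-style limits on the log scale as one common way of expressing 95% limits of random variation for ratio-scaled spectral powers (the study's exact computation may differ). The data below are simulated, not the patients' recordings.

```python
"""Sketch of ICC(2,1) and log-scale limits of random variation on simulated data."""

import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `data` is an (n_subjects x k_sessions) array of repeated measurements."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)          # subjects
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)          # sessions
    sse = ((data - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                                       # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def limits_of_random_variation(session1, session2):
    """Log-scale 95% limits, reported as percentage deviations of the retest
    from the first measurement (sketch only)."""
    d = np.log(np.asarray(session2, float)) - np.log(np.asarray(session1, float))
    lo = np.exp(d.mean() - 1.96 * d.std(ddof=1))
    hi = np.exp(d.mean() + 1.96 * d.std(ddof=1))
    return (lo - 1.0) * 100.0, (hi - 1.0) * 100.0

rng = np.random.default_rng(3)
bl1 = rng.lognormal(mean=5.0, sigma=0.4, size=23)          # e.g. LF power at baseline 1
bl2 = bl1 * rng.lognormal(mean=0.0, sigma=0.1, size=23)    # retest 15 days later
print("ICC(2,1):", round(icc_2_1(np.column_stack([bl1, bl2])), 2))
print("95% limits of random variation (%):", limits_of_random_variation(bl1, bl2))
```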

  3. Accuracy of the electronic patient record in a first opinion veterinary practice.

    Jones-Diette, Julie; Robinson, Natalie J; Cobb, Malcolm; Brennan, Marnie L; Dean, Rachel S


    The use of electronic patient records (EPRs) in veterinary research is becoming more common place. To date no-one has investigated how accurately and completely they represent the clinical interactions that happen between veterinary professionals, and their clients and patients. The aim of this study was to compare data extracted from consultations within EPRs with data gathered by direct observation of the same consultation. A secondary aim was to establish the inter-rater reliability of two researchers who examined the data extracted from the EPRs. A convenience sample of 36 small animal consultations undertaken by 2 veterinary surgeons (83% by one veterinary surgeon) at a mixed veterinary practice in the United Kingdom was studied. All 36 consultations were observed by a single researcher using a standardised data collection tool. The information recorded in the EPRs was extracted from the Practice Management Software (PMS) systems using a validated XML schema. The XML extracted data was then converted into the same format as the observed data by two independent researchers who examined the extracted information and recorded their findings using the same tool as for the observation. The issues discussed and any action taken relating to those problems recorded in the observed and extracted datasets were then compared. In addition the inter-rater reliability of the two researchers who examined the extracted data was assessed. Only 64.4% of the observed problems discussed during the consultations were recorded in the EPR. The type of problem, who raised the problem and at what point in the consultation the problem was raised significantly affected whether the problem was recorded or not in the EPR. Only 58.3% of observed actions taken during the consultations were recorded in the EPR and the type of action significantly affected whether it would be recorded or not. There was moderate agreement between the two researchers who examined the extracted data. This is the

  4. Electronics reliability calculation and design

    Dummer, Geoffrey W A; Hiller, N


    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  5. Hybrid reliability model for fatigue reliability analysis of steel bridges

    曹珊珊; 雷俊卿


    A kind of hybrid reliability model is presented to solve the fatigue reliability problems of steel bridges. The cumulative damage model is one kind of the models used in fatigue reliability analysis. The parameter characteristics of the model can be described as probabilistic and interval. The two-stage hybrid reliability model is given with a theoretical foundation and a solving algorithm to solve the hybrid reliability problems. The theoretical foundation is established by the consistency relationships of interval reliability model and probability reliability model with normally distributed variables in theory. The solving process is combined with the definition of interval reliability index and the probabilistic algorithm. With the consideration of the parameter characteristics of the S-N curve, the cumulative damage model with hybrid variables is given based on the standards from different countries. Lastly, a case of steel structure in the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO.
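
    A generic form of the cumulative-damage limit state and of the two reliability indices being combined is sketched below for orientation; the constants and the interval-index definition are standard textbook forms, not values or formulas taken from the paper.

```latex
% Illustrative cumulative-damage formulation (constants m, A and the damage
% limit \Delta are standard-specific; the interval index is a common textbook form).
\[
  N\,S^{m} = A, \qquad
  D = \sum_{i} \frac{n_i}{N_i} = \frac{1}{A}\sum_{i} n_i S_i^{m}, \qquad
  g = \Delta - D \quad (\text{failure if } g < 0),
\]
\[
  \beta = \frac{\mu_g}{\sigma_g} \quad (\text{probabilistic index}), \qquad
  \eta = \frac{(g^{U}+g^{L})/2}{(g^{U}-g^{L})/2} \quad (\text{interval index}).
\]
```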

  6. Synthesis of Reliable Telecommunication Networks

    Dusan Trstensky


    In many applications, the network designer may need to know how to synthesize a reliable telecommunication network. Assume that a network, denoted Gn,e, has n nodes and e edges, and that the operational probability of each edge is known. The system reliability of the network is defined to be the probability that every pair of nodes can communicate with each other. The network synthesis problem considered in this paper is to find a network G*n,e that maximises system reliability over the class of all networks, for the classes of networks Gn,n-1, Gn,n and Gn,n+1 respectively. In addition, an upper bound of the maximum reliability for networks with n nodes and e edges (e > n+2) is derived in terms of node. Computational experiments for the reliability upper bound are also presented. The results show that the proposed reliability upper bound is effective.
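
    The system-reliability definition used above (the probability that every pair of nodes can communicate) can be evaluated exactly for very small networks by enumerating edge states, as in the following sketch; the edge probabilities and the example ring are illustrative.

```python
"""Brute-force all-terminal reliability for tiny networks (illustrative only)."""

from itertools import product

def connected(n, up_edges):
    """Union-find connectivity check over nodes 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

def all_terminal_reliability(n, edges):
    """`edges` maps (u, v) -> operational probability of that edge."""
    edge_list = list(edges.items())
    total = 0.0
    for states in product([True, False], repeat=len(edge_list)):
        prob, up = 1.0, []
        for ((u, v), p), ok in zip(edge_list, states):
            prob *= p if ok else (1.0 - p)
            if ok:
                up.append((u, v))
        if connected(n, up):
            total += prob
    return total

# A 4-node ring (the n-edge class G_{n,n}) with edge reliability 0.9 each.
ring = {(0, 1): 0.9, (1, 2): 0.9, (2, 3): 0.9, (3, 0): 0.9}
print(all_terminal_reliability(4, ring))
```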

  7. Mathematical reliability an expository perspective

    Mazzuchi, Thomas; Singpurwalla, Nozer


    In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...

  8. Reliability databases: State-of-the-art and perspectives

    Akhmedjanov, Farit


    The report gives a history of development and an overview of the existing reliability databases. This overview also describes some sources of reliability and failure information other than computer databases, e.g. reliability handbooks, but the main attention is paid to standard models and software packages containing the data mentioned. The standards corresponding to collection and exchange of reliability data are observed too. Finally, perspective directions in the development of such data sources are shown.

  9. Reliability Degradation Due to Stockpile Aging

    Robinson, David G.


    The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time-dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses, both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data have been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. it assumes the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism, since the relationship between the predicted life and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time-dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability, a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess…
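    The constant-failure-rate assumption criticised in this record corresponds to the exponential model R(t) = exp(−λt), whereas an aging-sensitive alternative such as a Weibull model with shape parameter greater than one lets reliability fall increasingly fast with storage time. A small, purely illustrative comparison (all parameter values are assumptions, not stockpile data):

```python
import math

def r_exponential(t, lam):
    """Constant failure rate: R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

def r_weibull(t, scale, shape):
    """Weibull reliability: R(t) = exp(-(t/scale)**shape); shape > 1 models aging."""
    return math.exp(-((t / scale) ** shape))

# Both models matched to the same reliability at 10 years, then extrapolated.
lam = -math.log(0.99) / 10.0                       # exponential rate giving R(10) = 0.99
scale = 10.0 / (-math.log(0.99)) ** (1 / 2.5)      # Weibull scale giving R(10) = 0.99 with shape 2.5

for t in (10, 20, 30):
    print(f"t = {t:2d} y   exponential R = {r_exponential(t, lam):.4f}   "
          f"Weibull (shape 2.5) R = {r_weibull(t, scale, 2.5):.4f}")
```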

  10. Validade e confiabilidade intra e interexaminadores da Escala Observacional de Marcha para crianças com paralisia cerebral espástica Validity and intra- and inter-rater reliability of the Observational Gait Scale for children with spastic cerebral palsy

    PA Araújo


    Full Text Available BACKGROUND: Observational gait assessment is an important clinical approach to the evaluation of gait disorders. Quantitative gait analysis systems provide accurate information, but the high cost of these instruments makes observational analysis more affordable for clinical practice. OBJECTIVES: To develop an observational gait scale (OGS) for characterizing the gait of children with spastic cerebral palsy (SCP) and to test its criterion validity and intra- and inter-rater reliability in comparison with a computerized motion analysis system, the gold standard for kinematic gait assessment. METHODS: Twenty-three videos of children with SCP (9.54±2.22 years) were assessed with the OGS by four physical therapists in two sessions. Kinematic data for the ankle/foot complex, knee, hip and pelvis were obtained using the Qualisys Pro-reflex motion analysis system. To establish criterion validity and intra- and inter-rater reliability, the OGS results were compared with the motion analysis data, between the two sessions and between examiners. The weighted kappa test was applied to analyse the agreement between assessments. RESULTS: The OGS showed very good validity for the knee (r=0.64, p…
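    Agreement in this record was analysed with the weighted kappa, which penalises disagreements between ordinal ratings in proportion to their distance. A minimal sketch with linear weights on hypothetical rater data (not the authors' scores):

```python
def weighted_kappa(r1, r2, categories, weight="linear"):
    """Weighted Cohen's kappa for two raters using ordinal categories."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed proportions for every category pair, plus the two marginals.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[index[a]][index[b]] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        d = abs(i - j)
        return d / (k - 1) if weight == "linear" else (d / (k - 1)) ** 2

    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Hypothetical ordinal gait-item scores (0-3) given by two raters to 12 videos.
rater_1 = [0, 1, 2, 3, 2, 1, 0, 2, 3, 1, 2, 3]
rater_2 = [0, 1, 2, 2, 2, 0, 0, 2, 3, 1, 3, 3]
print(f"linear weighted kappa = {weighted_kappa(rater_1, rater_2, [0, 1, 2, 3]):.2f}")
```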

  11. Observational $\Delta\nu$-$\bar{\rho}$ relation for $\delta$ Sct stars using eclipsing binaries and space photometry

    Hernández, Antonio García; Monteiro, Mário J P F G; Suárez, Juan Carlos; Reese, Daniel R; Pascual-Granado, Javier; Garrido, Rafael


    Delta Scuti ($\\delta$ Sct) stars are intermediate-mass pulsators, whose intrinsic oscillations have been studied for decades. However, modelling their pulsations remains a real theoretical challenge, thereby even hampering the precise determination of global stellar parameters. In this work, we used space photometry observations of eclipsing binaries with a $\\delta$ Sct component to obtain reliable physical parameters and oscillation frequencies. Using that information, we derived an observational scaling relation between the stellar mean density and a frequency pattern in the oscillation spectrum. This pattern is analogous to the solar-like large separation but in the low order regime. We also show that this relation is independent of the rotation rate. These findings open the possibility of accurately characterizing this type of pulsator and validate the frequency pattern as a new observable for $\\delta$ Sct stars.
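    For solar-like oscillators the analogous large separation obeys roughly ρ̄/ρ̄_⊙ ≈ (Δν/Δν_⊙)², and the relation derived in this record plays the same role for the low-order pattern in δ Sct stars. The sketch below applies only the generic textbook scaling (not the paper's own calibration) to an assumed Δν value, to show how a measured pattern translates into a mean-density estimate.

```python
# Generic large-separation scaling: rho_mean / rho_sun ~ (delta_nu / delta_nu_sun)**2.
# This is the standard solar-like relation, used purely as an illustration; the paper
# derives its own low-order relation for delta Sct stars.

DELTA_NU_SUN_UHZ = 135.1      # solar large separation in microhertz
RHO_SUN_CGS = 1.408           # solar mean density in g/cm^3

def mean_density_from_delta_nu(delta_nu_uhz):
    """Mean stellar density implied by the (delta_nu)^2 scaling, in g/cm^3."""
    return RHO_SUN_CGS * (delta_nu_uhz / DELTA_NU_SUN_UHZ) ** 2

# Hypothetical low-order separation of ~55 microhertz for a delta Sct pulsator.
print(f"rho_mean ≈ {mean_density_from_delta_nu(55.0):.3f} g/cm^3")
```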

  12. Assessment of pruritus intensity: prospective study on validity and reliability of the visual analogue scale, numerical rating scale and verbal rating scale in 471 patients with chronic pruritus.

    Phan, Ngoc Quan; Blome, Christine; Fritz, Fleur; Gerss, Joachim; Reich, Adam; Ebata, Toshiya; Augustin, Matthias; Szepietowski, Jacek C; Ständer, Sonja


    The most commonly used tool for self-report of pruritus intensity is the visual analogue scale (VAS). Similar tools are the numerical rating scale (NRS) and verbal rating scale (VRS). In the present study, initiated by the International Forum for the Study of Itch to assess the reliability of these tools, 471 randomly selected patients with chronic itch (200 males, 271 females, mean age 58.44 years) recorded their pruritus intensity on VAS (100-mm line), NRS (0-10) and VRS (four-point) scales. Re-test reliability was analysed in a subgroup of 250 patients after one hour. Statistical analysis showed high reliability and concurrent validity (r>0.8; p…); the scales showed a high correlation with each other. In conclusion, high reliability and concurrent validity were found for the VAS, NRS and VRS. On re-test, higher correlation and fewer missing values were observed. A training session before starting a clinical trial is recommended.

  13. Process control using reliability based control charts

    J.K. Jacob


    Full Text Available Purpose: The paper presents a method to monitor the mean time between failures (MTBF) and detect any change in the intensity parameter. Here, a control chart procedure is presented for process reliability monitoring. Control charts based on different distributions are also considered and used in decision making. Results and discussions are presented based on case studies at different industries. Design/methodology/approach: The failure occurrence process can be modeled by different distributions, such as the homogeneous Poisson process or the Weibull model. In each case the aim is to monitor the mean time between failures (MTBF) and detect any change in the intensity parameter. When the process can be described by a Poisson process, the times between failures are exponentially distributed and can be used for reliability monitoring. Findings: In this paper, a new procedure based on monitoring the time to observe r failures is also proposed, and it can be more appropriate for reliability monitoring. Practical implications: This procedure is useful and more sensitive when compared with the λ-chart, although it must wait until r failures for a decision. These charts can be regarded as powerful tools for reliability monitoring. The λr chart gives more accurate results than the λ-chart. Originality/value: Adopting these measures for a system of equipment can increase the reliability and availability of the system, resulting in economic gain. A homogeneous Poisson process is usually used to model the failure occurrence process with a certain intensity.
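    When failures follow a homogeneous Poisson process, the time to observe r failures is Erlang (gamma) distributed, so the control limits of the proposed λr-type chart can be taken as Erlang quantiles; the λ-chart for single inter-failure times is the r = 1 special case. The sketch below computes such probability limits for assumed values of the in-control MTBF, r and false-alarm rate; it illustrates the charting idea, not the authors' exact procedure.

```python
import math

def erlang_cdf(t, r, mtbf):
    """P(time to the r-th failure <= t) for a Poisson process with the given MTBF."""
    x = t / mtbf
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(r))

def erlang_quantile(p, r, mtbf, hi=1e7):
    """Invert the Erlang CDF by bisection."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if erlang_cdf(mid, r, mtbf) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed in-control MTBF (hours), failures monitored per plotted point, and
# two-sided false-alarm probability (0.27%, analogous to 3-sigma limits).
mtbf, r, alpha = 500.0, 3, 0.0027
lcl = erlang_quantile(alpha / 2, r, mtbf)
ucl = erlang_quantile(1 - alpha / 2, r, mtbf)
print(f"t_{r} chart: LCL = {lcl:.0f} h, centre ~ {r * mtbf:.0f} h, UCL = {ucl:.0f} h")
# A plotted point below the LCL signals that the MTBF has dropped (reliability deterioration).
```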

  14. Reliability Assessment Of Wind Turbines

    Sørensen, John Dalsgaard


    Reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources. Therefore the turbine components should be designed to have sufficient reliability but also not be too costly (and safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades, substructure and foundation. But since the function of a wind turbine is highly dependent on many electrical and mechanical components as well as a control system, reliability aspects of these components are also discussed, and it is described how their reliability influences the reliability of the structural components. Two illustrative examples are presented considering uncertainty modeling, reliability assessment and calibration of partial safety factors for structural wind turbine components exposed…

  15. Nuclear weapon reliability evaluation methodology

    Wright, D.L. [Sandia National Labs., Albuquerque, NM (United States)


    This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon's stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the opportunity for review that occurs throughout the entire process, which assures a consistent approach and appropriate use of the data for reliability evaluation purposes.

  16. Reliability engineering theory and practice

    Birolini, Alessandro


    This book shows how to build in, evaluate, and demonstrate the reliability and availability of components, equipment, and systems. It presents the state of the art of reliability engineering, both in theory and practice, and is based on the author's more than 30 years of experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. This final edition extends and replaces all previous editions. New are, in particular, a strategy to mitigate incomplete coverage, a comprehensive introduction to human reliability with design guidelines and new models, and a refinement of reliability allocation, design guidelines for maintainability, and concepts related to regenerative stochastic processes. The set of problems for homework has been extended. Methods and tools are given in a way that they can be tailored to cover different reliability requirement levels and be used for safety analysis. Because of the Appendices…

  17. Lithium battery safety and reliability

    Levy, Samuel C.

    Lithium batteries have been used in a variety of applications for a number of years. As their use continues to grow, particularly in the consumer market, a greater emphasis needs to be placed on safety and reliability. There is a useful technique which can help to design cells and batteries having a greater degree of safety and higher reliability. This technique, known as fault tree analysis, can also be useful in determining the cause of unsafe behavior and poor reliability in existing designs.

  18. World War II Weather Record Transmittances

    National Oceanic and Atmospheric Administration, Department of Commerce — World War II Weather Record Transmittances are a record of the weather and meteorological data observed during World War II and transferred to the archive. It...

  19. When and why are reliable organizations favored?

    Ethiraj, Sendil; Yi, Sangyoon

    In the 1980s, organization theory witnessed a decade-long debate about the incentives and consequences of organizational change. Though the fountainhead of this debate was the observation that reliable organizations are the "consequence" rather than the "cause" of selection forces, much of the ensuing… shocks, reliable organizations can in fact outperform their less reliable counterparts if they can take advantage of the knowledge resident in their historical choices. While these results are counter-intuitive, the caveat is that our results are only an existence proof for our theory rather than a representation of reality. Thus, our attempt is best characterized as shining a spotlight on a small part of the larger canvas that constitutes the literature on organizational change.

  20. The impact of evidence reliability on sensitivity and bias in decision confidence.

    Boldt, Annika; de Gardelle, Vincent; Yeung, Nick


    Human observers effortlessly and accurately judge their probability of being correct in their decisions, suggesting that metacognitive evaluation is an integral part of decision making. It remains a challenge for most models of confidence, however, to explain how metacognitive judgments are formed and which internal signals influence them. While the decision-making literature has suggested that confidence is based on privileged access to the evidence that gives rise to the decision itself, other lines of research on confidence have commonly taken the view of a multicue model of confidence. The present study aims at manipulating one such cue: the perceived reliability of evidence supporting an initial decision. Participants made a categorical judgment of the average color of an array of eight colored shapes, for which we critically manipulated both the distance of the mean color from the category boundary (evidence strength) and the variability of colors across the eight shapes (evidence reliability). Our results indicate that evidence reliability has a stronger impact on confidence than evidence strength. Specifically, we found that evidence reliability affects metacognitive readout, the mapping from subjectively experienced certainty to expressed confidence, allowing participants to adequately adjust their confidence ratings to match changes in objective task performance across conditions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Interrater reliability of a phenotypic assessment tool for the ear morphology in microtia.

    Luquetti, Daniela V; Saltzman, Babette S; Sie, Kathleen C; Birgfeld, Craig B; Leroux, Brian G; Evans, Kelly N; Smartt, James M; Tieu, David D; Dudley, Daniel J; Heike, Carrie L


    The Elements of Morphology Standard Terminology working group published standardized definitions for external ear morphology. The primary objective of our study was to use these descriptions to evaluate the interrater reliability of specific features associated with microtia. We invited six raters from three different subspecialties to rate 100 ear photographs on 32 features. We calculated overall, within-specialty and within-professional-experience intraclass correlation coefficients (ICC) and 95% confidence intervals. A total of 600 possible observations were recorded for each feature. The overall interrater reliability ranged from 0.04 (95% CI: 0.00-0.14) for the width of the antihelix inferior crus to 0.93 (95% CI: 0.91-0.95) for the presence of the inferior crus of the antihelix. The reliability for quantitative characteristics such as the length or width of an ear structure was generally lower than the reliability for qualitative characteristics (e.g., presence or absence of an ear structure). Categories with very poor interrater reliability included antihelix inferior crus width (0.04, 95% CI: 0.00-0.14), crus helix extension (0.17, 95% CI: 0.00-0.37), and shape of the incisura (0.14, 95% CI: 0.01-0.27). There were no significant differences in reliability estimates by specialty or professional experience for most variables. Our study showed that it is feasible to systematically characterize many of the structures of the ear that are affected in microtia. We incorporated these descriptions into a standardized phenotypic assessment tool (PAT-Microtia) that might be used in multicenter research studies to identify sub-phenotypes for future studies of microtia.
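    The intraclass correlation coefficients reported here come from a multi-rater design; as a minimal illustration only (not the exact ICC variant or software used in the study), the sketch below computes a one-way random-effects ICC(1,1) from a small, hypothetical ratings table with raters as columns.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1); ratings has rows = targets, columns = raters."""
    n = len(ratings)            # number of targets (e.g. ear photographs)
    k = len(ratings[0])         # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical 0-4 severity ratings of 6 ear photographs by 3 raters.
ratings = [
    [4, 4, 3],
    [2, 3, 2],
    [0, 1, 0],
    [3, 3, 4],
    [1, 1, 1],
    [2, 2, 3],
]
print(f"ICC(1,1) = {icc_oneway(ratings):.2f}")
```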

  2. Chapter 3: Photovoltaic Module Stability and Reliability

    Jordan, Dirk; Kurtz, Sarah


    Profits realized from investment in photovoltaics will benefit from decades of reliable operation. Service life prediction through accelerated tests is only possible if indoor tests duplicate the power loss and failure modes observed in fielded systems. Therefore, detailing and quantifying power loss and failure modes is imperative. In the first section, we examine recent trends in degradation rates, the gradual power loss observed for different technologies, climates and other significant factors. In the second section, we provide a summary of the most commonly observed failure modes in fielded systems.

  3. Reliability of assessment of adherence to an antimicrobial treatment guideline

    Mol, PGM; Gans, ROB; Panday, PVN; Degener, JE; Laseur, M; Haaijer-Ruskamp, FM


    Assessment procedures for adherence to a guideline must be reliable and credible. The aim of this study was to explore the reliability of the assessment of adherence, taking account of the professional backgrounds of the observers. A secondary analysis explored the impact of case characteristics on assessment…

  4. US Daily Pilot Balloon Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Pilot Balloon observational forms for the United States. Taken by Weather Bureau and U.S. Army observers. Period of record 1918-1960. Records scanned from the NCDC...

  5. Reliability engineering theory and practice

    Birolini, Alessandro


    Presenting a solid overview of reliability engineering, this volume enables readers to build and evaluate the reliability of various components, equipment and systems. Current applications are presented, and the text itself is based on the author's 30 years of experience in the field.

  6. The Validity of Reliability Measures.

    Seddon, G. M.


    Demonstrates that some commonly used indices can be misleading in their quantification of reliability. The effects are most pronounced on gain or difference scores. Proposals are made to avoid sources of invalidity by using a procedure to assess reliability in terms of upper and lower limits for the true scores of each examinee. (Author/JDH)

  7. Software Reliability through Theorem Proving

    S.G.K. Murthy


    Full Text Available Improving the software reliability of mission-critical systems is widely recognised as one of the major challenges. Early detection of errors in software requirements, designs and implementation needs rigorous verification and validation techniques. Several techniques comprising static and dynamic testing approaches are used to improve the reliability of mission-critical software; however, it is hard to balance development time and budget with software reliability. Particularly when using dynamic testing techniques, it is hard to ensure software reliability, as exhaustive testing is not possible. On the other hand, formal verification techniques utilise mathematical logic to prove the correctness of the software based on given specifications, which in turn improves the reliability of the software. Theorem proving is a powerful formal verification technique that enhances software reliability for mission-critical aerospace applications. This paper discusses the issues related to software reliability and the use of theorem proving to enhance software reliability through formal verification, based on experiences with the STeP tool, using the conventional and internationally accepted methodologies, models and theorem-proving techniques available in the tool, without proposing a new model. Defence Science Journal, 2009, 59(3), pp. 314-317.

  8. Reliability engineering in RF CMOS


    In this thesis new developments are presented for reliability engineering in RF CMOS. Given the increase in use of CMOS technology in applications for mobile communication, also the reliability of CMOS for such applications becomes increasingly important. When applied in these applications, CMOS is typically referred to as RF CMOS, where RF stands for radio frequencies.

  9. Reliability in automotive ethernet networks

    Soares, Fabio L.; Campelo, Divanilson R.; Yan, Ying;


    This paper provides an overview of in-vehicle communication networks and addresses the challenges of providing reliability in automotive Ethernet in particular.

  10. Estimation of Bridge Reliability Distributions

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to …




    Full Text Available The veracity and secrecy of medical information transacted over the Internet are vulnerable to attack, yet the exchange of such details is necessary in order to make medical services available anywhere, anytime. Especially in a web-service-enabled system for hospital management, it becomes necessary to address these security issues. The services must guarantee message delivery to software applications with a chosen level of quality of service (QoS). This paper presents a VDM++-based specification for modelling a security framework for web services with non-repudiation, ensuring that a party in a dispute cannot repudiate or refute the validity of a statement or contract, and that the transaction happens in a reliable manner. The model presents the procedure and technical options for secure communication over the Internet with web services. Based on the model, Medi-Helper is developed using the technologies of WS-Security, WS-Reliability, WS-Policy and WSRN in order to create encrypted messages, so that patients' medical records are not tampered with when relayed over the Internet and are sent in a reliable manner. In addition to authentication, integrity and confidentiality, the proposed security framework for healthcare-based web services is thus equipped with non-repudiation, which is not included in many existing frameworks.


    B.Anni Princy


    Full Text Available A software reliability model treats failures as the outcome of a random process driven by two factors: emerging faults and the initial state values. The prevailing approach fits a logistic testing-effort function to a real-time dataset. The drawbacks of the logistic testing-effort model are effectively overcome by using a Pareto distribution instead. The proposed framework gives a systematic technique for identifying suitable candidate models and the best goodness of fit for a software reliability growth model, and its parameters are estimated in order to evaluate the reliability of a software system. The resulting reliability estimates can be used both as a quality indicator and for planning and controlling resources and development times based on the allocation of testing effort, enabling efficient computation and reliable measurement of a software system.

  13. Reliability estimation using kriging metamodel

    Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)


    In this study, a new method for reliability estimation using a kriging metamodel is proposed. The kriging metamodel can be determined by an appropriate sampling range and number of sampling points, because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. A first kriging metamodel is built from widely ranged sampling points, and the Advanced First Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. Then, a second kriging metamodel is constructed using additional sampling points within an updated sampling range, and Monte Carlo Simulation (MCS) is applied to the second kriging metamodel to evaluate the reliability. The proposed method is applied to numerical examples, and the results are almost equal to the reference reliability.
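    The two-stage procedure described above, fitting a surrogate to sampled limit-state evaluations and then running Monte Carlo on the surrogate, can be sketched with a bare-bones Gaussian-kernel (kriging-style) interpolator. The limit state, sampling plan and kernel below are toy assumptions, not the paper's DACE/AFORM machinery; the point is only to show the MCS step being applied to the cheap metamodel instead of the expensive model.

```python
import numpy as np

def limit_state(x):
    # Toy "expensive" limit-state function: failure when g(x) < 0.
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

def fit_kriging(X, y, theta=0.5, nugget=1e-6):
    """Bare-bones zero-mean Gaussian-kernel interpolator (kriging-style surrogate)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-theta * d2) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, y)

    def predict(Xq):
        k = np.exp(-theta * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return k @ alpha

    return predict

rng = np.random.default_rng(0)

# Stage 1: train the metamodel on a modest space-filling sample of the true model.
X_train = rng.uniform(-4.0, 4.0, size=(60, 2))
surrogate = fit_kriging(X_train, limit_state(X_train))

# Stage 2: crude Monte Carlo on the cheap surrogate with standard-normal inputs.
X_mc = rng.standard_normal((50_000, 2))
pf_surrogate = float(np.mean(surrogate(X_mc) < 0.0))
pf_direct = float(np.mean(limit_state(X_mc) < 0.0))   # possible only because the toy model is cheap
print(f"Pf via surrogate MCS = {pf_surrogate:.4f}, Pf via direct MCS = {pf_direct:.4f}")
```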

  14. Reliability-Based Code Calibration

    Faber, M.H.; Sørensen, John Dalsgaard


    The present paper addresses fundamental concepts of reliability-based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM-based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision-theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability-based code calibration of LRFD-based design codes.
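    The link between a FORM reliability index and the quantities a design code works with can be illustrated in a few lines: the failure probability is Pf = Φ(−β), and a partial safety factor for a lognormal resistance follows from the ratio of its characteristic (5%) value to the design value implied by the FORM sensitivity factor. The numbers below (target β, coefficient of variation, α_R = 0.8) are typical textbook assumptions, not the JCSS/CodeCal calibration itself.

```python
import math
from statistics import NormalDist

nd = NormalDist()

beta_target = 3.8                    # assumed target reliability index
pf = nd.cdf(-beta_target)
print(f"target beta = {beta_target}  ->  Pf = {pf:.2e}")

# Partial safety factor for a lognormal resistance R with coefficient of variation V_R:
# characteristic value = 5% fractile, design value from FORM with sensitivity alpha_R.
V_R, alpha_R = 0.15, 0.8
sigma_ln = math.sqrt(math.log(1.0 + V_R ** 2))
r_char = math.exp(nd.inv_cdf(0.05) * sigma_ln)          # relative to the median resistance
r_design = math.exp(-alpha_R * beta_target * sigma_ln)  # relative to the median resistance
gamma_R = r_char / r_design
print(f"partial safety factor gamma_R ≈ {gamma_R:.2f}")
```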

  15. Photovoltaic performance and reliability workshop

    Mrig, L. [ed.]


    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  16. Inter-rater reliability of cyclic and non-cyclic task assessment using the hand activity level in appliance manufacturing.

    Paulsen, Robert; Schwatka, Natalie; Gober, Jennifer; Gilkey, David; Anton, Dan; Gerr, Fred; Rosecrance, John


    This study evaluated the inter-rater reliability of the American Conference of Governmental Industrial Hygienists (ACGIH®) hand activity level (HAL), an observational ergonomic assessment method used to estimate physical exposure to repetitive exertions during task performance. Video recordings of 858 cyclic and non-cyclic appliance manufacturing tasks were assessed by sixteen pairs of raters using the HAL visual-analog scale. A weighted Pearson product-moment correlation coefficient was used to evaluate the agreement between the HAL scores recorded by each rater pair, and the mean weighted correlation coefficients for cyclic and non-cyclic tasks were calculated. Results indicated that the HAL is a reliable exposure assessment method for cyclic (r̄w = 0.69) and non-cyclic work tasks (r̄w = 0.68). When the two reliability scores were compared using a two-sample Student's t-test, no significant difference in reliability (p = 0.63) between these work task categories was found. This study demonstrated that the HAL may be a useful measure of exposure to repetitive exertions during cyclic and non-cyclic tasks.

  17. Uruguay - Surface Weather Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Surface weather observation forms for 26 stations in Uruguay. Period of record 1896-2005, with two to eight observations per day. Files created through a...

  18. Optimization by record dynamics

    Barettin, Daniele; Sibani, Paolo


    Large dynamical changes in thermalizing glassy systems are triggered by trajectories crossing record sized barriers, a behavior revealing the presence of a hierarchical structure in configuration space. The observation is here turned into a novel local search optimization algorithm dubbed record dynamics optimization, or RDO. RDO uses the Metropolis rule to accept or reject candidate solutions depending on the value of a parameter akin to the temperature and minimizes the cost function of the problem at hand through cycles where its ‘temperature’ is raised and subsequently decreased in order to expediently generate record high (and low) values of the cost function. Below, RDO is introduced and then tested by searching for the ground state of the Edwards-Anderson spin-glass model, in two and three spatial dimensions. A popular and highly efficient optimization algorithm, parallel tempering (PT), is applied to the same problem as a benchmark. RDO and PT turn out to produce solutions of similar quality for similar numerical effort, but RDO is simpler to program and additionally yields geometrical information on the system’s configuration space which is of interest in many applications. In particular, the effectiveness of RDO strongly indicates the presence of the above mentioned hierarchically organized configuration space, with metastable regions indexed by the cost (or energy) of the transition states connecting them.
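    The essential ingredients of the algorithm, Metropolis acceptance plus a cyclically raised and lowered 'temperature' while tracking record-low costs, can be sketched on a small 2D Edwards-Anderson instance. This is only an illustrative toy in that spirit, not the authors' RDO implementation or the benchmark setup of the paper.

```python
import math
import random

L = 8                                       # lattice side of a toy 2D Edwards-Anderson instance
random.seed(3)
# Random +/-1 couplings on horizontal and vertical bonds (periodic boundaries).
Jh = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
Jv = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def energy(s):
    """Total energy E = -sum over nearest-neighbour bonds of J_ij * s_i * s_j."""
    e = 0
    for i in range(L):
        for j in range(L):
            e -= Jh[i][j] * s[i][j] * s[i][(j + 1) % L]
            e -= Jv[i][j] * s[i][j] * s[(i + 1) % L][j]
    return e

def delta_e(s, i, j):
    """Energy change if spin (i, j) is flipped."""
    nb = (Jh[i][j] * s[i][(j + 1) % L] + Jh[i][(j - 1) % L] * s[i][(j - 1) % L] +
          Jv[i][j] * s[(i + 1) % L][j] + Jv[(i - 1) % L][j] * s[(i - 1) % L][j])
    return 2 * s[i][j] * nb

s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
current = energy(s)
best = current

# Metropolis sweeps while cycling the 'temperature' up and down, so the walk
# repeatedly escapes local minima and keeps generating new record-low energies.
for cycle in range(30):
    for T in (2.0, 1.5, 1.0, 0.6, 0.3, 0.1):
        for _ in range(5 * L * L):
            i, j = random.randrange(L), random.randrange(L)
            dE = delta_e(s, i, j)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
                current += dE
                best = min(best, current)

print("record (lowest) energy found:", best)
```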

  19. Sporadic aurorae observed in East Asia

    D. M. Willis


    Full Text Available All the accessible auroral observations recorded in Chinese and Japanese histories during the interval AD 1840–1911 are investigated in detail. Most of these auroral records have never been translated into a Western language before. The East Asian auroral reports provide information on the date and approximate location of each auroral observation, together with limited scientific information on the characteristics of the auroral luminosity, such as colour, duration, extent, position in the sky and approximate time of occurrence. The full translations of the original Chinese and Japanese auroral records are presented in an appendix, which contains bibliographic details of the various historical sources. (There are no known reliable Korean observations during this interval.) A second appendix discusses a few implausible "auroral" records, which have been rejected. The salient scientific properties of all exactly dated and reliable East Asian auroral observations in the interval AD 1840–1911 are summarised succinctly. By comparing the relevant scientific information on exactly dated auroral observations with the lists of great geomagnetic storms compiled by the Royal Greenwich Observatory, and also the tabulated values of the Ak (Helsinki) and aa (Greenwich and Melbourne) magnetic indices, it is found that 5 of the great geomagnetic storms (aa>150 or Ak>50) during either the second half of the nineteenth century or the first decade of the twentieth century are clearly identified by extensive auroral displays observed in China or Japan. Indeed, two of these great storms produced auroral displays observed in both countries on the same night. Conversely, at least 29 (69%) of the 42 Chinese and Japanese auroral observations occurred at times of weak-to-moderate geomagnetic activity (aa or Ak≤50). It is shown that these latter auroral displays are very similar to the more numerous (about 50) examples of sporadic…

  20. 50 CFR 37.52 - Records.


    ... special use permit, and reliability and accuracy of all data, information and reports submitted to the... WILDLIFE REFUGE SYSTEM GEOLOGICAL AND GEOPHYSICAL EXPLORATION OF THE COASTAL PLAIN, ARCTIC NATIONAL... and complete records relating to its exploratory activities and to all data and information,...

  1. Reliability analysis of wastewater treatment plants.

    Oliveira, Sílvia C; Von Sperling, Marcos


    This article presents a reliability analysis of 166 full-scale wastewater treatment plants operating in Brazil. Six different processes have been investigated, comprising septic tank+anaerobic filter, facultative pond, anaerobic pond+facultative pond, activated sludge, upflow anaerobic sludge blanket (UASB) reactors alone and UASB reactors followed by post-treatment. A methodology developed by Niku et al. [1979. Performance of activated sludge process and reliability-based design. J. Water Pollut. Control Assoc., 51(12), 2841-2857] is used for determining the coefficients of reliability (COR), in terms of the compliance of effluent biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), total nitrogen (TN), total phosphorus (TP) and fecal or thermotolerant coliforms (FC) with discharge standards. The design concentrations necessary to meet the prevailing discharge standards and the expected compliance percentages have been calculated from the COR obtained. The results showed that few plants, under the observed operating conditions, would be able to present reliable performances considering the compliance with the analyzed standards. The article also discusses the importance of understanding the lognormal behavior of the data in setting up discharge standards, in interpreting monitoring results and compliance with the legislation.
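    For lognormally distributed effluent concentrations, the Niku-style coefficient of reliability linking the design (mean) concentration m_x to the discharge standard X_s follows from the lognormal quantile relation: COR = sqrt(CV² + 1)·exp(−Z_{1−α}·sqrt(ln(CV² + 1))), so that m_x = COR·X_s. A minimal sketch with assumed CV, standard and compliance target (not values from the 166 plants studied):

```python
import math
from statistics import NormalDist

def coefficient_of_reliability(cv, reliability):
    """COR for a lognormal effluent series: design mean = COR * discharge standard."""
    z = NormalDist().inv_cdf(reliability)          # e.g. 0.95 -> 1.645
    s2 = math.log(cv ** 2 + 1.0)                   # variance of ln(X)
    return math.sqrt(cv ** 2 + 1.0) * math.exp(-z * math.sqrt(s2))

# Assumed values: effluent BOD standard of 60 mg/L, CV of 0.6, 95% compliance target.
standard, cv, target = 60.0, 0.6, 0.95
cor = coefficient_of_reliability(cv, target)
print(f"COR = {cor:.3f}  ->  design mean concentration ≈ {cor * standard:.1f} mg/L")
```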

  2. Electronic parts reliability data 1997

    Denson, William; Jaworski, Paul; Mahar, David


    This document contains reliability data on both commercial and military electronic components for use in reliability analyses. It contains failure rate data on integrated circuits, discrete semiconductors (diodes, transistors, optoelectronic devices), resistors, capacitors, and inductors/transformers, all of which were obtained from the field usage of electronic components. At 2,000 pages, the format of this document is the same as RIAC's popular NPRD document which contains reliability data on nonelectronic component and electronic assembly types. Data includes part descriptions, quality level, application environments, point estimates of failure rate, data sources, number of failures, total operating hours, miles, or cycles, and detailed part characteristics.

  3. California dragonfly and damselfly (Odonata) database: temporal and spatial distribution of species records collected over the past century.

    Ball-Damerow, Joan E; Oboyski, Peter T; Resh, Vincent H


    The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and from enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879-2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records before 1976 and after 1979. The average latitude of where records occurred increased by 78 km over these time periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distribution may be generally shifting northwards as temperature warms and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data.

  4. Observing participating observation

    Keiding, Tina Bering


    Current methodology concerning participating observation in general leaves the act of observation unobserved. Approaching participating observation from systems theory offers fundamental new insights into the topic. Observation is always participation. There is no way to escape becoming a participant and, as such, co-producer of the observed phenomenon. There is no such thing as a neutral or objective description. As observation deals with differences and process meaning, all descriptions are reconstructions and interpretations of the observed. Hence, the idea of neutral descriptions as well as the idea of the naïve observer becomes a void. Not recognizing and observing oneself as observer and co-producer of empirical data simply leaves the process of observation as the major unobserved absorber of contingency in data production based on participating observation.

  6. Presidential Electronic Records Library

    National Archives and Records Administration — PERL (Presidential Electronic Records Library) used to ingest and provide internal access to the Presidential electronic Records of the Reagan, Bush, and Clinton...

  7. UARS spacecraft recorder


    The objective was the design, development, and fabrication of UARS spacecraft recorders. The UARS recorder is a tailored configuration of the RCA Standard Tape recorder STR-108. The specifications and requirements are reviewed.

  8. CMS Records Schedule

    U.S. Department of Health & Human Services — The CMS Records Schedule provides disposition authorizations approved by the National Archives and Records Administration (NARA) for CMS program-related records...

  9. Reliability Modeling of Wind Turbines

    Kostandyan, Erik

    … Thus, models of reliability should be developed and applied in order to quantify the residual life of the components. Damage models based on physics of failure combined with stochastic models describing the uncertain parameters are imperative for development of cost-optimal decision tools for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied and uncertainties are quantified. Further, estimation of annual failure probability for structural components taking into account possible faults in electrical or mechanical systems is considered. For a representative structural failure mode, a probabilistic model is developed that incorporates grid loss failures…

  10. Reliability Analysis of Wind Turbines

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS) the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according…

  11. Reliability analysis in intelligent machines

    Mcinroy, John E.; Saridis, George N.


    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed, such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed-torque control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  12. Reliability Assessment of Wind Turbines

    Sørensen, John Dalsgaard


    …but manufactured in series production based on many component tests, some prototype tests and zero-series wind turbines. These characteristics influence the reliability assessment, where the focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability, both for offshore and onshore wind turbines. The wind turbine components should be designed to have a sufficient reliability level with respect to both extreme and fatigue loads, but also not be too costly (and safe). In probabilistic design the single components are designed to a level of reliability which accounts for an optimal balance between failure consequences, cost of operation & maintenance, material costs and the probability of failure. Furthermore, using a probabilistic design basis…

  13. On Bayesian System Reliability Analysis

    Soerensen Ringi, M.


    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory, and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model takes into account the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.
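    The Bayesian view described here can be illustrated with the simplest possible case: a Beta prior on a component's survival probability updated with pass/fail test data, where evidence from similar components in other environments enters as a more or less informative prior. This generic conjugate update is only an illustration, not the dependence model developed in the thesis.

```python
def posterior(a_prior, b_prior, successes, failures):
    """Conjugate Beta-Binomial update of a component's survival probability."""
    return a_prior + successes, b_prior + failures

def mean(a, b):
    return a / (a + b)

# Informative prior built from field data on similar components in other environments,
# assumed here to carry roughly '19 successes in 20 demands' worth of evidence.
a0, b0 = 19.0, 1.0

# New test evidence on the component of interest: 48 successes, 2 failures.
a1, b1 = posterior(a0, b0, successes=48, failures=2)

print(f"prior reliability estimate    : {mean(a0, b0):.3f}")
print(f"posterior reliability estimate: {mean(a1, b1):.3f}")
# The posterior mean is also the predictive probability that the next demand succeeds.
```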

  14. VCSEL reliability: a user's perspective

    McElfresh, David K.; Lopez, Leoncio D.; Melanson, Robert; Vacar, Dan


    VCSEL arrays are being considered for use in interconnect applications that require high speed, high bandwidth, high density, and high reliability. In order to better understand the reliability of VCSEL arrays, we initiated an internal project at SUN Microsystems, Inc. In this paper, we present preliminary results of an ongoing accelerated temperature-humidity-bias stress test on VCSEL arrays from several manufacturers. This test revealed no significant differences between the reliability of AlGaAs, oxide confined VCSEL arrays constructed with a trench oxide and mesa for isolation. This test did find that the reliability of arrays needs to be measured on arrays and not be estimated with the data from singulated VCSELs as is a common practice.

  15. Innovations in power systems reliability

    Santora, Albert H; Vaccaro, Alfredo


    Electrical grids are among the world's most reliable systems, yet they still face a host of issues, from aging infrastructure to questions of resource distribution. Here is a comprehensive and systematic approach to tackling these contemporary challenges.

  16. Observing participating observation

    Keiding, Tina Bering


    Current methodology concerning participating observation in general leaves the act of observation unobserved. Approaching participating observation from systems theory offers fundamental new insights into the topic. Observation is always participation. There is no way to escape becoming a participant and, as such, co-producer of the observed phenomenon. There is no such thing as a neutral or objective description. As observation deals with differences and process meaning, all descriptions are reconstructions and interpretations of the observed. Hence, the idea of neutral descriptions as well as the idea of the naïve observer becomes a void. Not recognizing and observing oneself as observer and co-producer of empirical data simply leaves the process of observation as the major unobserved absorber of contingency in data production based on participating observation.

  18. Nurses assessing pain with the Nociception Coma Scale: interrater reliability and validity.

    Vink, Peter; Eskes, Anne Maria; Lindeboom, Robert; van den Munckhof, Pepijn; Vermeulen, Hester


    The Nociception Coma Scale (NCS) is a pain observation tool developed for patients with disorders of consciousness (DOC) due to acquired brain injury (ABI). The aim of this study was to assess the interrater reliability of the NCS and NCS-R among nurses for the assessment of pain in ABI patients with DOC. A secondary aim was further validation of both scales by assessing their discriminating abilities for the presence or absence of pain. Hospitalized patients with ABI (n = 10) were recorded on film during three conditions: baseline, after tactile stimulation, and after noxious stimulation. All stimulations were part of daily treatment for these patients. The 30 recordings were assessed with the NCS and NCS-R by 27 nurses from three university hospitals in the Netherlands. Each nurse viewed 9 to 12 recordings, totaling 270 assessments. Interrater reliability of the NCS/NCS-R items and total scores was estimated by intraclass correlations (ICC), which showed excellent and equal average-measures reliability for the NCS and NCS-R total scores (ICC 0.95) and item scores (range 0.87-0.95). A secondary analysis was performed to assess differences in ICCs by nurses' education and experience and to assess the scales' discriminating properties for the presence of pain. The NCS and NCS-R are valid and reproducible scales that can be used by nurses with an associate (of science) in nursing degree or a baccalaureate (of science) in nursing degree. It seems that more experience with ABI patients is not a predictor of good agreement in the assessment of the NCS(-R).

  19. Accelerator Availability and Reliability Issues

    Steve Suhring


    Maintaining reliable machine operations for existing machines as well as planning for future machines' operability present significant challenges to those responsible for system performance and improvement. Changes to machine requirements and beam specifications often reduce overall machine availability in an effort to meet user needs. Accelerator reliability issues from around the world will be presented, followed by a discussion of the major factors influencing machine availability.

  20. Software Reliability Experimentation and Control

    Kai-Yuan Cai


    This paper classifies software researches as theoretical researches, experimental researches, and engineering researches, and is mainly concerned with the experimental researches with focus on software reliability experimentation and control. The state-of-the-art of experimental or empirical studies is reviewed. A new experimentation methodology is proposed, which is largely theory discovering oriented. Several unexpected results of experimental studies are presented to justify the importance of software reliability experimentation and control. Finally, a few topics that deserve future investigation are identified.

  1. Digital Audio Legal Recorder

    Department of Transportation — The Digital Audio Legal Recorder (DALR) provides the legal recording capability between air traffic controllers, pilots and ground-based air traffic control TRACONs...

  2. Guinness World Records: Presenting certificates to CERN

    Rao, Achintya


    The latest edition of the Guinness Book of World Records features CERN, crediting the CMS and ATLAS collaborations for the first observation of a Higgs boson. On 20 August, representatives of Guinness World Records visit CERN to hand over certificates for the record.

  3. MEMS reliability: coming of age

    Douglass, Michael R.


    In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.

  4. An Approach to Online Reliability Evaluation and Prediction of Mechanical Transmission Components

    Matthias Maisch; Bernd Bertsche; Ralf Hettich


    New development trends in electronic operating data logging systems enable the classification, recording and storage of load spectra of mechanical transmission components during usage. Based on this fact, the application of online reliability evaluation and reliability prediction procedures is presented. Different methods are considered to calculate reliability, depending on the actual load spectrum and a Wöhler curve. The prediction of the reliability trend is analyzed by the application of time series models. For this purpose, the exponential smoothing model, the regression model, and the ARIMA model are considered to evaluate the data and predict decreasing reliability trends during usage.
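    Of the time-series models mentioned, one of the simplest that can actually extrapolate a declining reliability is Holt's linear-trend exponential smoothing. The sketch below applies it to a hypothetical sequence of online reliability estimates; the smoothing constants and data are assumptions, not the authors' ARIMA setup.

```python
def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Holt's linear-trend exponential smoothing; returns forecasts 1..horizon steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Hypothetical reliability values computed online from successive load-spectrum updates.
reliability_history = [0.999, 0.998, 0.997, 0.995, 0.993, 0.990, 0.987, 0.983]
for h, r in enumerate(holt_forecast(reliability_history, horizon=3), start=1):
    print(f"forecast {h} interval(s) ahead: R ≈ {r:.3f}")
```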

  5. Reliability of spontaneous electrodermal activity in humans as a function of sleep stages.

    Freixa i Baqué, E


    This study was designed to examine the reliability of spontaneous electrodermal activity (EDA) as a function of the sleep stages in human subjects. Recordings were made from 10 volunteer paid male students during four complete nights. The results show that: (a) the reliability of EDA varies as a function of sleep stage for both the frequency and amplitude parameters, although in different ways: frequency reliability shows a U-shaped curve, whereas amplitude reliability grows monotonically with the depth of sleep; and (b) paradoxical sleep appears to be the most reliable stage for both the frequency and amplitude variables. These results are compared to those obtained in waking human subjects and in sleeping cats.

  6. Estudo da validade e confiabilidade intra e interobservador da versão modificada do teste de Schöber modificado em indivíduos com lombalgia Study of validity and intra and inter-observer reliability of modified-modified Schöber test in subjects with low-back pain

    Christiane de Souza Guerino Macedo


    Full Text Available In patients with low-back pain, the lumbar spine range of motion (ROM) is often measured by the modified version of the modified Schöber test (MMST), but its psychometric properties have not been ascertained for clinical use. The purpose here was to verify the intra- and inter-observer validity and reliability of the MMST in subjects with low-back pain, comparing the ROM measures obtained with those obtained by radiography, taken as the gold standard. The study involved 20 volunteers with low-back pain, of both sexes, employees at a university hospital. The MMST was applied twice by each of two examiners. The measures obtained by the test and by radiography were compared using the Pearson correlation coefficient, yielding r=0.14, i.e. a weak correlation. The intraclass correlation coefficient (ICC) of the MMST was 0.96 (95% CI 0.91; 0.98) intra-observer and 0.93 (95% CI 0.84; 0.97) inter-observer, indicating high reliability; the Bland & Altman analysis showed high intra- and inter-observer agreement, with values of -0.21 and -0.28, respectively. Although high intra- and inter-observer reliability was found in the application of the modified version of the modified Schöber test, it showed low validity for measuring lumbar spine ROM when compared to the gold standard.
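    The Bland & Altman analysis referred to in this record summarises agreement as the mean of the paired differences (bias) together with limits of agreement at bias ± 1.96 SD. A minimal sketch on hypothetical repeated MMST readings in centimetres, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(x, y)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical MMST readings (cm of lumbar excursion) by two observers on 10 subjects.
obs_1 = [5.2, 4.8, 6.1, 5.5, 4.9, 6.3, 5.0, 5.8, 4.6, 5.4]
obs_2 = [5.4, 4.9, 6.0, 5.8, 5.1, 6.2, 5.3, 5.9, 4.9, 5.5]

bias, (lo, hi) = bland_altman(obs_1, obs_2)
print(f"bias = {bias:.2f} cm, 95% limits of agreement = ({lo:.2f}, {hi:.2f}) cm")
```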

  7. The Impact of the Revised Sunspot Record on Solar Irradiance Reconstructions

    Kopp, G.; Krivova, N.; Wu, C. J.; Lean, J.


    Reliable historical records of the total solar irradiance (TSI) are needed to assess the extent to which long-term variations in the Sun's radiant energy that is incident upon Earth may exacerbate (or mitigate) the more dominant warming in recent centuries that is due to increasing concentrations of greenhouse gases. We investigate the effects that the new Sunspot Index and Long-term Solar Observations (SILSO) sunspot-number time series may have on model reconstructions of the TSI. In contemporary TSI records, variations on timescales longer than about a day are dominated by the opposing effects of sunspot darkening and facular brightening. These two surface magnetic features, retrieved either from direct observations or from solar-activity proxies, are combined in TSI models to reproduce the current TSI observational record. Indices that manifest solar-surface magnetic activity, in particular the sunspot-number record, then enable reconstructing historical TSI. Revisions of the sunspot-number record therefore affect the magnitude and temporal structure of TSI variability on centennial timescales according to the model reconstruction methods that are employed. We estimate the effects of the new SILSO record on two widely used TSI reconstructions, namely the NRLTSI2 and the SATIRE models. We find that the SILSO record has little effect on either model after 1885, but leads to solar-cycle fluctuations with greater amplitude in the TSI reconstructions prior to 1885. This suggests that many eighteenth- and nineteenth-century cycles could be similar in amplitude to those of the current Modern Maximum. TSI records based on the revised sunspot data do not suggest a significant change in Maunder Minimum TSI values, and from comparing this era to the present, we find only very small potential differences in the estimated solar contributions to the climate with this new sunspot record.
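
    The general proxy-regression idea described above (TSI modelled as a quiet-Sun baseline plus facular brightening minus sunspot darkening, with coefficients fitted against the observational record) can be sketched as follows. All index values and the fitted coefficients are invented for illustration; this is not the NRLTSI2 or SATIRE algorithm.

```python
import numpy as np

# Invented proxy indices and observed TSI values (W/m^2), for illustration only.
sunspot_darkening = np.array([0.2, 0.5, 0.9, 1.3, 0.8, 0.4, 0.1])
facular_brightening = np.array([0.3, 0.7, 1.2, 1.6, 1.1, 0.6, 0.2])
tsi_observed = np.array([1360.9, 1361.0, 1361.1, 1361.2, 1361.1, 1361.0, 1360.9])

# Fit TSI ~ quiet-Sun baseline + a*faculae - b*spots by ordinary least squares.
design = np.column_stack([np.ones_like(tsi_observed),
                          facular_brightening,
                          -sunspot_darkening])
coeffs, *_ = np.linalg.lstsq(design, tsi_observed, rcond=None)
baseline, a_fac, b_spot = coeffs

# Reconstruct TSI for an epoch where only the proxy indices are available.
tsi_reconstructed = baseline + a_fac * facular_brightening - b_spot * sunspot_darkening
print("fitted baseline, facular and spot coefficients:", np.round(coeffs, 3))
print("reconstructed TSI:", np.round(tsi_reconstructed, 2))
```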

  8. Chronic depression: development and evaluation of the luebeck questionnaire for recording preoperational thinking (LQPT

    Kühnen Tanja


    Full Text Available Abstract Background: A standardized instrument for recording the specific cognitive psychopathology of chronically depressed patients has not yet been developed. Up until now, preoperational thinking of chronically depressed patients has only been described in case studies or through the external observations of therapists. The aim of this study was to develop and evaluate a standardized self-assessment instrument for measuring preoperational thinking that sufficiently conforms to the quality criteria of test theory. Methods: The "Luebeck Questionnaire for Recording Preoperational Thinking" (LQPT) was developed and evaluated using a German sample consisting of 30 episodically depressed, 30 chronically depressed and 30 healthy volunteers. As an initial step the questionnaire was subjected to an item analysis and a final test form was compiled. In a second step, reliability and validity tests were performed. Results: Overall, the results of this study showed that the LQPT is a useful, reliable and valid instrument. The reliability (split-half reliability 0.885; internal consistency 0.901) and the correlations with other instruments for measuring related constructs (control beliefs, interpersonal problems, stress management) proved to be satisfactory. Chronically depressed patients, episodically depressed patients and healthy volunteers could be distinguished from one another in a statistically significant manner (p < …). Conclusion: The questionnaire fulfilled the classical test quality criteria. With the LQPT there is an opportunity to test the theory underlying the CBASP model.
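
    The reliability figures quoted above (split-half reliability and internal consistency) can be computed from an item-response matrix as sketched below. The item scores are invented for illustration, and the odd/even split is an assumed convention, since the abstract does not describe how the halves were formed.

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half_spearman_brown(items):
    """Correlate odd/even half scores and apply the Spearman-Brown correction."""
    items = np.asarray(items, dtype=float)
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Invented 5-point Likert responses (8 respondents x 6 items), for illustration only.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(8, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(8, 6)), 1, 5)

print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))
print("Spearman-Brown split-half:", round(split_half_spearman_brown(responses), 3))
```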

  9. Optimal Implementations for Reliable Circadian Clocks

    Hasegawa, Yoshihiko; Arita, Masanori


    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
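
    To make the notion of a phase-response curve (PRC) with a dead zone concrete, here is a minimal phase-model sketch: a single oscillator dφ/dt = ω + Z(φ)·L(t) driven by a periodic light input, with a piecewise PRC containing dead-zone, delay and advance regions. The functional form and all constants are illustrative assumptions, not the model of Hasegawa and Arita.

```python
import numpy as np

def prc(phase):
    """Piecewise phase-response curve (rad shift per unit light), with a dead zone."""
    phase = phase % (2 * np.pi)
    if phase < np.pi / 2:          # subjective day: dead zone, no phase shift
        return 0.0
    elif phase < 3 * np.pi / 2:    # early subjective night: phase delays
        return -0.15
    else:                          # late subjective night: phase advances
        return 0.20

def light(t, period=24.0):
    """Square-wave light input: lights on for the first 12 h of each cycle."""
    return 1.0 if (t % period) < 12.0 else 0.0

# Euler integration of dphi/dt = omega + Z(phi) * L(t) over 10 days.
omega = 2 * np.pi / 23.5   # free-running period of 23.5 h (assumed)
dt, t_end = 0.01, 240.0
phi, t = 0.0, 0.0
while t < t_end:
    phi += (omega + prc(phi) * light(t)) * dt
    t += dt

effective_period = 2 * np.pi * t_end / phi   # rough estimate from mean angular velocity
print("effective period (h):", round(effective_period, 2))
```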

  10. Reliability of plantar pressure platforms.

    Hafer, Jocelyn F; Lenhoff, Mark W; Song, Jinsup; Jordan, Joanne M; Hannan, Marian T; Hillstrom, Howard J


    Plantar pressure measurement is common practice in many research and clinical protocols. While the accuracy of some plantar pressure measuring devices and methods for ensuring consistency in data collection on plantar pressure measuring devices have been reported, the reliability of different devices when testing the same individuals is not known. This study calculated intra-mat, intra-manufacturer, and inter-manufacturer reliability of plantar pressure parameters as well as the number of plantar pressure trials needed to reach a stable estimate of the mean for an individual. Twenty-two healthy adults completed ten walking trials across each of two Novel emed-x(®) and two Tekscan MatScan(®) plantar pressure measuring devices in a single visit. Intraclass correlation (ICC) was used to describe the agreement between values measured by different devices. All intra-platform reliability correlations were greater than 0.70. All inter-emed-x(®) reliability correlations were greater than 0.70. Inter-MatScan(®) reliability correlations were greater than 0.70 in 31 and 52 of 56 parameters when looking at a 10-trial average and a 5-trial average, respectively. Inter-manufacturer reliability including all four devices was greater than 0.70 for 52 and 56 of 56 parameters when looking at a 10-trial average and a 5-trial average, respectively. All parameters reached a value within 90% of an unbiased estimate of the mean within five trials. Overall, reliability results are encouraging for investigators and clinicians who may have plantar pressure data sets that include data collected on different devices.
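
    One way to estimate how many trials are needed before the running mean stabilizes is sketched below: the cumulative mean of successive trials is compared with the mean of all available trials, and the first trial count at which it falls within a 10% band is reported. The tolerance and the example pressure values are assumptions for illustration; the paper's exact criterion may differ.

```python
import numpy as np

def trials_to_stable_mean(trials, tolerance=0.10):
    """Return the first n at which the cumulative mean of the first n trials
    falls within `tolerance` of the all-trials mean (a simple stability criterion)."""
    trials = np.asarray(trials, dtype=float)
    reference = trials.mean()
    cumulative_means = np.cumsum(trials) / np.arange(1, len(trials) + 1)
    within = np.abs(cumulative_means - reference) <= tolerance * abs(reference)
    return int(np.argmax(within)) + 1 if within.any() else None

# Invented peak-pressure values (kPa) from ten walking trials, for illustration only.
peak_pressure = [412, 398, 431, 405, 419, 402, 425, 410, 408, 417]
print("trials needed:", trials_to_stable_mean(peak_pressure))
```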

  11. Electronic Health Records

    KidsHealth (For Teens): an overview of electronic health records (EHRs), also called electronic medical records.

  12. Your Medical Records

    KidsHealth (For Teens): an overview of what medical records are and what they contain (available in Spanish as "Tus historias clínicas").

  13. Surgical medical record

    Bulow, S.


    A medical record is presented on the basis of selected linguistic pearls collected over the years from surgical case records. Publication date: 2008/12/15.

  14. Electronic Health Records

    KidsHealth (For Teens): an overview of electronic health records (EHRs), also called electronic medical records.

  15. Spoken Records. Third Edition.

    Roach, Helen

    Surveying 75 years of accomplishment in the field of spoken recording, this reference work critically evaluates commercially available recordings selected for excellence of execution, literary or historical merit, interest, and entertainment value. Some types of spoken records included are early recording, documentaries, lectures, interviews,…

  16. Development and reliability and validity testing of an observation table of influencing factors for catheter-associated urinary tract infections in critical patients

    杨青兰; 曾登芬; 刘蕾; 何海燕; 杨文群; 伍亚舟


    Objective: To develop an observation table of influencing factors for catheter-associated urinary tract infections in critical patients and to verify its reliability and validity, providing an effective tool for assessing these influencing factors. Methods: Initial items were generated through literature review and brainstorming, and the final items were determined through expert interviews and focus-group discussion. In June 2014, using a convenience sampling method, 130 critically ill patients with indwelling urinary catheters at Daping Hospital of the Third Military Medical University were observed. The table was evaluated by item analysis, exploratory factor analysis, and reliability and validity testing. Results: The observation table consisted of five dimensions and 26 items. The cumulative variance contribution was 73.752%, the Cronbach's alpha coefficient was 0.869, and the Spearman-Brown split-half coefficient was 0.828. The correlation coefficients between each factor and the total score ranged from 0.652 to 0.873 (P<0.01), and the correlations among the factors ranged from 0.311 to 0.823 (P<0.01). Conclusion: The observation table has good reliability and validity and can be used as a tool for assessing the influencing factors of catheter-associated urinary tract infections in critical patients.

  17. 78 FR 38311 - Reliability Technical Conference Agenda


    Federal Energy Regulatory Commission, Reliability Technical Conference, Docket Nos. AD13-6-000 and RC11-6-004 (North American Electric Reliability Corporation). Agenda questions include whether the Find, Fix, Track, and Report program has enhanced reliability and the status of the NERC Reliability …

  18. Observation of an excess at 30 GeV in the opposite sign di-muon spectra of ${\rm Z} \to b\overline{b} + {\rm X}$ events recorded by the ALEPH experiment at LEP

    Heister, Arno


    The re-analysis of the archived data recorded at the ${\rm Z}^{0}$ resonance by the ALEPH experiment at LEP during the years 1992-1995 shows an excess in the opposite-sign di-muon mass spectra at 30.40 GeV in events containing b quarks. The excess has a natural width of 1.78 GeV. A compatible but smaller excess is visible in the opposite-sign di-electron mass spectrum as well.

  19. Reliability of Spike Timing in Neocortical Neurons

    Mainen, Zachary F.; Sejnowski, Terrence J.


    It is not known whether the variability of neural activity in the cerebral cortex carries information or reflects noisy underlying mechanisms. In an examination of the reliability of spike generation using recordings from neurons in rat neocortical slices, the precision of spike timing was found to depend on stimulus transients. Constant stimuli led to imprecise spike trains, whereas stimuli with fluctuations resembling synaptic activity produced spike trains with timing reproducible to less than 1 millisecond. These data suggest a low intrinsic noise level in spike generation, which could allow cortical neurons to accurately transform synaptic input into spike sequences, supporting a possible role for spike timing in the processing of cortical information by the neocortex.

  20. Photovoltaic concentrator module reliability: Failure modes and qualification

    Richards, E.H.


    The purpose of this paper is to discuss the current issues of interest in PV concentrator module reliability. Before describing in detail the reliability concerns about PV concentrator modules, it should be emphasized that, with proper design and attention to quality control, there is nothing to prevent concentrator modules from being as reliable as crystalline-silicon flat-plate modules have proven to be. Concentrator modules tested outdoors, as well as in the first-generation systems, have generally been reliable, and no degradation in cell output has been observed. Also, although they are not included in this paper, there are a few items currently of concern with the reliability of other PV module technologies that are not issues with PV concentrator technology, such as the stability of amorphous-silicon efficiencies and concerns about EVA encapsulation.

  1. Reliability Based Ship Structural Design

    Dogliani, M.; Østergaard, C.; Parmentier, G.;


    This paper deals with the development of different methods that allow the reliability-based design of ship structures to be transferred from the area of research to the systematic application in current design. It summarises the achievements of a three-year collaborative research project dealing with developments of models of load effects and of structural collapse adopted in reliability formulations which aim at calibrating partial safety factors for ship structural design. New probabilistic models of still-water load effects are developed both for tankers and for containerships. New results are presented ... structure of several tankers and containerships. The results of the reliability analysis were the basis for the definition of a target safety level which was used to assess the partial safety factors suitable for use in a new design rules format to be adopted in modern ship structural design. Finally...

  2. Reliability Modeling of Wind Turbines

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter ... the actions should be made and the type of actions requires knowledge on the accumulated damage or degradation state of the wind turbine components. For offshore wind turbines, the action times could be extended due to weather restrictions and result in damage or degradation increase of the remaining ... for Operation & Maintenance planning. Concentrating efforts on the development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...

  3. Reliability Assessment of Wind Turbines

    Sørensen, John Dalsgaard


    Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure ... but manufactured in series production based on many component tests, some prototype tests and zero-series wind turbines. These characteristics influence the reliability assessment, where the focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when ... compared to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have a sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  4. Reliability assessment of Wind turbines

    Sørensen, John Dalsgaard


    Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure ... but manufactured in series production based on many component tests, some prototype tests and zero-series wind turbines. These characteristics influence the reliability assessment, where the focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when ... compared to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have a sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  5. Reliability of Wave Energy Converters

    Ambühl, Simon

    There are many different working principles for wave energy converters (WECs) which are used to produce electricity from waves. In order for WECs to become successful and more competitive with other renewable electricity sources, the consideration of the structural reliability of WECs is essential. Structural reliability considerations and optimizations impact operation and maintenance (O&M) costs as well as the initial investment costs. Furthermore, there is a control system for WEC applications which defines the harvested energy but also the loads onto the structure. Therefore, extreme loads but also ... WEPTOS. Calibration of safety factors is performed for welded structures at the Wavestar device, including different control systems for harvesting energy from waves. In addition, a case study of different O&M strategies for WECs is discussed, and an example of reliability-based structural optimization...

  6. Reliability Characteristics of Power Plants

    Zbynek Martinek


    Full Text Available This paper describes the phenomenon of reliability of power plants. It explains the terms connected with this topic, as their proper understanding is important for understanding the relations and equations which model possible real situations. The reliability phenomenon is analysed using both the exponential distribution and the Weibull distribution. The results of our analysis are specific equations giving information about the characteristics of the power plants, the mean time of operation and the probability of failure-free operation. The equations solved for the Weibull distribution take into account the failures as well as the actual operating hours. Thanks to our results, we are able to create a model of dynamic reliability for the prediction of future states. It can be useful for improving the current situation of the unit as well as for creating an optimal maintenance plan and thus have an impact on the overall economics of the operation of these power plants.
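
    As a concrete illustration of the quantities mentioned above (probability of failure-free operation and mean time of operation) under a Weibull model, the sketch below evaluates R(t) = exp(-(t/η)^β) and the mean time to failure MTTF = η·Γ(1 + 1/β). The shape and scale parameters are invented for illustration and are not taken from the paper.

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability of failure-free operation up to time t under a Weibull model."""
    return math.exp(-((t / eta) ** beta))

def weibull_mttf(beta, eta):
    """Mean time to failure of a Weibull distribution: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# Assumed parameters for a power-plant unit (illustrative only):
beta, eta = 1.8, 12000.0   # shape > 1 implies wear-out behaviour; scale in hours

for t in (1000.0, 5000.0, 10000.0):
    print(f"R({t:>7.0f} h) = {weibull_reliability(t, beta, eta):.4f}")
print("MTTF =", round(weibull_mttf(beta, eta), 1), "hours")
```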

  7. Component reliability for electronic systems

    Bajenescu, Titu-Marius I


    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  8. New Approaches to Reliability Assessment

    Ma, Ke; Wang, Huai; Blaabjerg, Frede


    Power electronics are facing continuous pressure to be cheaper and smaller, have a higher power density, and, in some cases, also operate at higher temperatures. At the same time, power electronics products are expected to have reduced failures because it is essential for reducing the cost of energy. New approaches for reliability assessment are being taken in the design phase of power electronics systems based on the physics-of-failure in components. In this approach, many new methods, such as multidisciplinary simulation tools, strength testing of components, translation of mission profiles, and statistical analysis, are involved to enable better prediction and design of reliability for products. This article gives an overview of the new design flow in the reliability engineering of power electronics from the system-level point of view and discusses some of the emerging needs for the technology...

  9. Structural Optimization with Reliability Constraints

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle


    During the last 25 years considerable progress has been made in the fields of structural optimization and structural reliability theory. In classical deterministic structural optimization all variables are assumed to be deterministic. Due to the unpredictability of loads and strengths of actual ... structures it is now widely accepted that structural problems are non-deterministic. Therefore, some of the variables have to be modelled as random variables/processes and a reliability-based design philosophy should be used; see Cornell [1], Moses [2], Ditlevsen [3] and Thoft-Christensen & Baker [4] ... In this paper we consider only structures which can be modelled as systems of elasto-plastic elements, e.g. frame and truss structures. In section 2 a method to evaluate the reliability of such structural systems is presented. Based on a probabilistic point of view a modern structural optimization problem...

  10. Reliable and valid assessment of performance in thoracoscopy

    Konge, Lars; Lehnert, Per; Hansen, Henrik Jessen


    ... respect for tissue, precision of operative technique, creation and placement of ports, localization of pathologic tissue, use of staplers, retrieval of tissue in a bag, and placement of a chest tube. Fifty consecutive thoracoscopic wedge resections were recorded and assessed blindly and independently by two ... was not significant (P = 0.10, ES = 0.64). The inter-rater reliability was acceptable (Cronbach's alpha 0.71). CONCLUSIONS: This tool for assessing performance in thoracoscopy is reliable and valid. It can provide unbiased feedback to trainees, and can be used to evaluate new teaching curricula, i.e. simulation...

  11. Reliability-based optimization of engineering structures

    Sørensen, John Dalsgaard


    The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost-benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization prob...


    Filho, Geraldo Motta; Galvão, Marcus Vinicius; Monteiro, Martim; Cohen, Marcio; Brandão, Bruno


    The study's objective is to evaluate the characteristics and problems of patients who underwent shoulder arthroplasties between July 2004 and November 2006. Methodology: During the period of the study, 145 shoulder arthroplasties were performed. A prospective protocol was used for every patient; demographic, clinical and surgical procedure data were collected. All gathered data were included in the database. The patients were divided into three major groups: fractures, degenerative diseases and trauma sequels. Information obtained from the database was correlated in order to determine the patients' epidemiologic, injury, and surgical procedure profiles. Results: Of the 145 shoulder arthroplasties performed, 37% presented trauma sequels, 30% degenerative diseases, and 33% proximal humerus fractures. 12% of the cases required total arthroplasties and 88% partial arthroplasties. Five major complications were observed in the early postoperative period. Conclusion: Shoulder arthroplasties have become a common procedure in orthopaedic practice. Surgical records are important for documenting progressive evolution and for enabling future evaluation of clinical outcomes. PMID:26998463

  13. Metrological Reliability of Medical Devices

    Costa Monteiro, E.; Leon, L. F.


    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  14. Reliability Assessment of Concrete Bridges

    Thoft-Christensen, Palle; Middleton, C. R.

    This paper is partly based on research performed for the Highways Agency, London, UK under the project DPU/9/44 "Revision of Bridge Assessment Rules Based on Whole Life Performance: concrete bridges". It contains the details of a methodology which can be used to generate Whole Life (WL) reliability ... profiles. These WL reliability profiles may be used to establish revised rules for concrete bridges. This paper is to some extent based on Thoft-Christensen et al. [1996], Thoft-Christensen et al. [1996] and Thoft-Christensen [1996]....

  15. Reliability Management for Information System

    李睿; 俞涛; 刘明伦


    An integrated intelligent management approach is presented to help organizations manage the many heterogeneous resources in their information systems. A general architecture of management for information system reliability is proposed, and the architecture is described from two aspects: a process model and a hierarchical model. Data mining techniques are used in the data analysis. A data analysis system applicable to real-time data analysis is developed by improved data mining on the critical processes. The framework of the integrated management for information system reliability based on real-time data mining is illustrated, and the development of integrated and intelligent management of information systems is discussed.

  16. Reliability of seizure semiology in patients with 2 seizure foci.

    Rathke, Kevin M; Schäuble, Barbara; Fessler, A James; So, Elson L


    To determine whether seizure semiology is reliable in localizing and distinguishing seizures at 2 independent brain foci in the same patient. Two masked reviewers localized seizures from 2 foci by their clinical semiology and intracranial electroencephalograms (EEGs). Epilepsy monitoring unit of referral comprehensive epilepsy program. Seventeen consecutive patients (51 seizures) with sufficient video and intracranial EEG data were identified by reviewing medical records of 366 patients older than 10 years. The primary outcome measures were interobserver agreement between the 2 masked reviewers; the proportion of seizures localized by semiology; the proportion of localized seizures concordant with intracranial EEG localization; and comparison between concordant and nonconcordant seizures in latency of intracranial EEG seizure spread. Interobserver agreement was 41% (κ score, 0.16). Only 30 of 51 seizures (59%) were localized by seizure semiology. The focus localized by semiology was concordant with the location of intracranial EEG seizure onset in 16 of 30 seizures (53%). No significant difference was observed between concordant and nonconcordant seizures in relation to the speed with which the EEG discharge spread from the location of seizure onset to another lobar region (P = .09, Wilcoxon rank sum test). Clinical seizure semiology is not as useful as intracranial EEG in localizing seizure onset in patients with dual seizure foci.

  17. Reliability and validity of the Spanish version of the modified Yale Preoperative Anxiety Scale.

    Jerez, C; Ullán, A M; Lázaro, J J


    Minimising preoperative stress and increasing child cooperation during induction of anaesthesia are among the most important perioperative objectives. The modified Yale Preoperative Anxiety Scale was developed to evaluate anxiety. The aim of this study was to translate this scale into Spanish and validate the psychometric properties of the Spanish version. The Spanish translation of the scale was performed following the World Health Organisation guidelines. During induction of anaesthesia, 81 children aged 2 to 12 years were recorded. Two observers evaluated the recordings independently. The content validity index of the Spanish version of the modified Yale Preoperative Anxiety Scale was assessed. Weighted kappa was calculated to measure interobserver agreement, and the Pearson correlation between the Induction Compliance Checklist and the modified Yale Preoperative Anxiety Scale was determined. The Spanish version obtained high content validity (0.91 to 0.98). Reliability analysis using weighted kappa statistics revealed that interobserver agreement ranged from 0.54 to 0.75. Concurrent validity was high (r=0.94; P<.001). Validated assessment tools are needed to evaluate interventions to reduce child preoperative anxiety. The Spanish version of the modified Yale Preoperative Anxiety Scale evaluated in this study has shown good psychometric properties of reliability and validity. Copyright © 2015 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
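
    Interobserver agreement of the kind reported above can be computed as sketched below, using scikit-learn's weighted Cohen's kappa and a Pearson correlation for concurrent validity. The observer ratings and the second-instrument scores are invented for illustration only.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Invented ordinal anxiety ratings (1-4) from two observers for 12 children.
observer_a = np.array([1, 2, 2, 3, 4, 1, 2, 3, 3, 4, 2, 1])
observer_b = np.array([1, 2, 3, 3, 4, 1, 2, 2, 3, 4, 2, 2])

# Linearly weighted kappa penalises larger disagreements more than adjacent ones.
kappa_linear = cohen_kappa_score(observer_a, observer_b, weights="linear")

# Invented scores on a second instrument, used here for concurrent validity.
other_scale = np.array([2, 3, 4, 5, 7, 2, 3, 4, 5, 7, 3, 2])
r_concurrent = np.corrcoef(observer_a.astype(float), other_scale.astype(float))[0, 1]

print("weighted kappa:", round(kappa_linear, 2))
print("Pearson r (concurrent validity):", round(r_concurrent, 2))
```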

  18. Human reliability analysis of control room operators

    Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)


    Human reliability is the probability that a person correctly performs a system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field during the last years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now, an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has as its tool the ergonomic work analysis. It focuses on the study of the operator's activities in physical and mental form, considering at the same time the observed characteristics of the operator and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of the operators of industrial plant control rooms, using a framework that includes the approaches used by ATHEANA and THERP and the ergonomic work analysis. (author)

  19. Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.

    Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David


    Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to DSM-5 field-trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved).

  20. Modelling and Simulation of Scraper Reliability for Maintenance

    HUANG Liang-pei; LU Zhong-hai; GONG Zheng-li


    A scraper conveyor is a kind of heavy machinery which continuously transports goods and is widely used in mines, ports and storage enterprises. Since the scraper failure rate directly affects production costs and production capacity, the evaluation and prediction of scraper conveyor reliability are important for these enterprises. In this paper, the reliabilities of different parts are classified and discussed according to their structural characteristics and different failure factors. Based on the component's time-to-failure density function, the reliability model of the scraper chain is constructed to track the age distribution of the part population and the reliability change of the scraper chain. Based on the stress-strength interference model, and considering the decrease of strength due to fatigue failure, a dynamic reliability model of such components as gears and axles is developed to observe the change of part reliability with the service time of the scraper. Finally, a system reliability model of the scraper is established for maintenance purposes, to simulate and calculate the scraper reliability.
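
    The stress-strength interference idea mentioned above has a simple closed form when both stress and strength are modelled as normal variables: R = Φ((μ_strength - μ_stress)/√(σ_strength² + σ_stress²)). The sketch below evaluates this, with a strength mean that decays with service time to mimic fatigue; all parameter values and the linear-decay assumption are illustrative, not taken from the paper.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent normally distributed stress and strength."""
    z = (mu_strength - mu_stress) / math.sqrt(sd_strength**2 + sd_stress**2)
    return normal_cdf(z)

# Illustrative parameters (MPa); strength degrades linearly with service hours.
mu_stress, sd_stress = 300.0, 25.0
sd_strength = 30.0
for hours in (0, 2000, 4000, 6000, 8000):
    mu_strength = 450.0 - 0.01 * hours   # assumed fatigue-driven strength loss
    r = interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress)
    print(f"{hours:>5} h: R = {r:.4f}")
```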

  1. An Alternative to Process Recording

    Baker, Joan; And Others


    Some disadvantages in the use of process recordings as an assessment and teaching tool for evaluating the communication skills of the student in nurse-client interactions are discussed. A more useful alternative process requires actual observation and subsequent participation by the instructor during student-client interviews. (EC)



  3. Space solar array reliability: A study and recommendations

    Brandhorst, Henry W., Jr.; Rodiek, Julie A.


    Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaim's Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.

  4. On reliability analysis of multi-categorical forecasts

    J. Bröcker


    Full Text Available Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is but a necessary condition for reliability. Secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability is analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a Goodness-Of-Fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts for 2 m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
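
    A rank (Talagrand) histogram as discussed above can be computed as in the sketch below, together with a chi-square goodness-of-fit statistic against the uniform expectation. The synthetic ensemble data, the simple tie handling, and the use of a plain chi-square test (rather than the Ignorance-based statistic discussed in the paper) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
n_cases, n_members = 500, 9

# Synthetic "truth" and ensemble forecasts; the ensemble is deliberately under-dispersive.
truth = rng.normal(0.0, 1.0, size=n_cases)
ensemble = rng.normal(0.0, 0.7, size=(n_cases, n_members))

# Rank of the observation within each ensemble (0 .. n_members), then the histogram.
ranks = (ensemble < truth[:, None]).sum(axis=1)
counts = np.bincount(ranks, minlength=n_members + 1)

# Chi-square test of uniformity: expected count per bin is n_cases / (n_members + 1).
expected = n_cases / (n_members + 1)
statistic = ((counts - expected) ** 2 / expected).sum()
p_value = chi2.sf(statistic, df=n_members)

print("rank histogram counts:", counts)
print(f"chi-square = {statistic:.1f}, p = {p_value:.3g}")
```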

  5. Upper Air Observations - Synoptic Code

    National Oceanic and Atmospheric Administration, Department of Commerce — Daily radiosonde and rawinsonde observations at standard and significant levels, recorded in synoptic code. Period of record 1950-1951.

  6. A bayesian belief network for reliability assessment

    Gran, Bjoern Axel; Helminen, Atte


    The research programme at the Halden Project on software assessment is augmented through a joint project with VTT Automation. The objective of this co-operative project is to combine previously presented Bayesian Belief Networks for a software safety standard with BBNs on the reliability estimation of software-based digital systems. The results on applying BBN methodology with a software safety standard are based upon previous research by the Halden Project, while the results on the reliability estimation are based on a Master's Thesis by Helminen. The report should be considered as a progress report in the more long-term activity on the use of BBNs as support for safety assessment of programmable systems. In this report it is discussed how the two approaches can be merged together into one Bayesian Network, and the problems with merging are pinpointed. The report also presents and discusses the approaches applied by the Halden Project and VTT, including the differences in the expert judgement of the parameters used in the Bayesian Network. Finally, the report gives some experimental results based on observations from applying the method for an evaluation of a real, safety-related programmable system that has been developed according to the avionic standard DO-178B. This demonstrates how hard and soft evidence can be combined for a reliability assessment. The use of Bayesian Networks provides a framework combining consistent application of probability calculus with the ability to model complex structures, e.g. standards, as a simple understandable network, where all possible evidence can be introduced into the reliability estimation in a compatible way. (Author)
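
    To give a flavour of how a Bayesian network combines "hard" and "soft" evidence into a reliability estimate, the toy sketch below uses a two-node discrete model: a latent development-quality node with a prior, a conditional probability of passing an independent assessment given quality, and a conditional probability of failure-free operation given quality. All structure and numbers are invented for illustration; this is not the Halden/VTT network.

```python
import numpy as np

# Latent node Q: development quality in {high, medium, low}, with a prior (soft evidence).
p_quality = np.array([0.5, 0.3, 0.2])

# P(assessment passed | Q): likelihood of the observed "hard" evidence for each state.
p_pass_given_q = np.array([0.95, 0.7, 0.3])

# P(failure-free demand | Q): per-demand reliability for each quality state (assumed).
p_ok_given_q = np.array([0.9999, 0.999, 0.99])

# Bayes update of the quality node given that the assessment was passed.
posterior = p_quality * p_pass_given_q
posterior /= posterior.sum()

# Predictive probability of failure-free operation, marginalising over quality.
p_ok_prior = float(p_quality @ p_ok_given_q)
p_ok_posterior = float(posterior @ p_ok_given_q)

print("posterior over quality:", np.round(posterior, 3))
print(f"P(failure-free demand): prior {p_ok_prior:.5f} -> posterior {p_ok_posterior:.5f}")
```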

  7. Reliability of photographic posture analysis of adolescents.

    Hazar, Zeynep; Karabicak, Gul Oznur; Tiftikci, Ugur


    [Purpose] Postural problems of adolescents need to be evaluated accurately because they may lead to greater problems in the musculoskeletal system as they develop. Although photographic posture analysis has been frequently used, more simple and accessible methods are still needed. The purpose of this study was to investigate the inter- and intra-rater reliability of photographic posture analysis using MB-ruler software. [Subjects and Methods] Subjects were 30 adolescents (15 girls and 15 boys, mean age: 16.4±0.4 years, mean height 166.3±6.7 cm, mean weight 63.8±15.1 kg), and photographs of their habitual standing posture were taken in the sagittal plane. For the evaluation of postural angles, reflective markers were placed on anatomical landmarks. For angular measurements, MB-ruler (Markus Bader - MB Software Solutions, triangular screen ruler) was used. Photographic evaluations were performed by two observers, with a repetition after a week. Test-retest and inter-rater reliability were calculated using intraclass correlation coefficients (ICC). [Results] Inter-rater (ICC>0.972) and test-retest (ICC>0.774) reliability were found to be in the range of acceptable to excellent. [Conclusion] Reference angles for postural evaluation were found to be reliable and repeatable. The present method is easy and non-invasive, and it may be utilized by researchers who are in search of an alternative method for photographic postural assessments.

  8. Reliability of quantitative content analyses

    Enschot-van Dijk, R. van


    Reliable coding of stimuli is a daunting task that often yields unsatisfactory results. This paper discusses a case study in which tropes (e.g., metaphors, puns) in TV commercials were analyzed, as well as the extent and location of verbal and visual anchoring (i.e., explanation) of these tropes. After

  9. Photovoltaic performance and reliability workshop

    Kroposki, B


    This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop held at the Sheraton Denver West Hotel on September 4--6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; needed activities to separate variables by testing individual components of PV systems (e.g. cells, modules, batteries, inverters, charge controllers) for individual reliability and then test them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.


    Dr Obe

    Evaluation of the reliability of a primary cell took place in three stages: 192 cells went through a ... CCV - Closed Circuit Voltage, the voltage at the terminals of a battery when it is under an electrical ... Cylindrical spirally wound cells have the ...

  11. Finding Reliable Health Information Online

    NHGRI resource page: Finding Reliable Health Information Online. As Internet users quickly discover, an enormous amount of health information ...

  12. Reliability of subjective wound assessment

    M.C.T. Bloemen; P.P.M. van Zuijlen; E. Middelkoop


    Introduction: Assessment of the take of split-skin graft and the rate of epithelialisation are important parameters in burn surgery. Such parameters are normally estimated by the clinician in a bedside procedure. This study investigates whether this subjective assessment is reliable for graft take a

  13. Fatigue Reliability under Random Loads

    Talreja, R.


    ... with the application of random loads, the initial homogeneous distribution of strength changes to a two-component distribution, reflecting the two-stage fatigue damage. In the crack initiation stage, the strength increases initially and then decreases, while an abrupt decrease of strength is seen in the crack ... propagation stage. The consequences of this behaviour for the fatigue reliability are discussed....

  14. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  15. The Reliability of College Grades

    Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.


    Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…

  16. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  17. Inflection points for network reliability

    Brown, J.I.; Koç, Y.; Kooij, R.E.


    Given a finite, undirected graph G (possibly with multiple edges), we assume that the vertices are operational, but the edges are each independently operational with probability p. The (all-terminal) reliability, Rel(G,p), of G is the probability that the spanning subgraph of operational edges is connected.

  18. Becoming a high reliability organization.

    Christianson, Marlys K; Sutcliffe, Kathleen M; Miller, Melissa A; Iwashyna, Theodore J


    Aircraft carriers, electrical power grids, and wildland firefighting, though seemingly different, are exemplars of high reliability organizations (HROs)--organizations that have the potential for catastrophic failure yet engage in nearly error-free performance. HROs commit to safety at the highest level and adopt a special approach to its pursuit. High reliability organizing has been studied and discussed for some time in other industries and is receiving increasing attention in health care, particularly in high-risk settings like the intensive care unit (ICU). The essence of high reliability organizing is a set of principles that enable organizations to focus attention on emergent problems and to deploy the right set of resources to address those problems. HROs behave in ways that sometimes seem counterintuitive--they do not try to hide failures but rather celebrate them as windows into the health of the system, they seek out problems, they avoid focusing on just one aspect of work and are able to see how all the parts of work fit together, they expect unexpected events and develop the capability to manage them, and they defer decision making to local frontline experts who are empowered to solve problems. Given the complexity of patient care in the ICU, the potential for medical error, and the particular sensitivity of critically ill patients to harm, high reliability organizing principles hold promise for improving ICU patient care.

  19. Azurin for Biomolecular Electronics: a Reliability Study

    Bramanti, Alessandro; Pompa, Pier Paolo; Maruccio, Giuseppe; Calabi, Franco; Arima, Valentina; Cingolani, Roberto; Corni, Stefano; Di Felice, Rosa; De Rienzo, Francesca; Rinaldi, Ross


    The metalloprotein azurin, used in biomolecular electronics, is investigated with respect to its resilience to high electric fields and ambient conditions, which are crucial reliability issues. Concerning the effect of electric fields, two models of different complexity agree indicating an unexpectedly high robustness. Experiments in device-like conditions confirm that no structural modifications occur, according to fluorescence spectra, even after a 40-min exposure to tens of MV/m. Ageing is then investigated experimentally, at ambient conditions and without field, over several days. Only a small conformational rearrangement is observed in the first tens of hours, followed by an equilibrium state.

  20. Observer's observables. Residual diffeomorphisms

    Duch, Paweł; Świeżewski, Jedrzej


    We investigate the fate of diffeomorphisms when the radial gauge is imposed in canonical general relativity. As shown elsewhere, the radial gauge is closely related to the observer's observables. These observables are invariant under a large subgroup of diffeomorphisms which results in their usefulness for canonical general relativity. There are, however, some diffeomorphisms, called residual diffeomorphisms, which might be "observed" by the observer as they do not preserve her observables. The present paper is devoted to the analysis of these diffeomorphisms in the case of the spatial and spacetime radial gauges. Although the residual diffeomorphisms do not form a subgroup of all diffeomorphisms, we show that their induced action in the phase space does form a group. We find the generators of the induced transformations and compute the structure functions of the algebras they form. The obtained algebras are deformations of the algebra of the Euclidean group and the algebra of the Poincaré group in the spat...

  1. Inter-rater reliability of nursing home quality indicators in the U.S

    Roy Jason


    Full Text Available Abstract Background: In the US, Quality Indicators (QI's) profiling and comparing the performance of hospitals, health plans, nursing homes and physicians are routinely published for consumer review. We report the results of the largest study of inter-rater reliability done on the nursing home assessments which generate the data used to derive publicly reported nursing home quality indicators. Methods: We sampled nursing homes in 6 states, selecting up to 30 residents per facility who were observed and assessed by research nurses on 100 clinical assessment elements contained in the Minimum Data Set (MDS), and compared these with the most recent assessment in the record done by facility nurses. Kappa statistics were generated for all data items and derived for 22 QI's over the entire sample and for each facility. Finally, facilities with many QI's with poor Kappa levels were compared to those with many QI's with excellent Kappa levels on selected characteristics. Results: A total of 462 facilities in 6 states were approached and 219 agreed to participate, yielding a response rate of 47.4%. A total of 5758 residents were included in the inter-rater reliability analyses, around 27.5 per facility. Patients resembled the traditional nursing home resident; only 43.9% were continent of urine and only 25.2% were rated as likely to be discharged within the next 30 days. Results of resident-level comparative analyses reveal high inter-rater reliability levels (most items > .75). Using the research nurses as the "gold standard", we compared composite quality indicators based on their ratings with those based on facility nurses. All but two QI's have adequate Kappa levels and 4 QI's have average Kappa values in excess of .80. We found that 16% of participating facilities performed poorly (Kappa .75 on 12 or more QI's. No facility characteristics were related to the reliability of the data on which QIs are based. Conclusion: While a few QI's being used for public reporting

  2. Power Quality and Reliability Project

    Attia, John O.


    One area where universities and industry can link is in the area of power systems reliability and quality - key concepts in the commercial, industrial and public sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C), a small disadvantaged business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated Power/Systems Reliability and Curriculum Enhancement Program. The objectives of the first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, (c) to provide students with NASA/EP&C shadowing and internship experience. In this work, a course titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters. About thirty-seven students have benefited directly from this course. A laboratory accompanying the course was also developed. Four facilities at Prairie View A&M University were surveyed. Some tests that were performed are (i) earth-ground testing, (ii) voltage, amperage and harmonics of various panels in the buildings, (iii) checking the wire sizes to see if they were the right size for the load that they were carrying, (iv) vibration tests to test the status of the engines or chillers and water pumps, (v) infrared testing to test for arcing or misfiring of electrical or mechanical systems.

  3. Reliability of Arctic offshore installations

    Bercha, F.G. [Bercha Group, Calgary, AB (Canada); Gudmestad, O.T. [Stavanger Univ., Stavanger (Norway)]|[Statoil, Stavanger (Norway)]|[Norwegian Univ. of Technology, Stavanger (Norway); Foschi, R. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Civil Engineering; Sliggers, F. [Shell International Exploration and Production, Rijswijk (Netherlands); Nikitina, N. [VNIIG, St. Petersburg (Russian Federation); Nevel, D.


    Life threatening and fatal failures of offshore structures can be attributed to a broad range of causes such as fires and explosions, buoyancy losses, and structural overloads. This paper addressed the different severities of failure types, categorized as catastrophic failure, local failure or serviceability failure. Offshore tragedies were also highlighted, namely the failures of P-36, the Ocean Ranger, the Piper Alpha, and the Alexander Kieland which all resulted in losses of human life. P-36 and the Ocean Ranger both failed ultimately due to a loss of buoyancy. The Piper Alpha was destroyed by a natural gas fire, while the Alexander Kieland failed due to fatigue induced structural failure. The mode of failure was described as being the specific way in which a failure occurs from a given cause. Current reliability measures in the context of offshore installations only consider the limited number of causes such as environmental loads. However, it was emphasized that a realistic value of the catastrophic failure probability should consider all credible causes of failure. This paper presented a general method for evaluating all credible causes of failure of an installation. The approach to calculating integrated reliability involves the use of network methods such as fault trees to combine the probabilities of all factors that can cause a catastrophic failure, as well as those which can cause a local failure with the potential to escalate to a catastrophic failure. This paper also proposed a protocol for setting credible reliability targets such as the consideration of life safety targets and escape, evacuation, and rescue (EER) success probabilities. A set of realistic reliability targets for both catastrophic and local failures for representative safety and consequence categories associated with offshore installations was also presented. The reliability targets were expressed as maximum average annual failure probabilities. The method for converting these annual

  4. Intratester and intertester reliability of the Cybex electronic digital inclinometer (EDI-320) for measurement of active neck flexion and extension in healthy subjects.

    Tousignant, M; Boucher, N; Bourbonnais, J; Gravelle, T; Quesnel, M; Brosseau, L


    This study examined the intratester and intertester reliability of the electronic digital goniometer EDI-320 for the measurement of active neck flexion and extension in healthy subjects. In the context of evidence-based practice, the EDI-320 instrument has the potential to improve patient assessment, provide a clearer picture of patient progress, and confirm the effectiveness of physiotherapy interventions. However, the psychometric properties of the EDI-320 have not yet been documented for cervical spine range of motion. Forty-four individuals with no known history of cervical disorder within the three months prior to the testing voluntarily consented to participate in this study. Repeated measurements with the EDI-320 were taken by two trained testers (TH1 and TH2) and data were recorded by two separate observers. Subjects performed a standardized warm-up. Testers were required to repeat palpation of bony landmarks prior to each trial. Measurements were taken at the end-range of active cervical flexion and extension for each subject. Both testers measured each subject twice. The intraclass correlation coefficients (ICC) were derived from a one-way ANOVA for intratester reliability and a two-way ANOVA for intertester reliability. Paired t-tests were then applied to check for systematic error. Moderate intratester reliability was found for both testers for flexion (TH1: ICC=0.77; 95% CI: 0.62-0.87; TH2: ICC=0.77; 95% CI: 0.58-0.87). As for extension, high intratester reliability was found for TH1 (ICC=0.79; 95% CI: 0.65-0.88) and moderate for TH2 (ICC=0.83; 95% CI: 0.63-0.92). Intertester reliability results showed moderate reliability for both flexion and extension (ICC=0.66; 95% CI: 0.24-0.84) on the first trial. On the second trial, reliability was moderate for flexion (ICC=0.73; 95% CI: 0.53-0.85) and high for extension (ICC=0.80; 95% CI: 0.64-0.89). The t-test analysis revealed the inclusion of systematic error by Tester 2 for intratester reliability

  5. The reliability of assessing rotation of teeth on photographed study casts

    Vermeulen, F.M.J.; Aartman, I.H.A.; Kuitert, R.; Zentner, A.


    Objective: To examine the intra- and interexaminer reliability of assessing rotation of teeth on photographed study casts. In addition, the reliability parameters of two examiners scoring in mutual consultation were compared with those of a single examiner. Materials and Methods: Sta

  6. Climate Record Books

    National Oceanic and Atmospheric Administration, Department of Commerce — Climate Record Books contain daily, monthly, seasonal, and annual averages, extremes, or occurrences. Most data are sequential by period of record 1871-1910,...

  7. Iraq Radiosonde Launch Records

    National Oceanic and Atmospheric Administration, Department of Commerce — Iraqi upper air records loaned to NCDC from the Air Force 14th Weather Squadron. Scanned notebooks containing upper air radiosonde launch records and data. Launches...

  8. Daily Weather Records

    National Oceanic and Atmospheric Administration, Department of Commerce — These daily weather records were compiled from a subset of stations in the Global Historical Climatological Network (GHCN)-Daily dataset. A weather record is...

  9. Transient Voltage Recorder

    Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor)


    A voltage transient recorder can detect lightning-induced transient voltages. The recorder detects a lightning-induced transient voltage and adjusts its input amplifiers to accurately record transient voltage magnitudes. The recorder stores voltage data from numerous monitored channels or devices. The data are time-stamped and can be output in real time or stored for later retrieval. The transient recorder, in one embodiment, includes an analog-to-digital converter and a voltage threshold detector. When an input voltage exceeds a predetermined voltage threshold, the recorder stores the incoming voltage magnitude and time of arrival. The recorder also determines whether its input amplifier circuits clip the incoming signal or whether the incoming signal is too low. If the input data is clipped or too low, the recorder adjusts the gain of the amplifier circuits to accurately acquire subsequent components of the lightning-induced transients.
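
    A minimal sketch of the capture logic just described is given below: threshold triggering, time stamping, and automatic gain adjustment when a sample is clipped or too small. The thresholds, gain steps, and ADC range are assumed values for illustration and do not reproduce the patented design.

        # Sketch of threshold-triggered capture with automatic gain adjustment.
        # All numeric values are assumptions, not the patented design.
        import time

        FULL_SCALE = 1.0          # assumed ADC full-scale input after gain (volts)
        TRIGGER_THRESHOLD = 0.1   # assumed trigger level after gain (volts)
        LOW_FRACTION = 0.05       # below this fraction of full scale, raise the gain
        GAIN_STEPS = [0.1, 1.0, 10.0, 100.0]

        class TransientChannel:
            def __init__(self):
                self.gain_index = 1
                self.records = []  # (timestamp, input-referred voltage)

            def sample(self, v_in):
                gain = GAIN_STEPS[self.gain_index]
                # The ADC clips anything beyond its full-scale range.
                v_adc = max(-FULL_SCALE, min(FULL_SCALE, v_in * gain))
                if abs(v_adc) >= TRIGGER_THRESHOLD:
                    self.records.append((time.time(), v_adc / gain))  # time-stamped magnitude
                # Adjust gain so subsequent components are acquired accurately.
                if abs(v_adc) >= FULL_SCALE and self.gain_index > 0:
                    self.gain_index -= 1   # signal clipped: reduce gain
                elif abs(v_adc) < LOW_FRACTION * FULL_SCALE and self.gain_index < len(GAIN_STEPS) - 1:
                    self.gain_index += 1   # signal too low: raise gain

        channel = TransientChannel()
        for v in [0.02, 0.9, 3.5, 1.2, 0.4]:   # made-up incoming voltages
            channel.sample(v)
        print(channel.records)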

  10. Exact reliability quantification of highly reliable systems with maintenance

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)


    When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways: for example, an error may be made when subtracting two numbers that are very close to each other, or when summing many numbers of very different magnitudes. The basic objective of this paper is to find a procedure that eliminates the errors made by a PC when calculations close to an error limit are executed. A highly reliable system is represented by a directed acyclic graph composed of terminal nodes (i.e. highly reliable input elements), internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is implemented in MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires the summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for exact summation of such numbers, is designed in the paper. The summation procedure exploits a special number system with base 2^32. The computational efficiency of the new computing methodology is compared with that of advanced simulation software. Various calculations on systems from the references are performed to emphasize the merits of the methodology.
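
    The rounding problem described above can be reproduced in a few lines. The sketch below does not implement the paper's base-2^32 number system; exact rational arithmetic (Python's fractions module) and compensated summation stand in for it, and the unavailability contributions are made-up values chosen only to expose the loss of very small terms.

        # Sketch: naive floating-point accumulation loses very small terms,
        # while exact arithmetic keeps them. Fraction and math.fsum substitute
        # here for the paper's base-2^32 scheme.
        from fractions import Fraction
        import math

        terms = [1e-3] + [1e-20] * 1000   # made-up unavailability contributions

        naive = 0.0
        for t in terms:
            naive += t                    # each 1e-20 vanishes against 1e-3

        exact = float(sum(Fraction(t) for t in terms))
        compensated = math.fsum(terms)    # correctly rounded sum of the terms

        print(naive)        # 0.001
        print(exact)        # ~0.00100000000000001
        print(compensated)  # ~0.00100000000000001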

  11. Interpreting land records

    Wilson, Donald A


    Base retracement on solid research and historically accurate interpretation. Interpreting Land Records is the industry's most complete guide to researching and understanding the historical records germane to land surveying. Coverage includes boundary retracement and the primary considerations during new boundary establishment, as well as an introduction to historical records and guidance on effective research and interpretation. This new edition includes a new chapter titled "Researching Land Records," advice on overcoming common research problems, and insight into alternative resources wh

  12. Record values from a family of J-shaped distributions

    Ahmad A. Zghoul


    A family of J-shaped distributions has applications in life-testing models. In this paper we study record values from this family of distributions. Based on lower records, recurrence relations and bounds, as well as expressions for moments and product moments of record values, are obtained. The maximum likelihood estimator of the shape parameter is derived and shown to be a consistent, sufficient, complete, and UMVU estimator. In addition, an application in reliability is given.
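
    As a small illustration of the lower records mentioned above, the sketch below extracts lower record values from an i.i.d. sample. The sampling distribution used here is only a convenient placeholder on (0, 1), not the J-shaped family studied in the paper.

        # Sketch: extracting lower record values (each new minimum) from a
        # simulated i.i.d. sample. The draw below is a placeholder, not the
        # paper's J-shaped family.
        import random

        def lower_records(sample):
            """Return the successive lower record values of a sequence."""
            records, current_min = [], float("inf")
            for x in sample:
                if x < current_min:
                    records.append(x)
                    current_min = x
            return records

        random.seed(1)
        sample = [random.random() ** 2 for _ in range(1000)]
        print(lower_records(sample))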

  13. Managing electronic records

    McLeod, Julie


    For records management courses, this book covers the theory and practice of managing electronic records as business and information assets. It focuses on the strategies, systems and procedures necessary to ensure that electronic records are appropriately created, captured, organized and retained over time to meet business and legal requirements.

  14. Recorder Resources, Part 2

    Marshall, Herbert D.; VanHaaren, Peg


    This article provides teaching tips and materials related to recorder lessons. Teaching Recorder in the Music Classroom, by Fred Kersten, compiles more current recorder information than any other resource. In planning instruction, the major determining factor seems to be Rote or Note. This allows instructors to take familiar repertoire that…

  15. Public Records 1995.

    Pritchard-Schoch, Teresa


    Examines developments among public record information providers, including a shift from file acquisition to entire company acquisition. Highlights include a table of remote access to public records by state; pricing information; privacy issues; and information about the three main companies offering access to public records: LEXIS, CDB Infotek,…

  16. The development of a reliable amateur boxing performance analysis template.

    Thomson, Edward; Lamb, Kevin; Nicholas, Ceri


    The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and to assess its reliability using analysts with varying experience of the sport and of performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and by calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact agreement or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (an amateur boxer and an experienced analyst), with the proportion of agreement ranging from 33% to 100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
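
    A minimal sketch of the agreement calculation used above is given below: the proportion of paired observations agreeing exactly, or within the reference value of ±1. The two analysts' counts are made-up numbers for illustration, not study data.

        # Sketch: proportion of agreement between two observers, exact and
        # within a reference value of +/-1. Counts are made-up examples.

        def proportion_agreement(obs_a, obs_b, tolerance=0):
            agree = sum(1 for a, b in zip(obs_a, obs_b) if abs(a - b) <= tolerance)
            return 100.0 * agree / len(obs_a)

        analyst_1 = [12, 7, 4, 9, 15, 3]   # e.g. offensive actions per round
        analyst_2 = [11, 7, 5, 9, 14, 3]

        print(f"Exact agreement: {proportion_agreement(analyst_1, analyst_2):.0f}%")
        print(f"Within +/-1:     {proportion_agreement(analyst_1, analyst_2, tolerance=1):.0f}%")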

  17. Water Current Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Tidal, river and ocean current observations collected by the U.S. Coast and Geodetic Survey. Period of record is late 1800s to mid-1980s.

  18. NWS Corrections to Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  19. Monthly Weather Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Surface Weather Observation 1001 Forms is a set of historical manuscript records for the period 1893-1948. The collection includes two very similar form types: Form...

  20. Test-retest reliability of a single-channel, wireless EEG system.

    Rogers, Jeffrey M; Johnstone, Stuart J; Aminov, Anna; Donnelly, James; Wilson, Peter H


    Recording systems to acquire electroencephalogram (EEG) data are traditionally lab-based. However, there are shortcomings to this method, and the ease of use and portability of emerging wireless EEG technologies offer a promising alternative. A previous validity study demonstrated that data derived from a single-channel, wireless system (NeuroSky ThinkGear, San Jose, California) are comparable to EEG recorded from conventional lab-based equipment. The current study evaluated the reliability of this portable system using test-retest and reliable change analyses. Relative power (RP) of the delta, theta, alpha, and beta frequency bands was derived from EEG data obtained from a single electrode over FP1 in 19 healthy youth (10-17 years old), 21 healthy adults (18-28 years old), and 19 healthy older adults (55-79 years old), during eyes-open, eyes-closed, auditory oddball, and visual n-back conditions. Intra-class correlations (ICCs) and Coefficients of Repeatability (CRs) were calculated from RP data re-collected one day, one week, and one month later. Participants' levels of mood and attention were consistent across sessions. Eyes-closed resting EEG measurements using the portable device were reproducible (ICCs 0.76-0.85) at short and longer retest intervals in all three participant age groups. While still of at least fair reliability (ICCs 0.57-0.85), EEG obtained during eyes-open paradigms was less stable, and any change observed over time during these testing conditions can be interpreted using the CR values provided. Combined with existing validity data, these findings encourage application of the portable EEG system for the study of brain function.
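
    The relative power measure described above can be sketched as band power from a Welch periodogram divided by the total power across the four bands. The signal below is synthetic noise rather than recorded EEG, the sampling rate is an assumed value, and the exact processing pipeline of the ThinkGear system is not reproduced.

        # Sketch: relative power of delta/theta/alpha/beta bands from a Welch
        # power spectral density. Synthetic signal and assumed sampling rate;
        # not the device's own processing pipeline.
        import numpy as np
        from scipy.signal import welch

        BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def relative_power(signal, fs):
            freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
            band_power = {}
            for name, (lo, hi) in BANDS.items():
                mask = (freqs >= lo) & (freqs < hi)
                band_power[name] = psd[mask].sum()   # constant bin width cancels in the ratio
            total = sum(band_power.values())
            return {name: p / total for name, p in band_power.items()}

        rng = np.random.default_rng(0)
        fs = 256                                   # assumed sampling rate (Hz)
        fake_eeg = rng.standard_normal(30 * fs)    # 30 s of synthetic data
        print(relative_power(fake_eeg, fs))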