Operational Procedures for Optimized Reliability and Component Life Estimator (ORACLE)
1975-12-01
TOTAL FAILURE RATE AND TTF. Figure 1. Block diagram of the reliability prediction program routines (cross-hatched boxes), the required inputs and the ... in some significant way, describe and/or identify the particular piece of equipment associated with the parts or module. The maintenance of a ...
Procedures for reliable estimation of viral fitness from time-series data
Bonhoeffer, S.; Barbour, A.D.; Boer, R.J. de
2002-01-01
In order to develop a better understanding of the evolutionary dynamics of HIV drug resistance, it is necessary to quantify accurately the in vivo fitness costs of resistance mutations. However, the reliable estimation of such fitness costs is riddled with both theoretical and experimental difficulties.
Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 2
1977-02-01
the number of operating cycles. For special cases (such as engine valves, injectors, thrust chambers) where design approaches were varied, the design ... Heaters, External HET Thruster 56. Injector (including Trim Orifice) 56.1 Plugging IMP 56.2 Fracture IMF 56.3 Injector Seal Leak ISL 56.4 Injector ... Investigation of the Feasibility of the Delphi Technique for Estimating Risk Analysis Parameters, University of Southern California
Reliability based fatigue design and maintenance procedures
Hanagud, S.
1977-01-01
A stochastic model has been developed to describe a probability for fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the desired probability of a crack of certain length at a given location after a certain number of cycles or time. Quantitative estimation of the developed model was also discussed. Application of the model to develop a procedure for reliability-based cost-effective fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
Reliability of procedures used for scaling loudness
Jesteadt, Walt; Joshi, Suyash Narendra
2013-01-01
(ME, MP, CLS; MP, ME, CLS; CLS, ME, MP; CLS, MP, ME), and the order was reversed on the second visit. This design made it possible to compare the reliability of estimates of the slope of the loudness function across procedures in the same listeners. The ME data were well fitted by an inflected exponential (INEX) function, but a modified power law was used to obtain slope estimates for both ME and MP. ME and CLS were more reliable than MP. CLS results were consistent across groups, but ME and MP results differed across groups in a way that suggested influence of experience with CLS. Although CLS results were the most reproducible, they do not provide direct information about the slope of the loudness function because the numbers assigned to CLS categories are arbitrary. This problem can be corrected by using data from the other procedures to assign numbers that are proportional to loudness ...
Estimation of Bridge Reliability Distributions
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology; therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
Reliability estimation using kriging metamodel
Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
In this study, a new method for reliability estimation using a kriging metamodel is proposed. The kriging metamodel can be determined by an appropriate sampling range and number of sampling points because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. The first kriging metamodel is built from widely ranged sampling points, and the Advanced First Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. Then, a second kriging metamodel is constructed using additional sampling points within an updated sampling range, and Monte Carlo Simulation (MCS) is applied to it to evaluate the reliability. The proposed method is applied to numerical examples, and the results are almost equal to the reference reliability.
DISTRIBUTED MONITORING SYSTEM RELIABILITY ESTIMATION WITH CONSIDERATION OF STATISTICAL UNCERTAINTY
Yi Pengxing; Yang Shuzi; Du Runsheng; Wu Bo; Liu Shiyuan
2005-01-01
Taking into account the whole system structure and the uncertainty in component reliability estimation, a system reliability estimation method based on probability and statistical theory for distributed monitoring systems is presented. The variance and confidence intervals of the system reliability estimate are obtained by expressing system reliability as a linear sum of products of higher-order moments of component reliability estimates when the number of component or system survivals obeys a binomial distribution. The characteristic function of the binomial distribution is used to determine the moments of the component reliability estimates, and a symbolic matrix which facilitates the search for explicit system reliability estimates is proposed. Furthermore, an application case is used to illustrate the procedure, and with the help of this example, issues such as the applicability of the estimation model and measures to improve the system reliability of monitoring systems are discussed.
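The moment-propagation idea above can be illustrated with a minimal sketch: a series system whose component reliabilities are estimated from binomial test data, with the variance of the product obtained from the component second moments (the test counts below are invented for illustration; this is not the paper's symbolic-matrix method):

```python
import math

def series_system_reliability(data, z=1.96):
    """Estimate series-system reliability from per-component binomial
    test data (successes, trials) and give an approximate normal CI.
    Independence of component estimates is assumed."""
    r_hat = 1.0
    for s, n in data:
        r_hat *= s / n
    # Exact variance of a product of independent estimators:
    # Var = prod(p_i^2 + v_i) - prod(p_i^2), with v_i = p(1-p)/n.
    prod_second_moment = 1.0
    prod_mean_sq = 1.0
    for s, n in data:
        p = s / n
        v = p * (1 - p) / n
        prod_second_moment *= p * p + v
        prod_mean_sq *= p * p
    se = math.sqrt(prod_second_moment - prod_mean_sq)
    return r_hat, (max(0.0, r_hat - z * se), min(1.0, r_hat + z * se))

r, ci = series_system_reliability([(98, 100), (195, 200), (49, 50)])
```

The point estimate is simply the product of the component estimates; the interval reflects only sampling noise, not model error.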
HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES
Ronald L. Boring; David I. Gertman; Katya Le Blanc
2011-09-01
This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.
Accident Sequence Evaluation Program: Human reliability analysis procedure
Swain, A.D.
1987-02-01
This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples) and more detailed definitions of some of the terms. 42 refs.
Bayesian Missile System Reliability from Point Estimates
2014-10-28
Report, October 2014. Uses the Maximum Entropy Principle (MEP) to convert point estimates to probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates this approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.
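The MEP step can be sketched concretely: on [0, 1] the maximum-entropy density with a fixed mean m is f(p) ∝ exp(θp), so converting a reliability point estimate into a prior reduces to solving for θ. A minimal sketch (the bracketing range and the 0.9 example value are assumptions, not taken from the report):

```python
import math

def mep_mean(theta):
    """Mean of the max-entropy density f(p) ∝ exp(theta * p) on [0, 1]."""
    if abs(theta) < 1e-9:
        return 0.5  # theta -> 0 limit is the uniform distribution
    return 1.0 / (1.0 - math.exp(-theta)) - 1.0 / theta

def mep_prior_parameter(m, lo=-50.0, hi=50.0, tol=1e-10):
    """Solve mep_mean(theta) = m by bisection. The resulting density
    f(p) = theta * exp(theta * p) / (exp(theta) - 1) is the maximum-
    entropy prior on [0, 1] with mean m. The bracket [-50, 50] covers
    means roughly in (0.02, 0.98)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mep_mean(mid) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = mep_prior_parameter(0.9)  # prior matching a 0.9 point estimate
```

The solved density can then serve as the prior in a Bayesian reliability update.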
Reliability Estimates for Power Supplies
Lee C. Cadwallader; Peter I. Petersen
2005-09-01
Failure rates for large power supplies at a fusion facility are critical knowledge needed to estimate availability of the facility or to set priorities for repairs and spare components. A study of the "failure to operate on demand" and "failure to continue to operate" failure rates has been performed for the large power supplies at DIII-D, which provide power to the magnet coils, the neutral beam injectors, the electron cyclotron heating systems, and the fast wave systems. When one of the power supplies fails to operate, the research program has to be either temporarily changed or halted. If one of the power supplies for the toroidal or ohmic heating coils fails, operations have to be suspended or the research is continued at de-rated parameters until a repair is completed. If one of the power supplies used in the auxiliary plasma heating systems fails, the research is often temporarily changed until a repair is completed. The power supplies are operated remotely and repairs are only performed when the power supplies are off line, so that failure of a power supply does not cause any risk to personnel. The DIII-D Trouble Report database was used to determine the number of power supply faults (over 1,700 reports), and tokamak annual operations data supplied the number of shots, operating times, and power supply usage for the DIII-D operating campaigns between mid-1987 and 2004. Where possible, these power supply failure rates from DIII-D will be compared to similar work that has been performed for the Joint European Torus equipment. These independent data sets support validation of the fusion-specific failure rate values.
A new simulation estimator of system reliability
Sheldon M. Ross
1994-01-01
A basic identity is proven and applied to obtain new simulation estimators concerning (a) system reliability and (b) a multi-valued system. We show that the variance of this new estimator is often of the order α² when the usual raw estimator has variance of the order α, where α is small. We also indicate how this estimator can be combined with the standard variance reduction techniques of antithetic variables, stratified sampling and importance sampling.
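One of the variance-reduction techniques named above, antithetic variables, can be sketched for a simple series system; the component probabilities below are invented for illustration, and this is not the paper's identity-based estimator:

```python
import random

def system_works(u, p):
    """Series system: works iff every component works.
    Component i works when its uniform draw falls below p[i]."""
    return all(ui < pi for ui, pi in zip(u, p))

def antithetic_reliability(p, n_pairs=50_000, seed=1):
    """Antithetic-variables estimator of system reliability: each pair
    of runs reuses the uniforms as (U, 1 - U), which are negatively
    correlated and cancel part of the sampling noise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = [rng.random() for _ in p]
        y1 = system_works(u, p)
        y2 = system_works([1.0 - ui for ui in u], p)
        total += 0.5 * (y1 + y2)
    return total / n_pairs

est = antithetic_reliability([0.95, 0.9, 0.99])
```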
Mission Reliability Estimation for Repairable Robot Teams
Stephen B. Stancliff
2008-11-01
Many of the most promising applications for mobile robots require very high reliability. The current generation of mobile robots is, for the most part, highly unreliable. The few mobile robots that currently demonstrate high reliability achieve this reliability at a high financial cost. In order for mobile robots to be more widely used, it will be necessary to find ways to provide high mission reliability at lower cost. Comparing alternative design paradigms in a principled way requires methods for comparing the reliability of different robot and robot team configurations. In this paper, we present the first principled quantitative method for performing mission reliability estimation for mobile robot teams. We also apply this method to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Using conservative estimates of the cost-reliability relationship, our results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares.
Robust estimation procedure in panel data model
Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)
2014-06-19
Panel data modeling has received great attention in econometric research recently, due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
The reliability of DSM impact estimates
Vine, E.L. [Lawrence Berkeley Lab., CA (United States); Kushler, M.G. [Michigan Public Service Commission, Lansing, MI (United States)
1995-05-01
Demand-side management (DSM) critics continue to question the reliability of DSM program savings, and therefore, the need for funding such programs. In this paper, the authors examine the issues underlying the discussion of reliability of DSM program savings (e.g., bias and precision) and compare the levels of precision of DSM impact estimates for three utilities. Overall, the precision results from all three companies appear quite similar and, for the most part, demonstrate reasonably good precision levels around DSM savings estimates. They conclude by recommending activities for program managers and evaluators for increasing the understanding of the factors leading to DSM uncertainty and for reducing the level of DSM uncertainty.
Adaptive Response Surface Techniques in Reliability Estimation
Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard
1993-01-01
Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first-order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second-order polynomial surfaces deter...
Reliability estimates for flawed mortar projectile bodies
Cordes, J.A. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)], E-mail: jennifer.cordes@us.army.mil; Thomas, J.; Wong, R.S.; Carlucci, D. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)
2009-12-15
The Army routinely screens mortar projectiles for defects in safety-critical parts. In 2003, several lots of mortar projectiles had a relatively high defect rate, 0.24%. Before releasing the projectiles, the Army reevaluated the chance of a safety-critical failure. Limit state functions and Monte Carlo simulations were used to estimate reliability. Measured distributions of wall thickness, defect rate, material strength, and applied loads were used with calculated stresses to estimate the probability of failure. The results predicted less than one failure in one million firings. As of 2008, the mortar projectiles have been used without any safety-critical incident.
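The limit-state Monte Carlo approach described above can be sketched compactly: sample the random inputs, evaluate g = strength − stress, and count failures. Every distribution and the section model below are illustrative placeholders, not the Army's measured data:

```python
import random

def limit_state_pof(n=200_000, seed=42):
    """Crude Monte Carlo on a limit state g = strength - stress.
    The distributions and the simplistic section model below are
    invented for illustration only."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        thickness = rng.gauss(5.0, 0.1)      # wall thickness, mm
        strength = rng.gauss(900.0, 40.0)    # material strength, MPa
        load = rng.gauss(12_000.0, 800.0)    # applied load, N
        stress = load / (thickness * 4.0)    # toy section model -> MPa
        if strength - stress < 0.0:
            failures += 1
    return failures / n

pof = limit_state_pof()
```

With these illustrative numbers the safety margin is large, so the crude estimate is near zero; rare-event cases like the one in the abstract typically need importance sampling or FORM rather than raw counting.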
Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.
Raykov, Tenko
2002-01-01
Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines widespread point estimation of composite reliability in behavioral scale…
Estimating a municipal water supply reliability
O.G. Okeola
2015-12-01
The availability and adequacy of water in a river basin determine the design of water resources projects such as water supply. There is a further need to regularly appraise the availability of such a resource for a municipality at a distant future, to help in articulating a contingency plan to handle its vulnerability. This paper attempts to empirically determine the reliability of the water resource for a municipal water supply. An approach was first developed to estimate the water demand of a municipality lacking socioeconometric data, using a purpose-specific model. A hydrological assessment of the River Oyun basin was carried out using a Markov model and sequent peak analysis to determine the extent of reliability for future demand. The two models were then applied to the Offa municipality in Kwara State, Nigeria. The findings revealed the reliability and adequacy of the resource up to the year 2020. The need to start exploring a well-coordinated conjunctive use of resources is recommended. The study can serve as an organized baseline for future work that considers the physiographic characteristics of the basin and climatic dynamics. The findings can be a vital input into the demand management process for long-term sustainable water supply of the town, and by extension of urban townships with similar characteristics.
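The sequent peak analysis mentioned above has a compact algorithmic core: the required storage is the largest cumulative deficit of demand over inflow. A minimal sketch with invented monthly figures (not River Oyun data):

```python
def sequent_peak_storage(inflows, demand):
    """Sequent peak analysis: the required reservoir storage is the
    largest cumulative deficit K_t = max(0, K_{t-1} + D_t - Q_t),
    where D_t is the demand and Q_t the inflow in period t."""
    k, k_max = 0.0, 0.0
    for q in inflows:
        k = max(0.0, k + demand - q)
        k_max = max(k_max, k)
    return k_max

# Monthly inflows against a constant demand of 50 units/month (illustrative).
storage = sequent_peak_storage(
    [80, 70, 60, 40, 30, 20, 25, 35, 55, 75, 90, 100], 50)
```

If the computed storage exceeds what the basin can physically provide, the demand is unreliable for that planning horizon.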
Lower bounds to the reliabilities of factor score estimators
Hessen, D.J.
2017-01-01
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators and Bartlett's factor score estimators ...
Estimation of the Reliability of Distributed Applications
Marian Pompiliu CRISTESCU; Laurentiu CIOVICA
2010-01-01
In this paper, reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the requested level of reliability more difficult to achieve. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for complex systems. The ...
Reliability Testing Procedure for MEMS IMUs Applied to Vibrating Environments
Aurelio Somà
2010-01-01
The diffusion of micro-electro-mechanical systems (MEMS) technology applied to navigation systems is rapidly increasing, but currently there is a lack of knowledge about the reliability of this typology of devices, representing a serious limitation to their use in aerospace vehicles and other fields with medium and high requirements. In this paper, a reliability testing procedure for inertial sensors and inertial measurement units (IMUs) based on MEMS for applications in vibrating environments is presented. The sensing performances were evaluated in terms of signal accuracy, systematic errors, and accidental errors; the actual working conditions were simulated by means of an accelerated dynamic excitation. A commercial MEMS-based IMU was analyzed to validate the proposed procedure. The main weaknesses of the system were localized, providing important information about the relationship between the reliability levels of the system and of the individual components.
An Integrated Procedure for Bayesian Reliability Inference Using MCMC
Jing Lin
2014-01-01
The recent proliferation of Markov chain Monte Carlo (MCMC) approaches has led to the use of Bayesian inference in a wide variety of fields. To facilitate MCMC applications, this paper proposes an integrated procedure for Bayesian inference using MCMC methods, from a reliability perspective. The goal is to build a framework for related academic research and engineering applications to implement modern computational-based Bayesian approaches, especially for reliability inferences. The procedure developed here is a continuous improvement process with four stages (Plan, Do, Study, and Action) and 11 steps: (1) data preparation; (2) prior inspection and integration; (3) prior selection; (4) model selection; (5) posterior sampling; (6) MCMC convergence diagnostics; (7) Monte Carlo error diagnostics; (8) model improvement; (9) model comparison; (10) inference making; (11) data updating and inference improvement. The paper illustrates the proposed procedure using a case study.
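Step (5), posterior sampling, can be sketched with a minimal random-walk Metropolis sampler for an exponential failure rate under a Gamma prior; the lifetimes and hyperparameters below are invented, and the conjugate closed form is used only as a sanity check on the chain:

```python
import math
import random

def metropolis_failure_rate(times, a=1.0, b=1.0, n_iter=20_000, seed=3):
    """Random-walk Metropolis sampling of the failure rate lambda for
    exponential lifetimes with a Gamma(a, b) prior. This conjugate case
    has a closed-form Gamma(a + n, b + sum t) posterior, which makes it
    a convenient check on the MCMC output."""
    n, t_sum = len(times), sum(times)

    def log_post(lam):
        if lam <= 0.0:
            return float("-inf")
        return (a - 1 + n) * math.log(lam) - lam * (b + t_sum)

    rng = random.Random(seed)
    lam, samples = 1.0, []
    for _ in range(n_iter):
        prop = lam + rng.gauss(0.0, 0.2)      # random-walk proposal
        if math.log(rng.random()) < log_post(prop) - log_post(lam):
            lam = prop                        # accept
        samples.append(lam)
    return samples[n_iter // 2:]              # discard burn-in

draws = metropolis_failure_rate([2.1, 0.7, 3.3, 1.2, 0.4, 2.8])
```

In practice the convergence and Monte Carlo error diagnostics of steps (6)-(7) would be applied to `draws` before making inferences.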
A Note on Structural Equation Modeling Estimates of Reliability
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Hardware and software reliability estimation using simulations
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Simulator for Software Project Reliability Estimation
Sanjana,
2011-07-01
There are several models for software development processes, each describing approaches to a variety of tasks or activities that take place during the process. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. IEEE defines reliability as "the ability of a system to perform its required function under stated conditions for a specified period of time." To most software project managers, reliability is equated to correctness, that is, the number of bugs found and fixed. The purpose is to develop a simulator for estimating the reliability of a software project using the PERT approach, keeping in view the criticality index of each task.
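The criticality-index idea can be sketched on a toy network; the tasks, durations, and use of a triangular distribution in place of the PERT beta are all assumptions for illustration:

```python
import random

# Tiny illustrative network: A, then B and C in parallel, then D.
# Each task has (optimistic, most likely, pessimistic) durations.
TASKS = {"A": (2, 4, 6), "B": (3, 5, 12), "C": (4, 6, 8), "D": (1, 2, 3)}

def criticality_index(n=20_000, seed=7):
    """Monte Carlo criticality index: the fraction of runs in which each
    parallel branch (through B or through C) is the critical one. A and D
    lie on every path, so only the B-vs-C comparison decides criticality.
    Durations are drawn from triangular distributions as a stand-in for
    the PERT beta distribution."""
    rng = random.Random(seed)
    counts = {"B": 0, "C": 0}
    for _ in range(n):
        d = {t: rng.triangular(lo, hi, mode)
             for t, (lo, mode, hi) in TASKS.items()}
        counts["B" if d["B"] >= d["C"] else "C"] += 1
    return {t: c / n for t, c in counts.items()}
```

A task whose criticality index is high dominates the schedule risk, so its reliability deserves the most attention.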
MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.
Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne
2014-01-01
When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
Reliability estimation in a multilevel confirmatory factor analysis framework.
Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J
2014-03-01
Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
PROCEDURE FOR ESTIMATING PERMANENT TOTAL ENCLOSURE COSTS
The paper discusses a procedure for estimating permanent total enclosure (PTE) costs. (NOTE: Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use PTEs, enclosures that meet ...
Reliabilities of genomic estimated breeding values in Danish Jersey
Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng;
2012-01-01
In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods ... of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for single-nucleotide polymorphism (SNP) markers using the Illumina 54K chip. A Bayesian method was used ... index pre-selection only. Averaged across traits, the estimates of reliability of DGVs ranged from 0.20 for validation on the most recent 3 years of bulls up to 0.42 for expected reliabilities. Reliabilities from the cross-validation were on average 0.24. For the individual traits, the reliability ...
A reliable procedure to predict salt precipitation in pure phases
P. S. O. Beltrão
2010-03-01
This article proposes a new procedure to compute solid-liquid equilibrium in electrolyte systems that may form pure solid phases at a given temperature, pressure, and global composition. The procedure combines three sub-procedures: a phase stability test, minimization of the Gibbs free energy with a stoichiometric formulation of the salt-forming reactions to compute phase splitting, and a phase elimination test. After the phase-splitting calculation for a system configuration with a certain number of phases, the phase stability test establishes whether including an additional phase will reduce the Gibbs free energy further. The criteria used for phase stability may lead, in some cases, to the premature inclusion of phases that should be absent from the final solution; if this happens, the phase elimination sub-procedure removes them. The procedure can be used with several excess Gibbs free energy models for liquid-phase behavior. It has proven to be reliable and fast, and the results are in good agreement with literature data.
Reliability Estimation for Double Containment Piping
L. Cadwallader; T. Pinna
2012-08-01
Double walled or double containment piping is considered for use in the ITER international project and other next-generation fusion device designs to provide an extra barrier for tritium gas and other radioactive materials. The extra barrier improves confinement of these materials and enhances safety of the facility. This paper describes some of the design challenges in designing double containment piping systems. There is also a brief review of a few operating experiences of double walled piping used with hazardous chemicals in different industries. This paper recommends approaches for the reliability analyst to use to quantify leakage from a double containment piping system in conceptual and more advanced designs. The paper also cites quantitative data that can be used to support such reliability analyses.
A standardized motor threshold estimation procedure for transcranial magnetic stimulation research
Schutter, D.J.L.G.; Honk, E.J. van
2006-01-01
Objectives: To introduce and test a simple standardized motor threshold (MT) estimation procedure for transcranial magnetic stimulation (TMS) research. Methods: A 5-step MT estimation procedure was introduced, and inter-estimator reliability was tested by comparing MTs as determined by an experienced ...
A Latent Class Approach to Estimating Test-Score Reliability
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas
2011-01-01
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
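Two of the single-administration coefficients named above, Cronbach's alpha and Guttman's lambda-2, can be computed directly from the item-score covariance matrix; the small person-by-item data set below is invented for illustration:

```python
def covariance_matrix(items):
    """Sample covariance matrix of item columns (rows = persons)."""
    n, k = len(items), len(items[0])
    means = [sum(row[j] for row in items) / n for j in range(k)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in items)
             / (n - 1) for j in range(k)] for i in range(k)]

def alpha_and_lambda2(items):
    """Cronbach's alpha and Guttman's lambda-2 from person-by-item data."""
    c = covariance_matrix(items)
    k = len(c)
    total_var = sum(sum(row) for row in c)   # variance of the sum score
    item_var = sum(c[i][i] for i in range(k))
    off = sum(c[i][j] for i in range(k) for j in range(k) if i != j)
    off_sq = sum(c[i][j] ** 2 for i in range(k) for j in range(k) if i != j)
    alpha = k / (k - 1) * (1 - item_var / total_var)
    lambda2 = (off + (k / (k - 1) * off_sq) ** 0.5) / total_var
    return alpha, lambda2

scores = [[1, 2, 1], [2, 3, 2], [3, 4, 2], [4, 5, 3], [5, 6, 3]]
alpha, lam2 = alpha_and_lambda2(scores)
```

By construction lambda-2 is never smaller than alpha, which is why it is often preferred as a lower bound to reliability.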
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's a, Feldt-Raju, stratified a, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-01-01
In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with correspond
Singularity of Some Software Reliability Models and Parameter Estimation Method
Anonymous
2000-01-01
According to the principle that "the failure data is the basis of software reliability analysis," we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out a conclusion from the fitting results of the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of experimental models in the SRES.
Estimation procedures affect the center of pressure frequency analysis.
Vieira, T M M; Oliveira, L F; Nadal, J
2009-07-01
Even though frequency analysis of body sway is widely applied in clinical studies, the lack of standardized procedures concerning power spectrum estimation may provide unreliable descriptors. Stabilometric tests were applied to 35 subjects (20-51 years, 54-95 kg, 1.6-1.9 m) and the power spectral density function was estimated for the anterior-posterior center of pressure time series. The median frequency was compared between power spectra estimated according to signal partitioning, sampling rate, test duration, and detrending methods. The median frequency reliability for different test durations was assessed using the intraclass correlation coefficient. When increasing number of segments, shortening test duration or applying linear detrending, the median frequency values increased significantly up to 137%. Even the shortest test duration provided reliable estimates as observed with the intraclass coefficient (0.74-0.89 confidence interval for a single 20-s test). Clinical assessment of balance may benefit from a standardized protocol for center of pressure spectral analysis that provides an adequate relationship between resolution and variance. An algorithm to estimate center of pressure power density spectrum is also proposed.
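The median frequency discussed above is the point where cumulative spectral power reaches half the total. The sketch below reads it off a plain periodogram; this is a generic construction for illustration, not the exact estimator settings compared in the study:

```python
import numpy as np

def median_frequency(signal, fs):
    """Frequency below which half of the total spectral power lies."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                    # remove DC offset
    power = np.abs(np.fft.rfft(signal)) ** 2           # one-sided periodogram
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(power)
    idx = np.searchsorted(cumulative, 0.5 * cumulative[-1])
    return freqs[idx]
```

Applied to a pure 1 Hz sine sampled at 100 Hz, the function returns 1.0 Hz, since all power sits in a single bin.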
Reliability-based optimum inspection and maintenance procedures. [for engines
Nanagud, S.; Uppaluri, B.
1975-01-01
The development of reliability-based optimum inspection and maintenance schedules for engines needs an understanding of the fatigue behavior of the engines. Critical areas of the engine structure prone to fatigue damage are usually identified beforehand or after the fleet has been put into operation. In these areas, fatigue cracks initiate after several flight hours, and these cracks grow in length until failure takes place when these cracks attain the critical lengths. Crack initiation time and its growth rate are considered to be random variables. Usually, the inspection (fatigue) or test data from similar engines are used as prior distributions. The existing state-of-the-art is to ignore the different lengths of cracks observed at various inspections and to consider only the fact that a crack existed (or did not exist) at the time of inspection. In this paper, a procedure has been developed to obtain the probability of finding a crack of a given size at a certain time if the probability distributions for crack initiation and rates of growth are known. Application of the developed stochastic models to devise optimum procedures for inspection and maintenance is also discussed.
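The probability of a crack of a given size at a given time can be illustrated with a crude Monte-Carlo sketch. The lognormal initiation-time and growth-rate distributions (and their parameters) below are illustrative assumptions, not the paper's fitted models:

```python
import numpy as np

def prob_crack_exceeds(a, t, n=100_000, seed=0):
    """Monte-Carlo estimate of P(crack length > a at time t), assuming a
    lognormal initiation time T0 and a lognormal linear growth rate R,
    so that crack length is R * (t - T0) once initiated."""
    rng = np.random.default_rng(seed)
    t0 = rng.lognormal(mean=np.log(5000.0), sigma=0.4, size=n)   # flight hours
    rate = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)   # length / hour
    length = np.where(t < t0, 0.0, rate * (t - t0))              # 0 before initiation
    return float((length > a).mean())
```

Because each simulated crack is monotone in time, the estimated probability is non-decreasing in t, which is a useful check on the implementation.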
40 CFR 98.385 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. You must follow the procedures for estimating missing data in § 98... estimating missing data for petroleum products in § 98.395 also applies to coal-to-liquid products....
SA BASED SOFTWARE DEPLOYMENT RELIABILITY ESTIMATION CONSIDERING COMPONENT DEPENDENCE
Su Xihong; Liu Hongwei; Wu Zhibo; Yang Xiaozong; Zuo Decheng
2011-01-01
Reliability is one of the most critical properties of a software system. System deployment architecture is the allocation of system software components on host nodes. Software Architecture (SA) based software deployment models help to analyze the reliability of different deployments. Though many approaches for architecture-based reliability estimation exist, little work has incorporated the influence of system deployment and hardware resources into reliability estimation. There are many factors influencing system deployment. By translating the multi-dimension factors into a degree matrix of component dependence, we provide the definition of component dependence and propose a method of calculating the system reliability of deployments. Additionally, the parameters that influence the optimal deployment may change during system execution. The existing software deployment architecture may be ill-suited for the given environment, and the system needs to be redeployed to improve reliability. An approximate algorithm, A*_D, to increase system reliability is presented. When the number of components and host nodes is relatively large, experimental results show that this algorithm can obtain better deployments than stochastic and greedy algorithms.
A simple procedure for estimating soil porosity
Emmet-Booth, Jeremy; Forristal, Dermot; Fenton, Owen; Holden, Nick
2016-04-01
Soil degradation from mismanagement is of international concern. Simple, accessible tools for rapidly assessing impacts of soil management are required. Soil structure is a key component of soil quality and porosity is a useful indicator of structure. We outline a version of a procedure described by Piwowarczyk et al. (2011) used to estimate porosity of samples taken during a soil quality survey of 38 sites across Ireland as part of the Government-funded SQUARE (Soil Quality Assessment Research) project. This required intact core (r = 2.5 cm, H = 5 cm) samples taken at 5-10 cm and 10-20 cm depth, to be covered with muslin cloth at one end and secured with a jubilee clip. Samples were saturated in sealable water tanks for ≈ 64 hours, then allowed to drain by gravity for 24 hours, at which point Field Capacity (F.C.) was assumed to have been reached, followed by oven drying with weight determined at each stage. This allowed the calculation of bulk density and the estimation of water content at saturation and following gravitational drainage, thus total and functional porosity. The assumption that F.C. was reached following 24 hours of gravitational drainage was based on the Soil Moisture Deficit model used in Ireland to predict when soils are potentially vulnerable to structural damage and used nationally as a management tool. Preliminary results indicate moderately strong, negative correlations between estimated total porosity at 5-10 cm and 10-20 cm depth (rs = -0.7, P soil quality scores of the Visual Evaluation of Soil Structure (VESS) method which was conducted at each survey site. Estimated functional porosity at 5-10 cm depth was found to moderately, negatively correlate with VESS scores (rs = -0.5, P indicating porosity of a large quantity of samples taken at numerous sites or if done periodically, temporal changes in porosity at a field scale, indicating the impacts of soil management. Reference Piwowarczyk, A., Giuliani, G. & Holden, N.M. 2011. Can soil
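The porosity arithmetic in the procedure above reduces to a few lines: bulk density from the oven-dry weight, total porosity from a particle density, and functional porosity as the water drained between saturation and field capacity. The sketch below assumes a quartz particle density of 2.65 g/cm³, a common default rather than a value stated in the abstract:

```python
def core_porosity(sat_g, fc_g, dry_g, volume_cm3, particle_density=2.65):
    """Total and functional porosity of an intact soil core from its
    saturated, field-capacity, and oven-dry weights (grams)."""
    bulk_density = dry_g / volume_cm3                  # g/cm^3
    total_porosity = 1.0 - bulk_density / particle_density
    theta_sat = (sat_g - dry_g) / volume_cm3           # vol. water at saturation
    theta_fc = (fc_g - dry_g) / volume_cm3             # vol. water at field capacity
    functional_porosity = theta_sat - theta_fc         # pore space drained by gravity
    return total_porosity, functional_porosity
```

For example, a 100 cm³ core weighing 175 g saturated, 165 g at field capacity, and 130 g dry gives a total porosity of about 0.51 and a functional porosity of 0.10.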
Maximized Reliability Estimates for Some Research Scales of the MMPI.
Wagner, Edwin E.; And Others
1990-01-01
This study, using data for 200 psychiatric/chemical dependency patients, attempted to justify subscales of the Minnesota Multiphasic Personality Inventory (MMPI). Distributions of all possible split-half correlations for certain research scales of the MMPI revealed negative skewness resulting in spuriously lowered reliability estimates. The scales…
Estimating the Reliability of a Test Containing Multiple Item Formats.
Qualls, Audrey L.
1995-01-01
Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
Sequential Bayesian technique: An alternative approach for software reliability estimation
S Chatterjee; S S Alam; R B Misra
2009-04-01
This paper proposes a sequential Bayesian approach, similar to the Kalman filter, for estimating reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time, as new failure data become available. The usefulness of the method is demonstrated with some real-life data.
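The flavor of such a sequential update can be sketched as a generic scalar predict/update recursion; this is a textbook Kalman step, not the authors' exact filter, and the process-noise term is what lets the tracked reliability parameter vary over time:

```python
def kalman_step(mean, var, obs, obs_var, process_var=0.01):
    """One predict-then-update Kalman recursion for a scalar parameter.
    process_var models drift of the parameter between observations."""
    var = var + process_var              # predict: parameter may have drifted
    gain = var / (var + obs_var)         # update: Kalman gain
    new_mean = mean + gain * (obs - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var
```

Each new failure observation shifts the estimate toward the data in proportion to the gain while shrinking the posterior variance.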
Parameter estimation and reliable fault detection of electric motors
Dusan PROGOVAC; Le Yi WANG; George YIN
2014-01-01
Accurate model identification and fault detection are necessary for reliable motor control. Motor-characterizing parameters experience substantial changes due to aging, motor operating conditions, and faults. Consequently, motor parameters must be estimated accurately and reliably during operation. Based on enhanced model structures of electric motors that accommodate both normal and faulty modes, this paper introduces bias-corrected least-squares (LS) estimation algorithms that incorporate functions for correcting estimation bias, forgetting factors for capturing sudden faults, and recursive structures for efficient real-time implementation. Permanent magnet motors are used as a benchmark type for concrete algorithm development and evaluation. Algorithms are presented, their properties are established, and their accuracy and robustness are evaluated by simulation case studies under both normal operations and inter-turn winding faults. Implementation issues from different motor control schemes are also discussed.
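A recursive least-squares estimator with a forgetting factor, of the general kind described, can be sketched as follows. This is the standard textbook RLS form; the paper's bias-correction functions and motor-specific model structures are omitted:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with a forgetting factor, for tracking
    slowly varying (or suddenly faulted) parameters in y = phi . theta."""
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = np.eye(n_params) * 1e3       # large initial covariance
        self.lam = lam                        # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        err = y - phi @ self.theta                            # prediction error
        gain = self.P @ phi / (self.lam + phi @ self.P @ phi) # RLS gain vector
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, phi) @ self.P) / self.lam
        return self.theta
```

A smaller forgetting factor discounts old data faster, which is what allows the estimator to catch a sudden fault at the cost of noisier estimates.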
Objectivity, Reliability, and Validity of Search Engine Count Estimates
Dietmar Janetzko
2008-01-01
Count estimates ("hits") provided by Web search engines have received much attention as a yardstick to measure a variety of phenomena of interest as diverse as, e.g., language statistics, popularity of authors, or similarity between words. Common to these activities is the intention to use Web search engines not only for search but for ad hoc measurement. Using search engine count estimates (SECEs) in this way means that a phenomenon of interest, e.g., the popularity of an author, is conceived of as a measurand, and SECEs are taken to be its quantitative measures. However, the data quality of SECEs has not yet been studied systematically, and concerns have been raised against the use of this kind of data. This article examines the data quality of SECEs focusing on classical goodness criteria, i.e., objectivity, reliability, and validity. The results of a series of studies indicate that, with the exception of Boolean queries that use disjunction or negation, objectivity as well as test-retest reliability and parallel-test reliability of SECEs is good for most types of browsers and search engines examined. Estimation of validity required model development (all-subsets regression), revealing satisfying results by using an explorative approach to feature selection. The findings are discussed in the light of previous objections, and perspectives for using Web search count estimates are delineated.
40 CFR 98.425 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. (a) Whenever the quality assurance procedures in § 98.424(a) of this subpart cannot... following missing data procedures shall be followed: (1) A quarterly CO2 mass flow or volumetric flow...
Iterative procedure for camera parameters estimation using extrinsic matrix decomposition
Goshin, Yegor V.; Fursov, Vladimir A.
2016-03-01
This paper addresses the problem of 3D scene reconstruction in cases when the extrinsic parameters (rotation and translation) of the camera are unknown. This problem is both important and urgent because the accuracy of the camera parameters significantly influences the resulting 3D model. A common approach is to determine the fundamental matrix from corresponding points on two views of a scene and then to use singular value decomposition for camera projection matrix estimation. However, this common approach is very sensitive to fundamental matrix errors. In this paper we propose a novel approach in which camera parameters are determined directly from the equations of the projective transformation by using corresponding points on the views. The proposed decomposition allows us to use an iterative procedure for determining the parameters of the camera. This procedure is implemented in two steps: the translation determination and the rotation determination. The experimental results of the camera parameters estimation and 3D scene reconstruction demonstrate the reliability of the proposed approach.
Estimating the reliability of eyewitness identifications from police lineups.
Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William
2016-01-12
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.
Probabilistic confidence for decisions based on uncertain reliability estimates
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
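If the estimation error in a calculated failure probability is modeled as lognormal (an assumption made here for illustration; the paper's formulation may differ), the probabilistic confidence that the true probability stays below a target reduces to a single normal-CDF evaluation:

```python
import math

def prob_confidence(pf_estimate, log_sd, pf_target):
    """P(true pf <= pf_target) when ln(true pf) is Normal with mean
    ln(pf_estimate) and standard deviation log_sd (assumed error model)."""
    z = (math.log(pf_target) - math.log(pf_estimate)) / log_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
```

When the target equals the point estimate the confidence is exactly 0.5, which illustrates the paper's concern: an unbiased estimate alone gives only even odds that the true failure probability is acceptably low.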
40 CFR 98.315 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the petroleum coke input procedure in § 98.313(b), a complete record of all... substitute data value for the missing parameter shall be used in the calculations as specified in...
40 CFR 98.195 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the procedure in § 98.193(b)(2), a complete record of all measured parameters... process data or data used for accounting purposes. (b) For missing values related to the CaO and...
40 CFR 98.35 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. Whenever a quality-assured value of a required parameter is... substitute data value for the missing parameter shall be used in the calculations. (a) For all units...
40 CFR 98.45 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data. 98.45 Section 98.45 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... estimating missing data. Follow the applicable missing data substitution procedures in 40 CFR part 75 for...
40 CFR 98.245 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For missing feedstock flow rates, product flow rates, and carbon contents, use the same procedures as for missing flow rates and carbon contents for fuels as specified in § 98.35....
40 CFR 98.405 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... § 98.405 Procedures for estimating missing data. (a) Whenever a quality-assured value of the quantity... meter malfunctions), a substitute data value for the missing quantity measurement must be used in...
40 CFR 98.155 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. (a) A complete record of all measured parameters used in the GHG...), a substitute data value for the missing parameter shall be used in the calculations, according...
40 CFR 98.415 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. (a) A complete record of all measured parameters used in the GHG... unavailable (e.g., if a meter malfunctions), a substitute data value for the missing parameter shall be...
40 CFR 98.285 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the petroleum coke input procedure in § 98.283(b), a complete record of all...) For each missing value of the monthly carbon content of petroleum coke, the substitute data...
40 CFR 98.335 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b)...
Availability and Reliability of FSO Links Estimated from Visibility
M. Tatarko
2012-06-01
This paper focuses on estimating the availability and reliability of FSO (Free Space Optics) systems. FSO allows optical transmission between two fixed points and can be regarded as a last-mile communication system: an optical communication system in which the propagation medium is air. This last-mile solution does not require expensive optical fiber, and establishing a connection is very simple. However, there are drawbacks that degrade the quality of service and the availability of the link. A number of atmospheric phenomena, such as scattering, absorption, and turbulence, cause large variations in received optical power and laser beam attenuation. The influence of absorption and turbulence can be significantly reduced by an appropriate design of the FSO link, but visibility has the main influence on the quality of the optical transmission channel. Thus, in typical continental areas where rain, snow, or fog occur, it is important to know its values. This article describes a device for measuring weather conditions and presents an estimation of the availability and reliability of FSO links in Slovakia.
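A common way to turn a visibility reading into a specific attenuation for FSO link budgets is the empirical Kruse model. The sketch below uses that model as an assumption, since the abstract does not state the authors' exact formula:

```python
def fso_attenuation_db_per_km(visibility_km, wavelength_nm=850.0):
    """Specific attenuation (dB/km) from visibility via the Kruse model.
    The size-distribution exponent q follows the usual Kruse breakpoints;
    this parameterization is an assumption, not taken from the paper."""
    v = visibility_km
    if v > 50.0:
        q = 1.6                           # very clear air
    elif v > 6.0:
        q = 1.3                           # clear / haze
    else:
        q = 0.585 * v ** (1.0 / 3.0)      # low visibility (fog region)
    return (3.91 / v) * (wavelength_nm / 550.0) ** (-q)
```

As expected, attenuation falls steeply as visibility improves, which is why visibility statistics dominate availability estimates for a given site.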
Procedure for conducting a human-reliability analysis for nuclear power plants. Final report
Bell, B.J.; Swain, A.D.
1983-05-01
This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.
Cuypers, Koen; Thijs, Herbert; Meesen, Raf L J
2014-01-01
The goal of this study was to optimize the transcranial magnetic stimulation (TMS) protocol for acquiring a reliable estimate of corticospinal excitability (CSE) using single-pulse TMS. Moreover, the minimal number of stimuli required to obtain a reliable estimate of CSE was investigated. In addition, the effect of two frequently used stimulation intensities [110% relative to the resting motor threshold (rMT) and 120% rMT] and gender was evaluated. Thirty-six healthy young subjects (18 males and 18 females) participated in a double-blind crossover procedure. They received 2 blocks of 40 consecutive TMS stimuli at either 110% rMT or 120% rMT in a randomized order. Based upon our data, we advise that at least 30 consecutive stimuli are required to obtain the most reliable estimate for CSE. Stimulation intensity and gender had no significant influence on CSE estimation. In addition, our results revealed that for subjects with a higher rMT, fewer consecutive stimuli were required to reach a stable estimate of CSE. The current findings can be used to optimize the design of similar TMS experiments.
A Numerical Empirical Bayes Procedure for Finding an Interval Estimate.
Lord, Frederic M.
A numerical procedure is outlined for obtaining an interval estimate of a parameter in an empirical Bayes estimation problem. The case where each observed value x has a binomial distribution, conditional on a parameter zeta, is the only case considered. For each x, the parameter estimated is the expected value of zeta given x. The main purpose is…
Reliability of fish size estimates obtained from multibeam imaging sonar
Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.
2013-01-01
Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
Estimation of Small s-t Reliabilities in Acyclic Networks
Laumanns, Marco
2007-01-01
In the classical s-t network reliability problem a fixed network G is given including two designated vertices s and t (called terminals). The edges are subject to independent random failure, and the task is to compute the probability that s and t are connected in the resulting network, which is known to be #P-complete. In this paper we are interested in approximating the s-t reliability in case of a directed acyclic original network G. We introduce and analyze a specialized version of the Monte-Carlo algorithm given by Karp and Luby. For the case of uniform edge failure probabilities, we give a worst-case bound on the number of samples that have to be drawn to obtain an epsilon-delta approximation, being sharper than the original upper bound. We also derive a variance reduction of the estimator which reduces the expected number of iterations to perform to achieve the desired accuracy when applied in conjunction with different stopping rules. Initial computational results on two types of random networks (direc...
Estimating uncertainty and reliability of social network data using Bayesian inference.
Farine, Damien R; Strandburg-Peshkin, Ariana
2015-09-01
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
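With a conjugate Beta prior on each pairwise association rate, the Bayesian update described above has a closed form. A minimal sketch (the uniform Beta(1,1) prior is an illustrative assumption, not necessarily the authors' choice):

```python
def posterior_edge_weight(x, n, alpha=1.0, beta=1.0):
    """Beta-Binomial posterior for an association rate: x joint sightings
    out of n sampling periods, with a Beta(alpha, beta) prior.
    Returns the posterior mean and variance of the edge weight."""
    a = alpha + x
    b = beta + n - x
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))
    return mean, var
```

The posterior variance directly expresses the point made in the abstract: the same observed rate carries far more uncertainty when sampling is sparse (3 of 10) than when it is dense (30 of 100).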
Comparisons of Estimation Procedures for Nonlinear Multilevel Models
Ali Reza Fotouhi
2003-05-01
We introduce general multilevel models and discuss the estimation procedures that may be used to fit them. We apply the proposed procedures to three-level binary data generated in a simulation study. We compare the procedures by two criteria: bias and efficiency. We find that the estimates of the fixed effects and variance components are substantially and significantly biased when using Longford's Approximation and Goldstein's Generalized Least Squares approaches, as implemented in the software packages VARCL and ML3. These estimates are not significantly biased and are very close to the true values when we use Markov Chain Monte Carlo (MCMC) with Gibbs sampling or the Nonparametric Maximum Likelihood (NPML) approach. The Gaussian Quadrature (GQ) approach, even with a small number of mass points, yields consistent estimates but is computationally problematic. We conclude that the MCMC and NPML approaches are the recommended procedures for fitting multilevel models.
Procedures for estimation of genetic persistency indices for milk ...
Procedures for estimation of genetic persistency indices for milk production for the ... as included in the National Dairy Genetic Evaluations of South Africa, of the ... records to calculate 60-day and 280-day yields for each cow and lactation.
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-08-01
In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and Tmax as well as the correlation coefficients are obtained by using the FORM, and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of Tmax being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
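The core of FORM is the Hasofer-Lind/Rackwitz-Fiessler iteration: transform the random variables to standard normal space, find the most probable failure point on the limit state surface, and report the reliability index beta, with failure probability Phi(-beta). The sketch below, assuming independent normal variables and a hypothetical linear resistance-minus-load limit state (for which FORM is exact), is generic and not tied to the pultrusion model of the paper:

```python
import math

def form_beta(g, mu, sigma, tol=1e-8, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler FORM iteration for independent
    normal variables. Returns the reliability index beta; the failure
    probability is Phi(-beta)."""
    h = 1e-6
    u = [0.0] * len(mu)  # start at the mean in standard normal space
    beta = 0.0
    for _ in range(max_iter):
        x = [m + s * ui for m, s, ui in zip(mu, sigma, u)]
        gx = g(x)
        # numerical gradient of g with respect to the standard variables u
        grad = []
        for i in range(len(u)):
            xp = list(x)
            xp[i] += sigma[i] * h
            grad.append((g(xp) - gx) / h)
        norm2 = sum(gi * gi for gi in grad)
        # HL-RF update: project u onto the linearized limit state
        lam = (sum(gi * ui for gi, ui in zip(grad, u)) - gx) / norm2
        u_new = [lam * gi for gi in grad]
        beta_new = math.sqrt(sum(ui * ui for ui in u_new))
        if abs(beta_new - beta) < tol:
            return beta_new
        u, beta = u_new, beta_new
    return beta

def phi_cdf(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

# Hypothetical limit state g = R - S (resistance minus load); for a
# linear g, beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) exactly.
beta = form_beta(lambda x: x[0] - x[1], mu=[10.0, 6.0], sigma=[1.0, 1.0])
pf = phi_cdf(-beta)
print(beta, pf)
```

For nonlinear limit states such as the CDOCE and Tmax criteria in the paper, FORM linearizes at the design point, which is why the authors cross-check its distribution estimates against Monte-Carlo simulation.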
Is biomass a reliable estimate of plant fitness?
Younginger, Brett S.; Sirová, Dagmara; Cruzan, Mitchell B.; Ballhorn, Daniel J.
2017-01-01
The measurement of fitness is critical to biological research. Although the determination of fitness for some organisms may be relatively straightforward under controlled conditions, it is often a difficult or nearly impossible task in nature. Plants are no exception. The potential for long-distance pollen dispersal, likelihood of multiple reproductive events per inflorescence, varying degrees of reproductive growth in perennials, and asexual reproduction all confound accurate fitness measurements. For these reasons, biomass is frequently used as a proxy for plant fitness. However, the suitability of indirect fitness measurements such as plant size is rarely evaluated. This review outlines the important associations between plant performance, fecundity, and fitness. We make a case for the reliability of biomass as an estimate of fitness when comparing conspecifics of the same age class. We reviewed 170 studies on plant fitness and discuss the metrics commonly employed for fitness estimations. We find that biomass or growth rate are frequently used and often positively associated with fecundity, which in turn suggests greater overall fitness. Our results support the utility of biomass as an appropriate surrogate for fitness under many circumstances, and suggest that additional fitness measures should be reported along with biomass or growth rate whenever possible. PMID:28224055
Kramp, Kelvin H.; van Det, Marc J.; Veeger, Nic J. G. M.; Pierie, Jean-Pierre E. N.
2016-01-01
Background There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs).
Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)
1995-10-01
This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations accurately reflects operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners, who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on average.
TWO-PROCEDURE OF MODEL RELIABILITY-BASED OPTIMIZATION FOR WATER DISTRIBUTION SYSTEMS
[No author listed]
2000-01-01
Recently, considerable emphasis has been placed on reliability-based optimization models for water distribution systems. However, considerable computational effort is needed to determine the reliability-based optimal design of large networks, or even of mid-sized networks. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. This methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes and is carried out in GRG2. Because the reliability constraints are removed from the optimization problem, a large number of simulations need not be conducted, so the computing time is greatly decreased. The second procedure is a linear optimal search, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The result is a set of commercial pipe diameters for which the constraints on pressure heads and reliability at the nodes are satisfied. The computational burden is therefore significantly decreased, making reliability-based optimization more practical.
40 CFR 98.275 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... for estimating missing data. A complete record of all measured parameters used in the GHG emissions... substitute data value for the missing parameter shall be used in the calculations, according to...
40 CFR 98.255 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... during unit operation or if a required fuel sample is not taken), a substitute data value for the...
40 CFR 98.265 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as...
40 CFR 98.215 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... for estimating missing data. (a) A complete record of all measured parameters used in the GHG... unavailable, a substitute data value for the missing parameter shall be used in the calculations as...
40 CFR 98.165 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations...., if a meter malfunctions during unit operation), a substitute data value for the missing...
40 CFR 98.365 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. (a) A complete record of all measured parameters used in the GHG emissions... substitute data value for the missing parameter shall be used in the calculations, according to...
40 CFR 98.175 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as...
40 CFR 98.295 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... unavailable, a substitute data value for the missing parameter shall be used in the calculations as...
40 CFR 98.115 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as...
40 CFR 98.345 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... for estimating missing data. A complete record of all measured parameters used in the GHG emissions... substitute data value for the missing parameter shall be used in the calculations, according to...
40 CFR 98.75 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... followed (e.g., if a meter malfunctions during unit operation), a substitute data value for the...
40 CFR 98.55 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in...
40 CFR 98.65 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations, according to the...
40 CFR 98.225 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in...
Reliability estimation for single-unit ceramic crown restorations.
Lekesiz, H
2014-09-01
The objective of this study was to evaluate the potential of a survival prediction method for the assessment of ceramic dental restorations. For this purpose, fast-fracture and fatigue reliabilities for 2 bilayer (metal ceramic alloy core veneered with fluorapatite leucite glass-ceramic, d.Sign/d.Sign-67, by Ivoclar; glass-infiltrated alumina core veneered with feldspathic porcelain, VM7/In-Ceram Alumina, by Vita) and 3 monolithic (leucite-reinforced glass-ceramic, Empress, and ProCAD, by Ivoclar; lithium-disilicate glass-ceramic, Empress 2, by Ivoclar) single posterior crown restorations were predicted, and fatigue predictions were compared with the long-term clinical data presented in the literature. Both perfectly bonded and completely debonded cases were analyzed for evaluation of the influence of the adhesive/restoration bonding quality on estimations. Material constants and stress distributions required for predictions were calculated from biaxial tests and finite element analysis, respectively. Based on the predictions, In-Ceram Alumina presents the best fast-fracture resistance, and ProCAD presents a comparable resistance for perfect bonding; however, ProCAD shows a significant reduction of resistance in case of complete debonding. Nevertheless, it is still better than Empress and comparable with Empress 2. In-Ceram Alumina and d.Sign have the highest long-term reliability, with almost 100% survivability even after 10 years. When compared with clinical failure rates reported in the literature, predictions show a promising match with clinical data, and this indicates the soundness of the settings used in the proposed predictions. © International & American Associations for Dental Research.
Simplified procedure for the estimation of Rankine power cycle efficiency
Patwardhan, V.R.; Devotta, S.; Patwardhan, V.S. (National Chemical Lab., Poona (India))
1989-01-01
A simplified procedure for estimating the Rankine power cycle efficiency η_R is presented. This procedure does not need any detailed thermodynamic data, requiring only the liquid specific heat and the latent heat of vaporization at the boiler temperature. The procedure is tested on eight potential Rankine power cycle working fluids for which exact η_R values have been reported based on detailed thermodynamic data. A fairly wide range of condensing and boiling temperatures is covered. The results indicate that the present procedure can predict η_R values to within ±1%.
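The abstract does not give the authors' formula, but a standard ideal-cycle approximation that uses exactly these two inputs follows from an entropy balance: heat input q_in = c(Tb - Tc) + L (heat the saturated liquid, then vaporize), entropy added Δs = c ln(Tb/Tc) + L/Tb, heat rejected Tc·Δs, so η ≈ 1 - Tc·Δs/q_in, neglecting pump work and superheat. The sketch below uses rough, illustrative property values for water and should be read as an assumption, not the paper's procedure:

```python
import math

def rankine_eta(c_liq, latent, t_boil, t_cond):
    """Approximate ideal Rankine cycle efficiency from the liquid specific
    heat c_liq (kJ/kg.K) and the latent heat of vaporization (kJ/kg) at
    the boiler. Pump work and superheat are neglected; temperatures in K."""
    q_in = c_liq * (t_boil - t_cond) + latent          # heat added
    ds = c_liq * math.log(t_boil / t_cond) + latent / t_boil  # entropy gain
    return 1.0 - t_cond * ds / q_in                    # 1 - q_out / q_in

# Water, boiler at 453 K (180 C), condenser at 313 K (40 C); c ~ 4.4
# kJ/kg.K and latent heat ~ 2015 kJ/kg are rough saturated-liquid values.
eta = rankine_eta(4.4, 2015.0, 453.0, 313.0)
print(f"eta_R ~ {eta:.3f}")
```

The estimate necessarily falls below the Carnot limit 1 - Tc/Tb for the same temperature pair, which provides a quick sanity check on any such simplified procedure.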
Reliability Estimations of Control Systems Effected by Several Interference Sources
Deng Bei-xing; Jiang Ming-hu; Li Xing
2003-01-01
In order to establish the sufficient and necessary condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is very important to expand the set of interference sources possessing this property. In this paper, models of two types of interference source are introduced: interference sources possessing real input vectors, and constant reliable interference sources. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that arbitrarily reliable systems can, in fact, be constructed from elements affected by the above interference sources.
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from the evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
Generating human reliability estimates using expert judgment. Volume 2. Appendices. [PWR; BWR
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results.
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are generally infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the (57)Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s(-1) and 0.04-0.05 mm s(-1), respectively, the latter being close to the average experimental uncertainty of 0.02 mm s(-1). Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r(2), or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
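The pairs-bootstrap idea behind this calibration analysis can be sketched in a few lines: resample the reference set with replacement, refit the linear property model on each replica, and take the spread of the replicas' predictions at a query point as the prediction uncertainty. The data below are synthetic and purely illustrative:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bootstrap_prediction(xs, ys, x_query, n_boot=2000, rng=None):
    """Pairs bootstrap of a linear property model: refit on resampled
    reference data and collect predictions at x_query. The spread of
    the bootstrap predictions estimates the prediction uncertainty."""
    rng = rng or random.Random(1)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x_query)
    mean = sum(preds) / n_boot
    var = sum((p - mean) ** 2 for p in preds) / (n_boot - 1)
    return mean, var ** 0.5

# Synthetic calibration set: y = 2x - 1 plus small noise (illustrative).
rng = random.Random(7)
xs = [i / 10 for i in range(20)]
ys = [2 * x - 1 + rng.gauss(0, 0.05) for x in xs]
mean, sd = bootstrap_prediction(xs, ys, x_query=1.0)
print(mean, sd)
```

Because each replica drops some reference points and repeats others, the bootstrap directly exposes how sensitive the fitted parameters and predictions are to the composition of the reference set, which is the paper's central diagnostic.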
An easy and reliable automated method to estimate oxidative stress in the clinical setting.
Vassalle, Cristina
2008-01-01
During the last few years, reliable and simple tests have been proposed to estimate oxidative stress in vivo. Many of them can be easily adapted to automated analyzers, permitting the simultaneous processing of a large number of samples in greatly reduced time, avoiding manual sample and reagent handling, and reducing sources of variability. In this chapter, protocols for the estimation of reactive oxygen metabolites and antioxidant capacity (the d-ROMs and OXY-Adsorbent tests, respectively; Diacron, Grosseto, Italy) using the clinical chemistry analyzer SYNCHRON CX 9 PRO (Beckman Coulter, Brea, CA, USA) are described as an example of such an automated procedure that can be applied in the clinical setting. Furthermore, a calculation to compute a global oxidative stress index (Oxidative-INDEX), reflecting both the oxidative and antioxidant counterparts and therefore a potentially more powerful parameter, is also described.
A procedure to estimate proximate analysis of mixed organic wastes.
Zaher, U; Buffiere, P; Steyer, J P; Chen, S
2009-04-01
In waste materials, proximate analysis, which measures the total concentrations of carbohydrates, proteins, and lipids in solid wastes, is challenging as a result of the heterogeneous and solid nature of the waste. This paper presents a new procedure that was developed to estimate this complex chemical composition from conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on mass balances of the macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]), that is, elemental continuity, in addition to the balances of COD and charge intensity that are applied in the mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial to the study of solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrate, protein, lipid, and inert concentrations were highly correlated with the proximate analysis; the correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
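In its simplest form, such a mass-balance procedure amounts to solving a small linear system: each measured bulk parameter is a weighted sum of the unknown macronutrient fractions, with weights given by per-gram composition coefficients. The sketch below uses typical literature values for those coefficients and a hypothetical measurement vector; the real procedure balances more elements (full CHNOP plus charge) than this toy system:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with
    partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Columns: carbohydrate, protein, lipid; rows: COD (g O2/g), organic
# carbon (g C/g), organic nitrogen (g N/g). The coefficients are typical
# literature values, used here purely for illustration.
A = [[1.07, 1.50, 2.90],
     [0.44, 0.53, 0.77],
     [0.00, 0.16, 0.00]]

# Hypothetical measured COD, TOC, and organic N (g per g dry matter).
b = [1.40, 0.50, 0.03]
carb, prot, lipid = solve3(A, b)
print(carb, prot, lipid)
```

With these illustrative numbers the solve recovers plausible fractions summing to just under 1 (the remainder being inerts), showing how a handful of bulk measurements can pin down a composition that is tedious to measure directly.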
A particle swarm model for estimating reliability and scheduling system maintenance
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
Early Stage Software Reliability Estimation with Stochastic Reward Nets
ZHAO Jing; LIU Hong-wei; CUI Gang; YANG Xiao-zong
2005-01-01
This paper presents software reliability modeling issues at the early stage of software development for a fault-tolerant software management system. Based on Stochastic Reward Nets, an effective hierarchical model of a fault-tolerant software management system is put forward, and an approach based on transient performance analysis of the system is adopted. A quantitative approach for software reliability analysis is given. The results show its usefulness for early-stage software reliability modeling when failure data are not available.
40 CFR 98.85 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations. The owner or operator...
40 CFR 98.145 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations is... in § 98.144 cannot be followed and data is missing, you must use the most appropriate of the...
40 CFR 98.185 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations as specified in...
Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne
2008-03-01
The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
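Cohen's kappa, one of the three test-retest statistics used above, corrects observed agreement for the agreement expected by chance. A minimal computation, with hypothetical rating vectors standing in for the study's actual advertisement codes:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two categorical rating vectors, e.g. the same
    rater's guideline-violation codes at time 1 and time 2."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    pe = sum(c1[c] / n * c2[c] / n for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical violation codes (1 = guideline violated) for ten ads,
# rated twice one week apart.
time1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
time2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(time1, time2), 3))
```

Here 8 of 10 ratings agree (po = 0.8) against a chance level of 0.5, giving kappa = 0.6, conventionally read as "good" agreement, which is the kind of threshold the study applies item by item.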
Improving Sample Estimate Reliability and Validity with Linked Ego Networks
Lu, Xin
2012-01-01
Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we have shown that the precision of estimates of the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows a superior advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...
Spanning Trees and bootstrap reliability estimation in correlation based networks
Tumminello, M; Lillo, F; Micciché, S; Mantegna, R N
2006-01-01
We introduce a new technique to associate a spanning tree to the average linkage cluster analysis. We term this tree the Average Linkage Minimum Spanning Tree. We also introduce a technique to associate a value of reliability to the links of correlation-based graphs by using bootstrap replicas of the data. Both techniques are applied to the portfolio of the 300 most capitalized stocks traded at the New York Stock Exchange during the time period 2001-2003. We show that the Average Linkage Minimum Spanning Tree recognizes economic sectors and sub-sectors as communities in the network slightly better than the Minimum Spanning Tree does. We also show that the average reliability of links in the Minimum Spanning Tree is slightly greater than the average reliability of links in the Average Linkage Minimum Spanning Tree.
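The bootstrap link-reliability idea can be sketched compactly: resample the time points of the return series with replacement, rebuild the correlation-based minimum spanning tree on each replica, and score each link by the fraction of replicas in which it reappears. The sketch below uses the standard distance d = sqrt(2(1 - rho)) and small synthetic series; it is a generic illustration, not the authors' exact pipeline:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def mst_links(series):
    """Kruskal MST on the correlation distance d_ij = sqrt(2(1 - rho_ij));
    returns the set of tree edges (i, j) with i < j."""
    k = len(series)
    edges = sorted(
        (math.sqrt(2 * (1 - pearson(series[i], series[j]))), i, j)
        for i in range(k) for j in range(i + 1, k))
    parent = list(range(k))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    links = set()
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            links.add((i, j))
    return links

def bootstrap_link_reliability(series, n_boot=200, rng=None):
    """Fraction of bootstrap replicas (resampled time points) in which
    each MST link reappears -- the bootstrap value of the link."""
    rng = rng or random.Random(3)
    t = len(series[0])
    counts = {}
    for _ in range(n_boot):
        idx = [rng.randrange(t) for _ in range(t)]
        boot = [[s[i] for i in idx] for s in series]
        for link in mst_links(boot):
            counts[link] = counts.get(link, 0) + 1
    return {e: c / n_boot for e, c in counts.items()}

# Four synthetic series: 0~1 and 2~3 are strongly correlated pairs.
rng = random.Random(9)
base1 = [rng.gauss(0, 1) for _ in range(100)]
base2 = [rng.gauss(0, 1) for _ in range(100)]
series = [base1,
          [b + rng.gauss(0, 0.2) for b in base1],
          base2,
          [b + rng.gauss(0, 0.2) for b in base2]]
rel = bootstrap_link_reliability(series)
print(rel.get((0, 1)), rel.get((2, 3)))
```

Strongly correlated pairs survive essentially every replica (bootstrap value near 1), while links bridging weakly related groups flicker across replicas, exactly the contrast the authors use to compare tree constructions.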
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
Engineer’s estimate reliability and statistical characteristics of bids
Fariborz M. Tehrani
2016-12-01
Full Text Available The objective of this report is to provide a methodology for examining bids and evaluating how well engineer's estimates capture the true cost of projects. This study reviews cost development for transportation projects along with two sources of uncertainty in a cost estimate: modeling errors and inherent variability. The sample projects are highway maintenance projects of similar scope, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for the sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of an engineer's estimate is best evaluated against the trimmed average of bids, excluding discordant bids.
Estimating the Reliability of Electronic Parts in High Radiation Fields
Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd
2008-01-01
Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission; the operational aspects of radiation dose are therefore a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design, reliability, and science achievement. Conventional tools and approaches provided past missions with conservative designs but without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.
HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS
Ronald L. Boring; David I. Gertman
2012-10-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
CONSIDERATIONS FOR THE TREATMENT OF COMPUTERIZED PROCEDURES IN HUMAN RELIABILITY ANALYSIS
Ronald L. Boring; David I. Gertman
2012-07-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Reliability of Bluetooth Technology for Travel Time Estimation
Araghi, Bahar Namaki; Olesen, Jonas Hammershøj; Krishnan, Rajesh
2015-01-01
.1 seconds), the size and shape of the sensor's detection zone, and the time span that the Bluetooth-enabled device is within the detection zone. The influences of size of Bluetooth sensor detection zones and Bluetooth discovery procedure on multiple detection events have been mentioned in previous research...
Hanine, M. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France)]. E-mail: Mounir.Hanine@univ-rouen.fr; Masmoudi, M. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France); Marcon, J. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan (France)
2004-12-15
In this paper, a reliable procedure allowing a fine as well as robust analysis of deep defects in semiconductors is detailed. In this procedure, where capacitance transients are considered multiexponential and corrupted by Gaussian noise, our new method of analysis, Levenberg-Marquardt deep level transient spectroscopy (LM-DLTS), is combined with two other high-resolution techniques: the Matrix Pencil method, which provides an approximation of the exponential components contained in the capacitance transients, and Prony's method, recently revised by Osborne, in order to set the initial parameters.
Waller, R.A.
1977-06-01
A Bayesian-Zero-Failure (BAZE) reliability demonstration testing procedure is presented. The method is developed for an exponential failure-time model and a gamma prior distribution on the failure-rate. A simple graphical approach using percentiles is used to fit the prior distribution. The procedure is given in an easily applied step-by-step form which does not require the use of a computer for its implementation. The BAZE approach is used to obtain sample test plans for selected components of nuclear reactor safety systems.
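The BAZE plan described above reduces to a one-line posterior computation: with an exponential failure-time model and a Gamma(shape a, rate b) prior on the failure rate, observing zero failures in T hours yields a Gamma(a, b + T) posterior. A hedged sketch of finding the required demonstration time (illustrative only; the original report uses a graphical percentile approach rather than code):

```python
from scipy import stats

def baze_test_time(a, b, lam_max, conf):
    """Smallest zero-failure test time T such that
    P(lambda <= lam_max | no failures in T) >= conf,
    for an exponential failure-time model and a
    Gamma(shape=a, rate=b) prior on the failure rate:
    the posterior after T failure-free hours is Gamma(a, rate=b + T)."""
    def posterior_conf(T):
        return stats.gamma.cdf(lam_max, a, scale=1.0 / (b + T))
    lo, hi = 0.0, 1.0
    while posterior_conf(hi) < conf:   # bracket the answer by doubling
        hi *= 2.0
        if hi > 1e15:
            raise ValueError("demonstration goal unreachable")
    for _ in range(100):               # bisection on the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if posterior_conf(mid) >= conf else (mid, hi)
    return hi
```

For a Gamma(1, 100) prior, demonstrating that the failure rate is at most 0.001/hr with 90% posterior probability requires about ln(10)/0.001 - 100, i.e. roughly 2203 failure-free hours.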
Steven E. Stemler
2004-03-01
Full Text Available This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, the different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: (1) consensus estimates, (2) consistency estimates, and (3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results.
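The distinction among the categories can be made concrete with two judges' ratings: consensus estimates ask how often the judges agree exactly, while consistency estimates ask whether they rank examinees the same way. A small illustration (editor's sketch, not Stemler's code):

```python
import numpy as np

def percent_agreement(r1, r2):
    # Consensus estimate: exact agreement rate between two judges.
    r1, r2 = np.asarray(r1), np.asarray(r2)
    return float(np.mean(r1 == r2))

def cohens_kappa(r1, r2):
    # Consensus estimate corrected for chance agreement.
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_obs = float(np.mean(r1 == r2))
    p_exp = float(sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories))
    return (p_obs - p_exp) / (1.0 - p_exp)

def consistency_estimate(r1, r2):
    # Consistency estimate: Pearson correlation; judges may differ
    # in leniency yet still rank examinees identically.
    return float(np.corrcoef(r1, r2)[0, 1])
```

For a strict judge and a lenient judge whose ratings differ by a constant, percent agreement is 0 while the consistency estimate is 1.0 — exactly the divergence between summaries the article warns about.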
Novel Software Reliability Estimation Model for Altering Paradigms of Software Engineering
Ritika Wason
2012-05-01
Full Text Available A number of different software engineering paradigms, such as Component-Based Software Engineering (CBSE), Autonomic Computing, Service-Oriented Computing (SOC), and Fault-Tolerant Computing, are currently being researched. These paradigms denote a shift from the mainstream object-oriented paradigm and are altering the way we view, design, develop, and exercise software. Although they indicate a major change in how we design and code software, we still rely on traditional reliability models for estimating the reliability of such systems. This paper analyzes the underlying characteristics of these paradigms and proposes a novel finite-automata-based reliability model as suitable for estimating the reliability of modern, complex, distributed, and critical software applications. We further outline the basic framework for an intelligent, automata-based reliability model that can be used for accurate estimation of the reliability of software systems at any point in the software life cycle.
Estimation on the Reliability of Farm Vehicle Based on Artificial Neural Network
WANG Jinwu
2008-01-01
As a product peculiar to China today, farm vehicles play an important role in the economic construction and development of the countryside, but their reliability remains low. In this paper, truncated tracking was used to address the low reliability of farm vehicles. Relevant reliability data were obtained by tracking a certain vehicle model and conducting reliability experiments. Data analysis revealed that the weakest part of the vehicle system was the engine assembly. Artificial Neural Network theory was employed to estimate a parameter of the reliability model based on a self-adaptive linear neural network, and the reliability function derived from this estimation can provide important theoretical references for the reliability reassignment, manufacture, and management of farm transport vehicles.
Software Reliability Estimation of the Reactor Protection System for Lungmen Nuclear Power Station
Wang, Jung Ya; Chou, Hwai Pwu [Tsing Hua National University, Hsinchu (China)
2014-08-15
In this paper, a software reliability estimation method is applied to the reactor protection system (RPS) of the Lungmen ABWR. To estimate the software failure probability, a flow network model of the software is constructed. The total number of executions and the execution time of each software statement are obtained, from which the reliability of each statement is derived. During testing, a single-test scenario follows a Bernoulli distribution and multiple-test scenarios follow a binomial distribution. The software reliability of the digital trip module (DTM) and the trip logic unit (TLU) of the Lungmen ABWR RPS can then be estimated. The results show that the RPS software has good reliability.
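The per-statement logic described above (Bernoulli per execution, binomial over repeated executions) suggests a simple point estimate plus a conservative bound; the sketch below is the editor's illustration of that style of calculation, not the Lungmen flow-network model itself:

```python
from math import prod

def zero_failure_lower_bound(n_exec, alpha=0.05):
    # Exact (Clopper-Pearson) lower confidence bound on per-execution
    # success probability when n_exec executions show zero failures:
    # solves p**n_exec = alpha for p.
    return alpha ** (1.0 / n_exec)

def system_reliability(stmt_stats):
    # stmt_stats: list of (executions, failures) per software statement.
    # Point estimate per statement: 1 - failures/executions; this sketch
    # takes the product over statements as the whole-run reliability.
    per_stmt = [1.0 - f / n for n, f in stmt_stats]
    return prod(per_stmt), per_stmt
```

For example, 59 failure-free executions of a statement support a 95% lower bound of about 0.95 on its per-execution reliability, which is the classic "rule of 59" demonstration-test result.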
Reliable estimation of orbit errors in spaceborne SAR interferometry
Bähr, H.; Hanssen, R.F.
2012-01-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of
Space Heating Load Estimation Procedure for CHP Systems sizing
Vocale, P.; Pagliarini, G.; Rainieri, S.
2015-11-01
Due to its environmental and energy benefits, Combined Heat and Power (CHP) certainly represents an important measure for improving the energy efficiency of buildings. Since the energy performance of CHP systems strongly depends on the fraction of cogenerated heat that is actually used to meet the building's thermal demand, building applications of CHP require knowledge of the space heating and cooling load profiles to optimize system efficiency. When the heating load profile is unknown or difficult to calculate with sufficient accuracy, as may occur for existing buildings, it can be estimated from cumulated energy uses by adopting the loads estimation procedure (h-LEP). With the aim of evaluating the useful fraction of the cogenerated heat under different operating conditions in terms of building characteristics, weather data, and system capacity, the h-LEP is here implemented with a single climate variable: the hourly average dry-bulb temperature. The proposed procedure has been validated against the TRNSYS simulation tool. The results, obtained by considering a building for hospital use, reveal that the useful fraction of the cogenerated heat can be estimated with an average accuracy of ±3% within the range of operating conditions considered in the present study.
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, primarily because of its unreliability. … A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, and beam profile. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
Calibration procedure of measuring system for vehicle wheel load estimation
Kluziewicz, M.; Maniowski, M.
2016-09-01
The calibration procedure of a wheel load measuring system is presented. The designed method allows estimation of selected wheel load components while the vehicle is in motion. The system is developed to determine friction forces between the tire and the road surface, based on measured internal reaction forces in the wheel suspension mechanism. Three strain gauge bridges and a three-component piezoelectric load cell are responsible for internal force measurement in the suspension components, and two wire sensors measure displacements. The external load is calculated via a kinematic model of the suspension mechanism implemented in the Matlab environment. In the described calibration procedure, internal reactions are measured on a test stand while the system is loaded by a force of known direction and value.
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear, randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the calculus of variations are proposed for selecting control signals: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples involving archetypal single-degree-of-freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis
Gustavo Rech
2013-03-01
Full Text Available Separation procedures in drug Distribution Centers (DCs) are manual activities prone to failures such as shipping exchanged, expired, or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable generating an index identifying the operators recommended to perform the procedures. The FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes, related to financial issues and to damage to the company's image, in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
Empirical Study of Travel Time Estimation and Reliability
Ruimin Li; Huajun Chai; Jin Tang
2013-01-01
This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...
Reliability of panoramic radiography in chronological age estimation
Ramanpal Singh Makkad
2013-01-01
Full Text Available Introduction: There is a strong relationship between the growth rates of bone and teeth, which can be utilized for age identification of an individual. Aims and Objective: The present study was designed to determine the relationship between dental age, age from dental panoramic radiography, skeletal age, and chronological age. Materials and Methods: The study included 270 individuals aged between 17 and 25 years from the out-patient department of New Horizon Dental College and Hospital, Sakri, Bilaspur, Chhattisgarh, India, scheduled for third molar surgery. Panoramic and hand-wrist radiographs were taken, and the films were digitally processed for visualization of the wisdom teeth. The age assessments were repeated at an interval of 4 weeks by a radiologist. The extracted wisdom teeth were placed in 10% formalin and examined by one dental surgeon to estimate age on the basis of root formation. Student's t-test was adopted for statistical analysis and the probability (P) value was calculated. Conclusion: Estimating the age of an individual by examining the extracted third molar was accurate. Age estimation through panoramic radiography was highly accurate in the upper right quadrant (mean = 0.72 and P = 0.077).
Reliability estimation for single dichotomous items based on Mokken's IRT model
Meijer, R R; Sijtsma, K; Molenaar, Ivo W
1995-01-01
Item reliability is of special interest for Mokken's nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of
Reliability estimation for single dichotomous items based on Mokken's IRT model
Meijer, Rob R.; Sijtsma, Klaas; Molenaar, Ivo W.
1995-01-01
Item reliability is of special interest for Mokken’s nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of
Coefficient Alpha as an Estimate of Test Reliability under Violation of Two Assumptions.
Zimmerman, Donald W.; And Others
1993-01-01
Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
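For reference, coefficient alpha is computed from the item scores as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a minimal sketch (editor's illustration, not the authors' simulation code):

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_persons, k_items) matrix of item scores.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

Under essential tau-equivalence (items = common true score + independent error), alpha recovers the theoretical reliability k*rho / (1 + (k-1)*rho); the simulated violations examined in the article push the estimate below (unequal loadings) or above (correlated errors) that value.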
Estimation of Internal Consistency Reliability When Test Parts Vary in Effective Length.
Feldt, Leonard S.; Charter, Richard A.
2003-01-01
Evaluating a test's reliability often requires dividing it into 3 or more unequal parts, which causes violation of the tau equivalence assumption of Cronbach's alpha. This article presents a criterion for abandoning alpha and an approach for computing a more appropriate estimate of reliability, the Gilmer-Feldt coefficient. (Author)
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
System dependability has become important for critical applications in recent years as technology is moving towards smaller dimensions. Achieving high dependability can be supported by reliability estimations during the operational life. In addition this requires a workflow for regularly monitoring
Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation
Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok;
2012-01-01
Wind power is ideally a renewable energy source with no fuel cost, but it risks reducing the reliability of the whole system because of the uncertainty of its output. If the reserve of the system is increased, the reliability of the system may improve, but the cost would also increase. Therefore, the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve when wind power is connected to the power system. As a case study, when wind power is connected to the power system of Korea, the effects...
An adaptive neuro fuzzy model for estimating the reliability of component-based software systems
Kirti Tyagi
2014-01-01
Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable, and a number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
Wang, Guoyu; Houkes, Zweitze; Ji, Guangrong; Zheng, Bing; Li, Xin
2003-01-01
This paper presents a new algorithm for estimation-based range image segmentation. Aiming at surface-primitive extraction from range data, we focus on the reliability of the primitive representation in the process of region estimation. We introduce an optimal description of surface primitives, by wh
Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis
Zhou, Z.-G.; Tang, P.; Zhou, M.
2016-06-01
Normally, the status of land cover is inherently dynamic, changing continuously on the temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations, and their timely detection and characterization is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results as "change/no change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
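Step (3) above, turning a forecast residual into a confidence level, can be sketched with a simplified stand-in for the BFAST history model (a trend plus a single seasonal harmonic; the model form, period, and threshold are the editor's assumptions):

```python
import numpy as np
from math import erf, sqrt

def fit_season_trend(t, y, period=12.0):
    # Least-squares fit of intercept + trend + one seasonal harmonic,
    # a simplified stand-in for the BFAST history model.
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2.0 * np.pi * t / period),
                         np.cos(2.0 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = (y - X @ beta).std(ddof=X.shape[1])
    return beta, sigma

def disturbance_confidence(t_new, y_new, beta, sigma, period=12.0):
    # Confidence level that y_new departs from the stable-history model:
    # CL = 2*Phi(|z|) - 1, where z is the standardized forecast residual.
    x = np.array([1.0, t_new,
                  np.sin(2.0 * np.pi * t_new / period),
                  np.cos(2.0 * np.pi * t_new / period)])
    z = (y_new - x @ beta) / sigma
    phi = 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))
    return 2.0 * phi - 1.0
```

An observation near the forecast yields a confidence level near 0 (no disturbance), while a large drop, such as flood-darkened reflectance, yields a level near 1.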
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A
2017-03-01
To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights were sufficient for wake times in the Australian and UK samples, but not in the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend that at least five weekday nights of sleep diary entries be collected when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks.
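The intraclass correlation coefficients reported above can be computed from the usual two-way ANOVA mean squares; a hedged sketch for the reliability of a k-night diary average (editor's illustration; the study's exact ICC variant is not specified here, and ICC(3,k) is used for concreteness):

```python
import numpy as np

def icc3k(data):
    # ICC(3,k): consistency reliability of the k-night average
    # (two-way model with nights treated as fixed), computed from
    # the standard ANOVA mean squares.
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # nights
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows
```

With a stable between-subject component and modest night-to-night noise, five nights of diary data already push the average-score ICC toward the "excellent" range, consistent with the study's recommendation.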
van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan
2013-01-01
The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
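The RCI computation referenced above is conventionally the Jacobson-Truax form; a minimal sketch (editor's illustration; the study's WISC-specific stability coefficients are not reproduced):

```python
from math import sqrt

def reliable_change_index(score_pre, score_post, sd, r_xx):
    # Jacobson-Truax RCI: change score divided by the standard error of
    # the difference, S_diff = sqrt(2) * SEM, with SEM = sd * sqrt(1 - r_xx),
    # where r_xx is the test's stability (test-retest) coefficient.
    sem = sd * sqrt(1.0 - r_xx)
    return (score_post - score_pre) / (sqrt(2.0) * sem)

def is_reliable_change(rci, z_crit=1.96):
    # Change counts as "reliable" when |RCI| exceeds the two-tailed z criterion.
    return abs(rci) >= z_crit
```

For an IQ-style scale (sd = 15) with stability 0.90, a 10-point gain gives RCI of about 1.49, below the 1.96 criterion, so it would not count as reliable cognitive change.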
A Data-Driven Reliability Estimation Approach for Phased-Mission Systems
Hua-Feng He
2014-01-01
Full Text Available We address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach that uses the condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from existing methods, which consider only the static scenario without using real-time information and aim to estimate reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and the real-time information of an individual PMS, we adopt a stochastic filtering model for the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.
Hardy, Alexandre; Loriaut, Philippe; Granger, Benjamin; Neffati, Ahmed; Massein, Audrey; Casabianca, Laurent; Pascal-Moussellard, Hugues; Gerometta, Antoine
2016-10-12
The arthroscopic Latarjet procedure has provided reliable results in the treatment of anterior shoulder instability. However, this procedure remains technically challenging and is associated with several complications. The morphology of the coracoid and the glenoid is inconsistent, and inadequate preparation of the coracoid and glenoid may lead to mismatch between their surfaces. Inadequate screw length and orientation are a major concern: screws that are too long can lead to suprascapular nerve injury or hardware irritation, whereas screws that are too short can lead to nonunion, fibrous union or migration of the bone block. The purpose of the study was to investigate the application of virtual surgical planning and digital technology in preoperative assessment and planning of the Latarjet procedure. Twelve patients planned for an arthroscopic Latarjet had a CT scan evaluation with multiplanar two-dimensional reconstruction performed before surgery. Interobserver and intraobserver reliability were evaluated. The shapes of the anterior rim of the glenoid and the undersurface of the coracoid were classified. Coracoid height was measured at 5 mm (C1) and 10 mm (C2) from the tip of the coracoid process, corresponding to the drilling zone. Measurements of the glenoid width were then taken in the axial view at 25 % (G1) and 50 % (G2) of the glenoid height with various α angles (5°, 10°, 15°, 20°, 25°, 30°) 7 mm from the anterior glenoid rim. The shapes of the undersurface of the coracoid and the anterior rim of the glenoid were noted during the surgical procedure. Post-operative measurements included the α angle. For the coracoid height measurements, intra- and interobserver reliability was substantial to almost perfect, with values ranging from ICC = 0.75 to 0.97. For the shape of the coracoid, concordance was perfect (ICC = 1) and almost perfect (ICC = 0.87 [0.33; 1]) for the intra- and interobserver reliabilities, respectively. Concerning the glenoid, concordance was…
Estimating Between-Person and Within-Person Subscore Reliability with Profile Analysis.
Bulut, Okan; Davison, Mark L; Rodriguez, Michael C
2017-01-01
Subscores are of increasing interest in educational and psychological testing due to their diagnostic function for evaluating examinees' strengths and weaknesses within particular domains of knowledge. Previous studies about the utility of subscores have mostly focused on the overall reliability of individual subscores and ignored the fact that subscores should be distinct and have added value over the total score. This study introduces a profile reliability approach that partitions the overall subscore reliability into within-person and between-person subscore reliability. The estimation of between-person reliability and within-person reliability coefficients is demonstrated using subscores from number-correct scoring, unidimensional and multidimensional item response theory scoring, and augmented scoring approaches via a simulation study and a real data study. The effects of various testing conditions, such as subtest length, correlations among subscores, and the number of subtests, are examined. Results indicate that there is a substantial trade-off between within-person and between-person reliability of subscores. Profile reliability coefficients can be useful in determining the extent to which subscores provide distinct and reliable information under various testing conditions.
Influence Factors on the Value of Reliability Estimators in Marketing Research
2011-01-01
This paper is a literature review whose conclusion leaves open many doors for future research. The first part reviews a series of qualitative and quantitative research characteristics. The second part briefly explains the reliability and validity of instruments used in qualitative and quantitative marketing research. The third part reviews a series of articles on estimators of reliability: their power, their strengths and their weaknesses. The conclusions of the...
Learning curve estimation in medical devices and procedures: hierarchical modeling.
Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S
2017-07-30
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians, and then modeled them within established methods that work with hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that GEE may have some advantages over generalized linear mixed effect models in ease of modeling and a substantially lower rate of model convergence failures. We then focused on GEE and performed a separate simulation varying the shape of the learning curve, as well as employing various smoothing methods for the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, GEE tended to perform better across multiple simulated scenarios in accurately modeling the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the smoothing methods differed negligibly from one another. This was an important application for understanding how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
Reliability estimation for multiunit nuclear and fossil-fired industrial energy systems
Sullivan, W. G.; Wilson, J. V.; Klepper, O. H.
1977-06-29
As petroleum-based fuels grow increasingly scarce and costly, nuclear energy may become an important alternative source of industrial energy. Initial applications would most likely include a mix of fossil-fired and nuclear sources of process energy. A means for determining the overall reliability of these mixed systems is a fundamental aspect of demonstrating their feasibility to potential industrial users. Reliability data from nuclear and fossil-fired plants are presented, and several methods of applying these data for calculating the reliability of reasonably complex industrial energy supply systems are given. Reliability estimates made under a number of simplifying assumptions indicate that multiple nuclear units or a combination of nuclear and fossil-fired plants could provide adequate reliability to meet industrial requirements for continuity of service.
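One of the simplifying calculations used for such multiunit systems is the k-out-of-n formula for identical, independent units. A minimal sketch (the unit count and reliability below are illustrative, not taken from the paper's plant data):

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent units
    (each with reliability r) are operating: a binomial tail sum."""
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

# e.g. an industrial load needing 2 of 3 units, each 90% reliable
print(round(k_of_n_reliability(2, 3, 0.90), 3))  # -> 0.972
```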
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.
ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
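The statistically optimal integration rule tested here weights each cue by its inverse variance, so the more reliable cue dominates and the fused estimate has lower variance than either cue alone. A minimal sketch (the cue means and variances below are illustrative):

```python
def fuse(mu_vis, var_vis, mu_vest, var_vest):
    """Minimum-variance (statistically optimal) fusion of two cues:
    weights proportional to reliability, i.e. inverse variance."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)   # fused variance is always smaller
    return mu, var

mu, var = fuse(1.2, 0.04, 1.0, 0.16)   # visual cue four times more reliable
print(round(mu, 2), round(var, 3))     # -> 1.16 0.032
```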
Implementation and Analysis of Probabilistic Methods for Gate-Level Circuit Reliability Estimation
WANG Zhen; JIANG Jianhui; YANG Guang
2007-01-01
The development of VLSI technology has dramatically improved the performance of integrated circuits. However, it brings new reliability challenges: integrated circuits have become more susceptible to soft errors. It is therefore imperative to study circuit reliability under soft errors. This paper implements three probabilistic methods (two-pass, error propagation probability, and probabilistic transfer matrix) for estimating gate-level circuit reliability on a PC. The functions and performance of these methods are compared in experiments using ISCAS85 and 74-series circuits.
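For a toy circuit, exact gate-level reliability under independent soft errors can be computed by enumerating fault patterns, in the spirit of the exhaustive methods the paper implements (the two-gate circuit and error probability below are invented for illustration):

```python
from itertools import product

def circuit(a, b, c, f1=0, f2=0):
    """y = OR(AND(a, b), c); f1/f2 flip each gate's output (soft error)."""
    g1 = (a & b) ^ f1
    return (g1 | c) ^ f2

def reliability(eps):
    """Exact reliability: P(faulty output == fault-free output), averaged
    over uniform inputs, each gate failing independently with prob. eps."""
    total = 0.0
    for a, b, c in product((0, 1), repeat=3):
        good = circuit(a, b, c)
        for f1, f2 in product((0, 1), repeat=2):
            p = (eps if f1 else 1 - eps) * (eps if f2 else 1 - eps)
            if circuit(a, b, c, f1, f2) == good:
                total += p / 8          # 8 equiprobable input vectors
    return total

print(round(reliability(0.05), 4))  # -> 0.9275
```

Enumeration is exponential in gate count, which is why the paper's methods propagate probabilities instead; for a sanity-check circuit this size it is exact.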
An Allocation Scheme for Estimating the Reliability of a Parallel-Series System
Zohra Benkamra
2012-01-01
We give a hybrid two-stage design that can be useful for estimating the reliability of a parallel-series system and/or, by duality, a series-parallel system. When the components' reliabilities are unknown, one can estimate them by sample means of Bernoulli observations. Let T be the total number of observations allowed for the system. When T is fixed, we show that the variance of the system reliability estimate can be lowered by allocating the sample size T at the component level. This leads to a discrete optimization problem that can be solved sequentially, assuming T is large enough. First-order asymptotic optimality is proved systematically and validated by Monte Carlo simulation.
Akatsuki Kimura
2015-03-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
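The gradient approach to minimizing the SSE can be sketched for the simplest case, a straight-line model fitted to toy data by plain gradient descent (the learning rate and data are invented; the article's biological models are far richer):

```python
def sse(params, data):
    """Sum of squared errors of the linear model y = a*t + b."""
    a, b = params
    return sum((y - (a * t + b)) ** 2 for t, y in data)

def grad(params, data):
    """Analytic gradient of the SSE with respect to (a, b)."""
    a, b = params
    da = sum(-2 * t * (y - (a * t + b)) for t, y in data)
    db = sum(-2 * (y - (a * t + b)) for t, y in data)
    return da, db

data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]  # toy data, roughly y = 2t + 1
a, b = 0.0, 0.0
for _ in range(5000):                 # small fixed step; converges for this problem
    da, db = grad((a, b), data)
    a -= 0.01 * da
    b -= 0.01 * db
print(round(a, 2), round(b, 2))       # -> 1.94 1.09 (the least-squares solution)
```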
Virtual estimates of fastening strength for pedicle screw implantation procedures
Linte, Cristian A.; Camp, Jon J.; Augustine, Kurt E.; Huddleston, Paul M.; Robb, Richard A.; Holmes, David R.
2014-03-01
Traditional 2D images provide limited use for accurate planning of spine interventions, mainly due to the complex 3D anatomy of the spine and the close proximity of nerve bundles and vascular structures that must be avoided during the procedure. Our previously developed clinician-friendly platform for spine surgery planning takes advantage of 3D pre-operative images to enable oblique reformatting and 3D rendering of individual or multiple vertebrae, interactive templating, and placement of virtual pedicle implants. Here we extend the capabilities of the planning platform and demonstrate how the virtual templating approach not only assists with the selection of the optimal implant size and trajectory, but can also be augmented to provide surrogate estimates of the fastening strength of the implanted pedicle screws based on implant dimension and bone mineral density of the displaced bone substrate. According to failure theories, each screw withstands a maximum holding power that is directly proportional to the screw diameter (D), the length of the in-bone segment of the screw (L), and the density (i.e., bone mineral density) of the pedicle body. In this application, voxel intensity is used as a surrogate measure of the bone mineral density (BMD) of the pedicle body segment displaced by the screw. We conducted an initial assessment of the developed platform using retrospective pre- and post-operative clinical 3D CT data from four patients who underwent spine surgery, consisting of a total of 26 pedicle screws implanted in the lumbar spine. The fastening strength of the planned implants was directly assessed by estimating the intensity-area product across the pedicle volume displaced by the virtually implanted screw. For post-operative assessment, each vertebra was registered to its homologous counterpart in the pre-operative image using an intensity-based rigid registration followed by manual adjustment. Following registration, the fastening strength was computed
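The surrogate described, holding power proportional to diameter, in-bone length, and the density of the displaced bone, reduces to a one-line product. A sketch with invented screw dimensions and voxel intensities (units and scaling are arbitrary here, so the result is only a relative score):

```python
def fastening_strength(diameter, length, voxel_intensities):
    """Surrogate holding power: screw diameter x in-bone length x mean
    voxel intensity (used here as a BMD surrogate) of the displaced bone."""
    bmd = sum(voxel_intensities) / len(voxel_intensities)
    return diameter * length * bmd

# hypothetical 6.5 mm x 45 mm screw through voxels with the given HU values
print(round(fastening_strength(6.5, 45.0, [310, 295, 330, 305]), 1))  # -> 90675.0 (arbitrary units)
```

Because the units are arbitrary, such a score is useful for ranking candidate trajectories against each other rather than predicting absolute pull-out force.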
Reliability estimation for 18Ni steel under low cycle fatigue using probabilistic technique
Lee, Ouk Sub; Choi, Hye Bin; Kim, Dong Hyeok; Kim, Hong Min [Inha Univ., Incheon (Korea, Republic of)
2008-07-01
In this study, the fatigue life of 18Ni maraging steel under both low- and high-cycle conditions is estimated using FORM (First Order Reliability Method). Fatigue models based on the strain approach, such as the Coffin-Manson fatigue theory and the Morrow mean stress method, are utilized. A limit state function including these two models was established. A case study for a material with the given properties was carried out to show the application of the proposed reliability estimation process. The effect of the mean stress of the varying fatigue loading on the failure probability has also been investigated.
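The strain-life relation the study builds on can be inverted numerically for life. A sketch using the Coffin-Manson equation with Morrow's mean-stress correction (the material constants below are illustrative placeholders, not the study's 18Ni data):

```python
import math

def strain_amplitude(n, e_mod, sf, ef, b, c, mean_stress=0.0):
    """Coffin-Manson with Morrow mean-stress correction; n = cycles,
    so 2n is the number of reversals."""
    return (sf - mean_stress) / e_mod * (2 * n) ** b + ef * (2 * n) ** c

def life(target, **props):
    """Cycles to failure for a given strain amplitude, by bisection in
    log space (strain amplitude decreases monotonically with life)."""
    lo, hi = 1.0, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid, **props) > target:
            lo = mid
        else:
            hi = mid
    return lo

# hypothetical maraging-steel-like properties (illustrative only)
n = life(0.01, e_mod=190e3, sf=2000.0, ef=0.8, b=-0.07, c=-0.6)
print(f"{n:.0f} cycles")
```

In a FORM setting these properties would be treated as random variables and the limit state evaluated at their design point; the deterministic inversion above is only the innermost kernel of that calculation.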
Estimation of Reliability and Cost Relationship for Architecture-based Software
Hui Guan; Wei-Ru Chen; Ning Huang; Hong-Ji Yang
2010-01-01
In this paper, we propose a new method to estimate the relationship between software reliability and software development cost, taking into account the complexity of developing the software system and the size of the software to be developed during the implementation phase of the software development life cycle. On the basis of the estimated relationship, a set of empirical data has been used to validate the correctness of the proposed model by comparing its results with other existing models. The outcome of this work shows that the proposed method is a relatively straightforward way of formulating the relationship between reliability and cost during the implementation phase.
Estimated Value of Service Reliability for Electric Utility Customers in the United States
Sullivan, M.J.; Mercurio, Matthew; Schellenberg, Josh
2009-06-01
Information on the value of reliable electricity service can be used to assess the economic efficiency of investments in generation, transmission and distribution systems, to strategically target investments to customer segments that receive the most benefit from system improvements, and to quantify the risk associated with different operating, planning and investment strategies. This paper summarizes research designed to provide estimates of the value of service reliability for electricity customers in the US. These estimates were obtained by analyzing the results from 28 customer value-of-service reliability studies conducted by 10 major US electric utilities over the 16-year period from 1989 to 2005. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-database describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical region within the US for industrial, commercial, and residential customers. Estimated interruption costs for different types of customers and durations are provided. Finally, additional research and development designed to expand the usefulness of this powerful database and analysis are suggested.
2014-01-01
The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used to construct the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is one whose obtained coverage probabilities attain the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
Donald D. Anderson
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (ICC 0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens;
We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python… as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8)…
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS
Z.-G. Zhou
2016-06-01
Normally, the status of land cover is inherently dynamic, changing continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused for example by forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near-real-time or online disturbance detection using satellite image time series. However, most present methods only label detection results as "change/no change", while few focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite images with estimation error less than 5% and overall accuracy up to 90%.
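Step (3), turning a forecast residual into a confidence level, can be sketched as a two-sided normal probability (the NDVI-style observation, forecast, and residual standard deviation below are invented):

```python
import math

def disturbance_confidence(observed, predicted, sigma):
    """Confidence level that an observation departs from the model
    forecast: two-sided standard-normal probability via erf."""
    z = abs(observed - predicted) / sigma
    return math.erf(z / math.sqrt(2))

# an index value two residual-SDs below the seasonal forecast
print(round(disturbance_confidence(0.52, 0.62, 0.05), 3))  # -> 0.954 (the familiar two-sigma level)
```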
The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity
Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.
2004-01-01
This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…
Chaimowicz, F. (Flávio); A. Burdorf (Alex)
2015-01-01
Background: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia
Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…
Boermans, M.A.; Kattenberg, M.A.C.
2011-01-01
We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates to what extent items measure the same underlying content when the items are combined into a scale or used for latent variable. Stata allows for testing
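Cronbach's alpha itself is straightforward to compute from item and total-score variances; a minimal pure-Python sketch with toy data (the paper's Stata workflow after factor analysis is not reproduced here):

```python
from statistics import variance

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    items: list of per-item score lists, respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# three items, five respondents (toy data)
items = [[2, 4, 3, 5, 1], [3, 5, 2, 4, 2], [2, 5, 3, 5, 1]]
print(round(cronbach_alpha(items), 2))  # -> 0.94
```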
Reliable dual tensor model estimation in single and crossing fibers based on Jeffreys prior
J. Yang (Jianfei); D.H.J. Poot; M.W.A. Caan (Matthan); Su, T. (Tanja); C.B. Majoie (Charles); L.J. van Vliet (Lucas); F. Vos (Frans)
2016-01-01
Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD).
Reliability and validity of procedure-based assessments in otolaryngology training.
Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S
2015-06-01
To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training, we performed a retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London, from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. The individual domain scores, overall calculated score (cS), and number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. In total, 3,152 otolaryngology PBAs from 46 otolaryngology trainees were analyzed. PBA reliability was high (Cronbach's α = 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs = +0.681 and +0.324, respectively). STs had higher cS and pS than CTs (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively). PBA is a reliable assessment of otolaryngology trainees' performance and progress at all levels and is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Comparison of Estimation Procedures for Multilevel AR(1) Models
Tanja Krone
2016-04-01
To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely maximum likelihood estimation (MLE) and Bayesian Markov chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible: Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation) of the parameter estimates. Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
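An AR(1) series and its lag-1 autocorrelation estimate, the quantity whose estimators the paper compares, can be sketched for a single individual (multilevel structure, MLE and Bayesian machinery omitted; parameters below are invented):

```python
import random

def simulate_ar1(n, phi, sigma=1.0, seed=42):
    """Simulate an AR(1) series: x_t = phi * x_{t-1} + Gaussian noise."""
    rng = random.Random(seed)
    x = [rng.gauss(0, sigma)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0, sigma))
    return x

def lag1_autocorr(x):
    """Moment estimator of the lag-1 autocorrelation."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

x = simulate_ar1(5000, phi=0.6)
print(round(lag1_autocorr(x), 2))   # close to the true phi = 0.6
```

With only 10 or 25 time points per person, as in the paper's design, this estimator is noticeably biased, which is precisely why pooling individuals in a multilevel model helps.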
Alaa F. Sheta
2016-04-01
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality business software product is reliability. Estimating software reliability early in the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm for estimating the SRGM's parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three datasets to conduct an experimental study showing the effectiveness of our approach.
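The exponential SRGM (EXPM) takes the form m(t) = a(1 - e^{-bt}); fitting its parameters by minimizing squared error can be sketched with a plain random search standing in for the paper's GWO algorithm (the toy failure data below are generated from known parameters, a = 100, b = 0.1):

```python
import math
import random

def expm(t, a, b):
    """Exponential SRGM: expected cumulative failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def fit(data, iters=20000, seed=1):
    """Random-search stand-in for a metaheuristic such as GWO:
    minimize squared error between model and observed failure counts."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        a, b = rng.uniform(1, 500), rng.uniform(1e-4, 1.0)
        err = sum((expm(t, a, b) - y) ** 2 for t, y in data)
        if err < best_err:
            best, best_err = (a, b), err
    return best

# weekly cumulative failure counts, synthesized from a = 100, b = 0.1
data = [(t, 100 * (1 - math.exp(-0.1 * t))) for t in range(1, 21)]
a, b = fit(data)
print(round(a), round(b, 2))   # should land near the generating parameters
```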
High-reliability microcontroller nerve stimulator for assistance in regional anaesthesia procedures.
Ferri, Carlos A; Quevedo, Antonio A F
2017-07-01
In recent decades, the use of nerve stimulators to aid regional anaesthesia has been shown to benefit the patient, since it allows better location of the nerve plexus, leading to correct positioning of the needle through which the anaesthetic is applied. However, most nerve stimulators available on the market for this purpose do not have the minimum recommended features of a good stimulator, which can pose risks to the patient. This study therefore aims to develop a device, using embedded electronics, that meets all the characteristics required for a successful blockade. The system is made of modules for generation and overall control of the current pulse and for the patient and user interfaces. The results show that the designed system meets the required specifications for a good and reliable nerve stimulator. Linearity proved satisfactory, ensuring accuracy in electrical current amplitude for a wide range of body impedances. Field tests were very successful. The anaesthesiologist who used the system reported that, in all cases, plexus blocking was achieved with higher quality, faster anaesthetic diffusion and no need for an additional dose when compared with the same procedure without the device.
Reliable and Efficient Procedure for Steady-State Analysis of Nonautonomous and Autonomous Systems
J. Dobes
2012-04-01
The majority of contemporary design tools still do not contain steady-state algorithms, especially for autonomous systems. This is caused mainly by the insufficient accuracy of the algorithms used for numerical integration, but also by the unreliability of the steady-state algorithms themselves. Therefore, in this paper, a very stable and efficient procedure for the numerical integration of nonlinear differential-algebraic systems is defined first. Afterwards, two improved methods are defined for finding the steady state, both of which use this integration algorithm in their iteration loops. The first is based on the idea of extrapolation, and the second utilizes nonstandard time-domain sensitivity analysis. The two steady-state algorithms are compared by analyses of a rectifier and a class-C amplifier, and the extrapolation algorithm is selected as the more reliable alternative. Finally, the extrapolation method, which cooperates naturally with the algorithm for solving differential-algebraic systems, is thoroughly tested on various electronic circuits: Van der Pol and Colpitts oscillators, a fragment of a large bipolar logic circuit, feedback and distributed microwave oscillators, and a power amplifier. The results confirm that the extrapolation method is faster than classical plain numerical integration, especially for larger circuits with complicated transients.
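The extrapolation idea, applied to the map that advances the circuit state by one period, can be illustrated with Aitken's delta-squared acceleration on a scalar fixed-point iteration. This is a toy stand-in for the paper's algorithm: the period map below is an assumed linear contraction, not a circuit integration.

```python
def period_map(x):
    # Stand-in for "integrate the circuit over one period": a linear
    # contraction whose fixed point (the steady state) is x* = 2.0.
    return 0.5 * x + 1.0

def aitken_steady_state(g, x0, tol=1e-12, max_cycles=100):
    """Accelerate convergence of x_{n+1} = g(x_n) with Aitken's delta-squared:
    x* ~= x - (g(x) - x)^2 / (g(g(x)) - 2*g(x) + x)."""
    x = x0
    for _ in range(max_cycles):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < tol:
            return x2  # iteration has effectively converged
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For a linear period map the very first extrapolation step lands exactly on the steady state, which is why the method beats plain transient integration when transients decay slowly.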
Estimation and enhancement of real-time software reliability through mutation analysis
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
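The correlated-failure effect that the abstract models with an extended stochastic Petri net can be illustrated, far more crudely, with a common-cause mixture. This sketch is not the paper's model; the failure probabilities and the 2-out-of-3 voting scheme are assumptions for illustration:

```python
import random

def simulate_nversion(p_indep, p_common, trials=100_000, seed=7):
    """Monte Carlo reliability of a 2-out-of-3 N-version system.

    On each run, a common-cause event (probability p_common) fails all
    versions at once; otherwise each version fails independently with
    probability p_indep.  Returns the estimated probability that a
    majority (at least 2 of 3 versions) succeeds.
    """
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        if rng.random() < p_common:
            continue  # the correlated failure takes out every version
        successes = sum(rng.random() >= p_indep for _ in range(3))
        if successes >= 2:
            ok += 1
    return ok / trials
```

For comparison, the analytic value is (1 - p_common) * ((1 - p)^3 + 3p(1 - p)^2); even a small common-cause term visibly erodes the benefit that full independence among versions would promise.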
Ways to increase the reliability of earthquake loss estimations in emergency mode
Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander
2016-04-01
The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can bring significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough and rapid information provided in emergency mode by "global systems" (i.e. systems operated without regard to where the earthquake has occurred) depends strongly on many factors related to the input data and simulation models used in such systems. The paper analyses the contribution of different factors to the total "error" of fatality estimation in emergency mode. Examples of four strong events in Nepal, Italy and China lead to the conclusion that the reliability of loss estimations is influenced first of all by the uncertainties in the determination of event parameters (coordinates, magnitude, source depth); this group of factors has the highest rating, with a degree of influence on the reliability of loss estimations of about 50%. Second comes the group of factors responsible for macroseismic field simulation, whose errors have a degree of influence of about 30%. Last comes the group of factors describing the built environment distribution and regional vulnerability functions, which contributes about 20% of the loss estimation error. Ways to minimize the influence of the different factors on the reliability of loss assessment in near real time are proposed. The first is to rate seismological surveys for different zones in an attempt to decrease the uncertainties in the input earthquake parameters determined in emergency mode. The second is to "calibrate" the "global systems" drawing advantage
Is visual estimation of passive range of motion in the pediatric lower limb valid and reliable?
Dagher Fernand
2009-10-01
Background: Visual estimation (VE) is an essential tool for the evaluation of range of motion, yet few papers have discussed its validity in pediatric orthopedics practice. The purpose of our study was to assess the validity and reliability of VE for passive range of motion (PROM) of children's lower limbs. Methods: Fifty typically developing children (100 lower limbs) were examined. Visual estimations of the PROMs of the hip (flexion, adduction, abduction, internal and external rotations), knee (flexion and popliteal angle) and ankle (dorsiflexion and plantarflexion) were made by a pediatric orthopaedic surgeon (POS) and a 5th-year resident in orthopaedics. A last-year medical student made goniometric measurements. Three weeks later, the same measurements were repeated to assess the reliability of visual estimation for each examiner. Results: Visual estimations of the POS were highly reliable for hip flexion, hip rotations and popliteal angle (ρc ≥ 0.8). Reliability was good for hip abduction, knee flexion, ankle dorsiflexion and plantarflexion (ρc ≥ 0.7) but poor for hip adduction (ρc = 0.5). Reproducibility for all PROMs was verified. The resident's VE showed high reliability (ρc ≥ 0.8) for hip flexion and popliteal angle. Good correlation was found for hip rotations and knee flexion (ρc ≥ 0.7). Poor results were obtained for ankle PROMs. Conclusion: The accuracy of VE of passive hip flexion and knee PROMs is high regardless of the examiner's experience. The same accuracy can be achieved for hip rotations and abduction whenever VE is performed by an experienced examiner. Goniometric evaluation is recommended for passive hip adduction and for ankle PROMs.
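The ρc statistic reported here is presumably Lin's concordance correlation coefficient, which measures agreement between paired measurements (here, visual estimate vs goniometer). A minimal implementation, using population moments as in Lin's original definition:

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements:
    rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sxx + syy + (mx - my) ** 2)
```

Unlike plain Pearson correlation, ρc penalizes a constant offset between raters: two perfectly correlated but shifted series score below 1.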
Effective dose estimation to patients and staff during urethrography procedures
Sulieman, A. [Prince Sattam bin Abdulaziz University, College of Applied Medical Sciences, Radiology and Medical Imaging Department, P. O- Box 422, Alkharj 11942 (Saudi Arabia); Barakat, H. [Neelain University, College of Science and Technology, Medical Physics Department, Khartoum (Sudan); Alkhorayef, M.; Babikir, E. [King Saud University, College of Applied Sciences, Radiological Sciences Department, P. O. Box 10219, Riyadh 11433 (Saudi Arabia); Dalton, A.; Bradley, D. [University of Surrey, Centre for Nuclear and Radiation Physics, Department of Physics, Surrey, GU2 7XH Guildford (United Kingdom)
2015-10-15
Medical-related radiation is the largest source of controllable radiation exposure to humans, accounting for more than 95% of the radiation exposure from man-made sources. Few data are available worldwide regarding patient and staff dose during the urological ascending urethrography (ASU) procedure. The purposes of this study are to measure the patient and staff entrance surface air kerma (ESAK) during the ASU procedure and to evaluate the effective doses. A total of 243 patients and 145 staff (urologists) were examined in three hospitals in Khartoum state. ESAKs were measured for patients and staff using thermoluminescent detectors (TLDs). Effective doses (E) were calculated using published conversion factors and methods recommended by the National Radiological Protection Board (NRPB). The mean ESAK for patients and staff was 7.79±6.7 mGy and 0.161±0.30 mGy per procedure, respectively. The mean effective dose was 1.21 mSv per procedure. The radiation dose in this study is comparable with previous studies, except at Hospital C. The high patient and staff exposure is evidently due to a lack of experience and of protective equipment. Interventional procedures remain operator dependent; therefore continuous training is crucial. (Author)
J. Gogoi
2012-01-01
This paper deals with the stress vs. strength problem incorporating multi-component systems, viz. standby redundancy. The models developed have been illustrated assuming that all the components in the system, for both stress and strength, are independent and follow different probability distributions, viz. Exponential, Gamma and Lindley. Four different conditions for stress and strength have been considered for this investigation. Under these assumptions the reliabilities of the system have been obtained with the help of the particular forms of density functions of the n-standby system when all stress-strengths are random variables. The expressions for the marginal reliabilities R(1), R(2), R(3), etc. have been derived based on the stress-strength models. The corresponding system reliabilities Rn have then been computed numerically and presented in tabular form for different stress-strength distributions with different values of their parameters. Here we consider n = 3 for estimating the system reliability R3.
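For the simplest member of this family, exponential stress X and strength Y give the closed form R = P(Y > X) = λ_X / (λ_X + λ_Y). The sketch below checks that against a Monte Carlo estimate for a crude n-standby model in which each standby unit is assumed to face an independent stress; this is a simplification of the paper's setting, intended only to show the mechanics:

```python
import random

def r_single(lam_stress, lam_strength):
    """Closed form P(strength > stress) for independent exponentials."""
    return lam_stress / (lam_stress + lam_strength)

def r_standby_mc(n, lam_stress, lam_strength, trials=200_000, seed=3):
    """Monte Carlo reliability of an n-unit cold-standby system, assuming
    each unit, when switched in, faces a fresh independent exponential
    stress (the system works if any one unit survives its stress)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        for _ in range(n):
            stress = rng.expovariate(lam_stress)
            strength = rng.expovariate(lam_strength)
            if strength > stress:
                ok += 1
                break
    return ok / trials
```

Under this independence assumption the n-standby reliability is simply 1 - (1 - R)^n, which the simulation reproduces.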
Nyman, R. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hegedus, D.; Tomic, B. [ENCONET Consulting GesmbH, Vienna (Austria); Lydell, B. [RSA Technologies, Vista, CA (United States)
1997-12-01
This report summarizes results and insights from the final phase of an R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide, covering the period from 1970 to the present. The work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogeneous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failure (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This multi-purpose framework, termed 'PFCA' (Pipe Failure Cause and Attribute), defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of the application. Together with raw data summaries, this analysis framework enables the development of prior and posterior pipe rupture probability distributions. The framework supports LOCA frequency estimation and steam line break frequency estimation, as well as the development of optimized in-service inspection strategies. 63 refs, 30 tabs, 22 figs.
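The prior-to-posterior updates that such a framework supports can be sketched with standard conjugate models: a Beta/binomial update for the conditional probability of rupture given failure, and a Gamma/Poisson update for the failure frequency. The distributions and counts below are illustrative, not taken from the report:

```python
def beta_posterior(prior_a, prior_b, ruptures, failures):
    """Conjugate Beta update for P(rupture | pipe failure).

    prior_a, prior_b: Beta prior pseudo-counts; `ruptures` observed out of
    `failures` recorded pipe failures in the service data.
    Returns the posterior (a, b) and the posterior mean.
    """
    a = prior_a + ruptures
    b = prior_b + (failures - ruptures)
    return a, b, a / (a + b)

def gamma_posterior(prior_shape, prior_rate, events, exposure_years):
    """Conjugate Gamma update for the pipe failure frequency (per year),
    given `events` failures over `exposure_years` of plant operation."""
    shape = prior_shape + events
    rate = prior_rate + exposure_years
    return shape, rate, shape / rate
```

Multiplying the posterior rupture-given-failure probability by the posterior failure frequency yields a rupture frequency, which is the quantity feeding LOCA and steam line break estimates in a PSA.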
王鹭; 张利; 王学芝
2015-01-01
As the central component of rotating machines, the performance reliability assessment and remaining useful lifetime prediction of bearings are of crucial importance in condition-based maintenance to reduce maintenance cost and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings is proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, as an identification model, was used to predict the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, a degradation factor was defined according to the features. A proportional hazard model was generated to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The positive results show the plausibility and effectiveness of the proposed approach in facilitating bearing reliability estimation and RUL prediction.
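The RUL step can be illustrated with a Weibull-baseline proportional hazard model whose expected residual life is obtained by numerically integrating the conditional survival function. All parameter values here are assumed for illustration, not taken from the paper:

```python
import math

def survival(t, z, eta=1000.0, k=2.0, beta=0.8):
    """S(t | z) for a Weibull-baseline proportional-hazard model:
    H(t | z) = (t/eta)^k * exp(beta * z), z a degradation covariate."""
    return math.exp(-((t / eta) ** k) * math.exp(beta * z))

def mean_residual_life(t_now, z, dt=1.0, horizon=20000.0, **kw):
    """Expected RUL by midpoint-rule integration of conditional survival:
    MRL(t) = (integral from t to infinity of S(u|z) du) / S(t|z)."""
    s_now = survival(t_now, z, **kw)
    total, u = 0.0, t_now
    while u < horizon:
        total += survival(u + dt / 2.0, z, **kw) * dt
        u += dt
    return total / s_now
```

A larger degradation covariate z scales the hazard up, so the predicted residual life shrinks monotonically as the extracted vibration features degrade.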
Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.
Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A
2015-01-01
The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13%; SWA = 18 ± 18%). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE, similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
Lane, Ginny G.; White, Amy E.; Henson, Robin K.
2002-01-01
Conducted a reliability generalizability study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…
Bahman Tarvirdizade
2014-01-01
We consider the estimation of stress-strength reliability based on lower record values when X and Y are independently but not identically inverse Rayleigh distributed random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as the Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
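For independent inverse Rayleigh variables X and Y (distribution function exp(-θ/x²)), the reliability R = P(Y < X) has the closed form θ_X / (θ_X + θ_Y), which is easy to verify by simulation. This is a side computation showing the target quantity, not the record-value inference developed in the paper:

```python
import math
import random

def inv_rayleigh_sample(theta, rng):
    """Inverse-transform sampling: F(x) = exp(-theta/x^2)
    gives x = sqrt(theta / (-ln U)) for U ~ Uniform(0, 1)."""
    return math.sqrt(theta / -math.log(rng.random()))

def r_closed_form(theta_x, theta_y):
    """R = P(Y < X) for independent inverse Rayleigh X and Y."""
    return theta_x / (theta_x + theta_y)

def r_monte_carlo(theta_x, theta_y, trials=200_000, seed=5):
    """Direct Monte Carlo estimate of P(Y < X) for a sanity check."""
    rng = random.Random(seed)
    return sum(
        inv_rayleigh_sample(theta_y, rng) < inv_rayleigh_sample(theta_x, rng)
        for _ in range(trials)
    ) / trials
```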
Dhatt, Sharmistha
2016-01-01
The reliability of kinetic parameters is crucial in understanding enzyme kinetics within cellular systems. The present study suggests a few cautions that need introspection in the estimation of parameters like K(M), V(max) and K(I) using Lineweaver-Burk plots. The quality of IC(50) also needs a thorough reinvestigation because of its direct link with the K(I) and K(M) values. Inhibition kinetics under both steady-state and non-steady-state conditions are studied, and the errors in the estimated parameters are compared against actual values to settle the question of their adequacy.
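The double-reciprocal estimation that the study scrutinises works as follows. On noise-free Michaelis-Menten data the fit is exact; the distortions the abstract warns about appear only once measurement error is amplified by taking reciprocals of small rates:

```python
def lineweaver_burk(S, v):
    """Estimate K_M and V_max from a Lineweaver-Burk (double-reciprocal)
    fit: 1/v = (K_M/V_max) * (1/S) + 1/V_max, via ordinary least squares."""
    x = [1.0 / s for s in S]
    y = [1.0 / vi for vi in v]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    vmax = 1.0 / intercept          # intercept is 1/V_max
    km = slope * vmax               # slope is K_M/V_max
    return km, vmax
```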
Fang, Chih-Chiang; Yeh, Chun-Wu
2016-09-01
The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. It provides helpful information to software developers and testers when undertaking software development and software quality control. However, the explanation of the variance estimation of software fault detection is not transparent in previous studies, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data-sets to show its flexibility.
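As a much simpler stand-in for the SDE-based interval, one can treat the fault count as approximately Poisson around the delayed S-shaped mean value function m(t) = a(1 - (1 + bt)e^(-bt)) and use m ± z√m. This is a crude normal/Poisson approximation, not the paper's construction:

```python
import math

def dssm_mean(t, a, b):
    """Delayed S-shaped SRGM mean value function
    m(t) = a * (1 - (1 + b*t) * e^(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def poisson_band(m, z=1.96):
    """Approximate confidence band treating the fault count N(t) as
    Poisson with mean m(t): returns (m - z*sqrt(m), m + z*sqrt(m)),
    floored at zero since counts cannot be negative."""
    half = z * math.sqrt(m)
    return max(0.0, m - half), m + half
```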
Zaporozhanov V.A.
2012-12-01
In sports-pedagogical practice, the objective estimation of the potential of trainees already at the initial stages of long-term preparation is considered a topical issue. Appropriate quantitative information allows preparation to be individualized according to the requirements of the managed training process. Research purpose: to demonstrate, logically and metrically, the expedience of a metrical method for calculating the reliability of control measurement results used for diagnostics of psychophysical fitness and for the prognosis of skill growth in a selected sport. Material and methods: the results of control measurements on four indices of psychophysical preparedness, together with expert estimations of fitness, were analysed for 24 trainees of a children's gymnastics school. The results of initial and final examination of the gymnasts on the same control tests were processed by methods of mathematical statistics. Metrical estimations of measurement reliability (the stability, consistency and informativeness of the control information) were computed for current diagnostics and prognosis of the sporting potential of those examined. Results: the expedience of using metrical calculation of a complex estimation of the psychophysical state of trainees is metrologically grounded. Conclusions: the results confirm the expedience of calculating a complex estimation of the psychophysical features of trainees for diagnostics of fitness in the selected sport and for skill prognosis at subsequent stages of preparation.
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.
Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina
2015-01-01
Due to their prolonged use, wind turbines must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output as a function of wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of the power produced at different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds.
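The triangular-distribution Monte Carlo step can be sketched as follows. The power curve and the distribution parameters below are invented for illustration; they are not the characteristics of the turbine studied in the paper:

```python
import random

def power_kw(v, cut_in=2.0, rated_v=10.0, rated_kw=3.0, cut_out=25.0):
    """Toy power curve for a small vertical-axis turbine (illustrative
    numbers only): cubic rise between cut-in and rated wind speed."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3

def daily_energy_samples(n=10_000, low=0.0, mode=4.0, high=18.0, seed=11):
    """Monte Carlo daily energy (kWh), wind speed ~ Triangular(low, mode, high),
    crudely assuming the sampled speed holds for the whole 24-hour day."""
    rng = random.Random(seed)
    return [24.0 * power_kw(rng.triangular(low, high, mode)) for _ in range(n)]

def prob_at_least(samples, kwh):
    """Empirical probability (read off the ogive) of producing >= kwh in a day."""
    return sum(s >= kwh for s in samples) / len(samples)
```

Sorting the samples and plotting the cumulative fraction gives exactly the ogive diagram the abstract interprets: the probability of reaching a given daily or annual output.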
On the efficiency and reliability of cluster mass estimates based on member galaxies
Biviano, A; Diaferio, A; Dolag, K; Girardi, M; Murante, G
2006-01-01
We study the efficiency and reliability of cluster mass estimators that are based on the projected phase-space distribution of galaxies in a cluster region. To this aim, we analyse a data-set of 62 clusters extracted from a concordance LCDM cosmological hydrodynamical simulation. Galaxies (or Dark Matter particles) are first selected in cylinders of given radius (from 0.5 to 1.5 Mpc/h) and ~200 Mpc/h length. Cluster members are then identified by applying a suitable interloper removal algorithm. Two cluster mass estimators are considered: the virial mass estimator (Mvir), and a mass estimator (Msigma) based entirely on the cluster velocity dispersion estimate. Mvir overestimates the true mass by ~10%, and Msigma underestimates the true mass by ~15%, on average, for sample sizes of > 60 cluster members. For smaller sample sizes, the bias of the virial mass estimator substantially increases, while the Msigma estimator becomes essentially unbiased. The dispersion of both mass estimates increases by a factor ~2 a...
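A bare-bones version of the member-selection and virial-mass steps looks like the following. The paper's interloper-removal algorithm is more sophisticated than the sigma-clipping used here, and the prefactor is the textbook 3π/2 projected virial scaling, so treat this purely as an illustration:

```python
import math
import statistics

G = 4.30091e-9  # gravitational constant in Mpc * (km/s)^2 / Msun

def clipped_dispersion(velocities, nsig=3.0, iters=10):
    """Iterative 3-sigma clipping as a crude interloper-removal step,
    then the line-of-sight velocity dispersion of the survivors.
    Returns (dispersion in km/s, number of retained members)."""
    v = list(velocities)
    for _ in range(iters):
        mu = statistics.mean(v)
        sd = statistics.stdev(v)
        kept = [x for x in v if abs(x - mu) <= nsig * sd]
        if len(kept) == len(v):
            break
        v = kept
    return statistics.stdev(v), len(v)

def virial_mass(sigma_los, r_proj_mpc):
    """Simple virial-theorem scaling M ~ (3*pi/2) * sigma^2 * R / G,
    with sigma in km/s and the projected radius in Mpc; mass in Msun."""
    return (3.0 * math.pi / 2.0) * sigma_los ** 2 * r_proj_mpc / G
```

The paper's finding that Mvir overestimates and Msigma underestimates the true mass reflects exactly the biases such simple estimators inherit from interlopers and small member samples.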
A Review of Different Estimation Procedures in the Rasch Model. Research Report 87-6.
Engelen, R. J. H.
A short review of the different estimation procedures that have been used in association with the Rasch model is provided. These procedures include joint, conditional, and marginal maximum likelihood methods; Bayesian methods; minimum chi-square methods; and paired comparison estimation. A comparison of the marginal maximum likelihood estimation…
Ong, M M; Kihara, R; Zentler, J M; Kreitzer, B R; DeHope, W J
2007-06-27
At Lawrence Livermore National Laboratory (LLNL), our flash X-ray accelerator (FXR) is used on multi-million dollar hydrodynamic experiments. Because of the importance of the radiographs, FXR must be ultra-reliable. Flash linear accelerators that can generate a 3 kA beam at 18 MeV are very complex. They have thousands, if not millions, of critical components that could prevent the machine from performing correctly. For the last five years, we have quantified and are tracking component failures. From this data, we have determined that the reliability of the high-voltage gas-switches that initiate the pulses, which drive the accelerator cells, dominates the statistics. The failure mode is a single-switch pre-fire that reduces the energy of the beam and degrades the X-ray spot-size. The unfortunate result is a lower resolution radiograph. FXR is a production machine that allows only a modest number of pulses for testing. Therefore, reliability switch testing that requires thousands of shots is performed on our test stand. Study of representative switches has produced pre-fire statistical information and probability distribution curves. This information is applied to FXR to develop test procedures and determine individual switch reliability using a minimal number of accelerator pulses.
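The statistics involved are binomial: with a per-pulse pre-fire probability p per switch and n independent switches, a clean accelerator pulse has probability (1 - p)^n. A sketch with a 95% Wilson interval for p; the counts and switch number used in the check are illustrative, not FXR data:

```python
import math

def prefire_rate(prefires, pulses):
    """Per-pulse pre-fire probability estimate from test-stand data,
    with a 95% Wilson score interval (better than the normal interval
    for rare events).  Returns (point estimate, lower, upper)."""
    z = 1.959964
    p = prefires / pulses
    denom = 1 + z * z / pulses
    center = (p + z * z / (2 * pulses)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / pulses + z * z / (4 * pulses ** 2))
    return p, max(0.0, center - half), min(1.0, center + half)

def machine_reliability(p_switch, n_switches):
    """Probability that none of n independent switches pre-fires on a pulse."""
    return (1.0 - p_switch) ** n_switches
```

This is why test-stand statistics matter: even a small per-switch pre-fire rate compounds across many switches into a noticeable per-pulse failure probability for the whole accelerator.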
Picó Jesús
2007-10-01
Background: An indirect approach is usually used to estimate the metabolic fluxes of an organism: couple the available measurements with known biological constraints (e.g. stoichiometry). Typically this estimation is done from a static point of view, so the fluxes obtained are only valid while the environmental conditions and the cell state remain stable. However, estimating the evolution over time of the metabolic fluxes is valuable for investigating the dynamic behaviour of an organism and also for monitoring industrial processes. Although Metabolic Flux Analysis can be applied successively with this aim, this approach has two drawbacks: (i) sometimes it cannot be used because there is a lack of measurable fluxes, and (ii) the uncertainty of experimental measurements cannot be considered. Flux Balance Analysis could be used instead, but the assumption of optimal behaviour of the organism brings other difficulties. Results: We propose a procedure to estimate the evolution of the metabolic fluxes that is structured as follows: (1) measure the concentrations of extracellular species and biomass, (2) convert these data to measured fluxes and (3) estimate the non-measured fluxes using the Flux Spectrum Approach, a variant of Metabolic Flux Analysis that overcomes the difficulties mentioned above without assuming optimal behaviour. We apply the procedure to a real problem taken from the literature: estimating the metabolic fluxes during a cultivation of CHO cells in batch mode. We show that it provides a reliable and rich estimation of the non-measured fluxes, thanks to considering measurement uncertainty and reversibility constraints. We also demonstrate that this procedure can estimate the non-measured fluxes even when there is a lack of measurable species. In addition, it offers a new method to deal with inconsistency. Conclusion: This work introduces a procedure to estimate time-varying metabolic fluxes that copes with the insufficiency of
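Step (3), stripped of the flux-spectrum machinery, reduces to solving the stoichiometric balance S·v = 0 for the non-measured fluxes given the measured ones. A toy sketch (the network, flux indices and measured values are invented for illustration):

```python
import numpy as np

# Stoichiometric matrix for a toy network (rows: internal metabolites B, C;
# columns: fluxes v1..v4).  Reactions: A -v1-> B, B -v2-> C, B -v3-> D, C -v4-> E
S = np.array([
    [1.0, -1.0, -1.0,  0.0],   # balance on B
    [0.0,  1.0,  0.0, -1.0],   # balance on C
])

measured = {0: 10.0, 2: 4.0}   # v1 and v3 assumed measured

def estimate_fluxes(S, measured):
    """Least-squares solution of S v = 0 given a subset of measured fluxes:
    move the measured columns to the right-hand side and solve for the rest."""
    n = S.shape[1]
    free = [j for j in range(n) if j not in measured]
    rhs = -sum(S[:, j] * val for j, val in measured.items())
    sol, *_ = np.linalg.lstsq(S[:, free], rhs, rcond=None)
    v = np.zeros(n)
    for j, val in measured.items():
        v[j] = val
    v[free] = sol
    return v
```

The Flux Spectrum Approach generalises this by returning intervals of feasible fluxes instead of a single least-squares point, which is how it carries measurement uncertainty and reversibility constraints through the estimation.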
Peng, Samuel S.; And Others
1982-01-01
Two major analytic procedures for estimating the number of children with limited English proficiency (discriminant analysis and probabilistic procedures) are described and strengths and weaknesses discussed. Synthetic cohort analysis (an expanded version of the probabilistic approach) is described and recommended as an alternate procedure for…
Statistical Procedures for Estimating and Detecting Climate Changes
Anonymous
2006-01-01
This paper provides a concise description of the philosophy, mathematics, and algorithms for estimating, detecting, and attributing climate changes. The estimation follows the spectral method using empirical orthogonal functions, also called the method of reduced-space optimal averaging. The detection follows the linear regression method, which can be found in most textbooks on multivariate statistical techniques. The detection algorithms are described using the space-time approach to avoid the non-stationarity problem. The paper includes (1) the optimal averaging method for minimizing the uncertainties of the global change estimate, (2) the weighted least squares detection of both single and multiple signals, (3) numerical examples, and (4) the limitations of the linear optimal averaging and detection methods.
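Item (2), the weighted least squares detection of signals, is the generalized least squares estimator: given fingerprint patterns X, observations y and noise covariance C, the signal amplitudes are beta = (Xᵀ C⁻¹ X)⁻¹ Xᵀ C⁻¹ y. A minimal sketch:

```python
import numpy as np

def gls_detect(X, y, C):
    """Generalized (weighted) least-squares signal amplitudes:
    beta = (X^T C^-1 X)^-1 X^T C^-1 y.
    Also returns the covariance of beta, whose diagonal gives the
    variances used to test each fingerprint's significance."""
    Ci = np.linalg.inv(C)
    cov = np.linalg.inv(X.T @ Ci @ X)
    beta = cov @ X.T @ Ci @ y
    return beta, cov
```

An amplitude significantly greater than zero (e.g. beta / sqrt(cov diagonal) exceeding the normal critical value) is what "detection" of that signal means in this framework.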
Performance of a procedure for yield estimation in fruit orchards
Aravena Zamora, Felipe; Potin, Camila; Wulfsohn, Dvora-Laio
for fruit yield estimation. In the Spring of 2009 we estimated the total number of fruit in several rows in each of 14 commercial fruit orchards growing apple, kiwi, and table grapes in central Chile. Survey times were 10-100 minutes for apples, 85 minutes for table grapes, and up to 150 minutes for kiwis. At harvest in the Fall, the fruit were counted to obtain the true yield. Yields ranged from lows of several thousand (grape bunches) to highs of more than 40 thousand fruit (apples, kiwis). In 11 orchards, true errors less than 10% were obtained. In two highly variable orchards we obtained absolute true
An assessment of the BEST procedure to estimate the soil water retention curve
Castellini, Mirko; Di Prima, Simone; Iovino, Massimo
2017-04-01
The Beerkan Estimation of Soil Transfer parameters (BEST) procedure represents a very attractive method to accurately and quickly obtain a complete hydraulic characterization of the soil (Lassabatère et al., 2006). However, further investigations are needed to check the prediction reliability of the soil water retention curve (Castellini et al., 2016). Four soils with different physical properties (texture, bulk density, porosity and stoniness) were considered in this investigation. Sites of measurement were located at Palermo University (PAL site) and Villabate (VIL site) in Sicily, Arborea (ARB site) in Sardinia and Foggia (FOG site) in Apulia. For a given site, the BEST procedure was applied and the water retention curve was estimated using the available BEST algorithms (i.e., slope, intercept and steady), with the reference values of the infiltration constants (β=0.6 and γ=0.75). The water retention curves estimated by BEST were then compared with those obtained in the laboratory by the evaporation method (Wind, 1968). About ten experiments were carried out with both methods. A sensitivity analysis of the constants β and γ within their feasible range of variability was also performed. References: Castellini, M., et al., 2016. doi:10.1016/j.geoderma.2015.08.006. Lassabatère, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments-BEST. Soil Sci. Soc. Am. J. 70:521-532. doi:10.2136/sssaj2005.0026. Smettem, K.R.J., Parlange, J.Y., Ross, P.J., Haverkamp, R., 1994. Three-dimensional analysis of infiltration from the disc infiltrometer: 1. A capillary-based theory. Water Resour. Res. 30, 2925-2929. doi:10.1029/94WR01787. Wind, G.P., 1968. Capillary conductivity data estimated by a simple method. In: Water in the Unsaturated Zone, Proceedings of Wageningen Symposium, June 1966, Vol. 1 (eds P.E. Rijtema & H. Wassink), pp. 181-191, IASAH, Gentbrugge, Belgium.
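The retention curve that BEST estimates is the van Genuchten function under the Burdine condition m = 1 - 2/n, so evaluating it once the parameters are known is a one-liner (the parameter values used in the check below are illustrative, not from the study):

```python
def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve with the Burdine condition
    m = 1 - 2/n, as used in BEST.  h is the suction head (same units
    as 1/alpha); returns the volumetric water content theta(h)."""
    m = 1.0 - 2.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m
```

Comparing this curve, with BEST-fitted (theta_r, theta_s, alpha, n), against the evaporation-method points is exactly the check the abstract describes.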
RELAP5/MOD3.3 Best Estimate Analyses for Human Reliability Analysis
Andrej Prošek
2010-01-01
To estimate the success criteria time windows of operator actions, a conservative approach was used in conventional probabilistic safety assessment (PSA). The current PSA standard recommends the use of best-estimate codes. The purpose of the study was to estimate the operator action success criteria time windows in scenarios in which human actions supplement safety system actuations, as needed for an updated human reliability analysis (HRA). The calculations used the RELAP5/MOD3.3 best-estimate thermal-hydraulic computer code and a qualified RELAP5 input model representing a Westinghouse two-loop pressurized water reactor. The results of the deterministic safety analysis were examined to determine the latest time at which the operator action can be performed while still satisfying the safety criteria. The results showed that uncertainty analysis of the realistic calculation is in general not needed for human reliability analysis when additional time is available and/or the event is not a significant contributor to the risk.
Singh, A.; Deeds, N.; Kelley, V.
2012-12-01
Estimating how much water is being used by various water users is key to effective management and optimal utilization of groundwater resources. This is especially true for aquifers like the Ogallala that are severely stressed and have displayed depleting trends over many years. The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Water users within the 16 counties that comprise the HPWD draw from the Ogallala extensively. The HPWD has recently proposed flow meters as well as various 'alternative methods' for water users to report water usage. Alternative methods include using a) site-specific energy conversion factors to convert the total amount of energy used (for pumping stations) to water pumped, b) reporting nozzle package (on center pivot irrigation systems) specifications and hours of usage, and c) reporting concentrated animal feeding operations (CAFOs). The focus of this project was to evaluate the reliability and effectiveness of each of these water use estimation techniques for regulatory purposes. The reliability and effectiveness of direct flow-metering devices was also addressed. Findings indicate that due to site-specific variability and hydrogeologic heterogeneity, alternative methods for estimating water use can have significant uncertainties associated with water use estimates. The impact of these uncertainties on overall water usage, conservation, and management was also evaluated. The findings were communicated to the Stakeholder Advisory Group and the Water Conservation District with guidelines and recommendations on how best to implement the various techniques.
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement. Intelligent investment strategies therefore require the ability to quantify energy savings by comparing actual energy used to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties of deciding how much data are needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use from short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates, which can provide actionable decision-making information for investing in energy conservation measures.
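The cross-validation idea above can be sketched in a few lines: fit a baseline model on part of the meter data, predict the held-out part, and take the spread of the out-of-sample residuals as the uncertainty of the baseline. A minimal sketch on synthetic data (the temperature-driven linear baseline and all numbers are illustrative assumptions, not the paper's model):

```python
import random
import math

random.seed(0)

# Synthetic meter data: energy use roughly linear in outdoor temperature
# (an illustrative stand-in for real short-interval meter data).
temps = [random.uniform(0.0, 30.0) for _ in range(200)]
energy = [50.0 + 2.0 * t + random.gauss(0.0, 5.0) for t in temps]

def fit_ols(x, y):
    """Simple-linear-regression fit: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

def cv_residuals(x, y, k=5):
    """k-fold cross-validation: out-of-sample residuals quantify
    the uncertainty of the fitted baseline."""
    idx = list(range(len(x)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    resid = []
    for f in folds:
        hold = set(f)
        xtr = [x[i] for i in idx if i not in hold]
        ytr = [y[i] for i in idx if i not in hold]
        a, b = fit_ols(xtr, ytr)
        resid.extend(y[i] - (a + b * x[i]) for i in f)
    return resid

res = cv_residuals(temps, energy)
sigma = math.sqrt(sum(r * r for r in res) / len(res))
print(round(sigma, 1))  # should recover roughly the true noise level of 5
```

The cross-validated sigma can then be propagated into an uncertainty band on the predicted savings, which is the decision-making quantity the paper argues for.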
A Penalized Likelihood Approach to Parameter Estimation with Integral Reliability Constraints
Barry Smith
2015-06-01
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent, and possibly small, samples of measurements on strength and breakdown pressures (stress). The goal of the researcher is to use the measurements to obtain inference on reliability, which is the probability that stress will exceed strength. This paper addresses the case where reliability is expressed in terms of an integral that has no closed-form solution and where the number of observed values of stress and strength is small. We find that the Lagrange approach to estimating the constrained likelihood, necessary for inference, often performs poorly. We introduce a penalized likelihood method, which appears to always work well. We use third-order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.
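As a toy illustration of the stress-strength setting, the probability P(X > Y) can be approximated by Monte Carlo for exponentiated exponential variables. This sketch assumes a common rate parameter, a special case where a closed form happens to exist and serves as a check; the paper's case of interest has no closed form:

```python
import random
import math

random.seed(1)

def rexpexp(alpha, lam):
    """Draw from the exponentiated exponential distribution via the
    inverse CDF: F(x) = (1 - exp(-lam*x))**alpha."""
    u = random.random()
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam

a_strength, a_stress, lam = 3.0, 1.5, 1.0
n = 200_000
hits = sum(rexpexp(a_strength, lam) > rexpexp(a_stress, lam) for _ in range(n))
r_mc = hits / n

# With a common rate parameter the integral reduces to
# alpha1 / (alpha1 + alpha2), handy as a check on the simulation:
r_exact = a_strength / (a_strength + a_stress)  # = 2/3
print(round(r_mc, 3), round(r_exact, 3))
```

When the rates differ, only the Monte Carlo (or numerical integration of the same integrand) remains available, which is exactly the situation the penalized likelihood method targets.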
Anupam Pathak
2014-11-01
Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage is its ability to describe lifetime events better than other distributions. The uniformly minimum variance unbiased (UMVU) and maximum likelihood (ML) estimation methods are two ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the UMVU and ML estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
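A minimal sketch of plug-in (ML-type) reliability estimation for the exponentiated Rayleigh distribution, under the simplifying assumption that the rate parameter is known, in which case the shape MLE has a closed form. The parameterization F(x) = (1 - exp(-λx²))^α is one common convention and is assumed here:

```python
import random
import math

random.seed(2)
alpha_true, lam = 2.0, 1.0

def rexp_rayleigh(alpha, lam):
    """Inverse-CDF draw for F(x) = (1 - exp(-lam*x**2))**alpha."""
    u = random.random()
    return math.sqrt(-math.log(1.0 - u ** (1.0 / alpha)) / lam)

data = [rexp_rayleigh(alpha_true, lam) for _ in range(5000)]

# With lam known, the MLE of the shape parameter is closed-form:
# alpha_hat = -n / sum(log F_base(x_i)), where F_base = 1 - exp(-lam*x^2).
alpha_hat = -len(data) / sum(math.log(1.0 - math.exp(-lam * x * x)) for x in data)

def reliability(t, alpha, lam):
    """R(t) = P(X > t) = 1 - F(t)."""
    return 1.0 - (1.0 - math.exp(-lam * t * t)) ** alpha

t = 1.0
r_hat = reliability(t, alpha_hat, lam)
r_true = reliability(t, alpha_true, lam)
print(round(r_hat, 3), round(r_true, 3))
```

The UMVU construction the paper studies replaces this plug-in step with an unbiased estimator of the same parametric function; the simulation comparison of bias and MSE is then run between the two.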
Prato, Carlo Giacomo; Rasmussen, Thomas Kjær; Nielsen, Otto Anker
2014-01-01
both congestion and reliability terms. Results illustrated that the value of time and the value of congestion were significantly higher in the peak period because of possible higher penalties for drivers being late and consequently possible higher time pressure. Moreover, results showed that the marginal rate of substitution between travel time reliability and total travel time did not vary across periods and traffic conditions, with the obvious caveat that the absolute values were significantly higher for the peak period. Last, results showed the immense potential of exploiting the growing availability of large amounts of data from cheap and enhanced technology to obtain estimates of the monetary value of different travel time components from the observation of actual behavior, with an arguably significant potential impact on the realism of large-scale models.
Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M
2014-12-01
Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered for both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.
J. Arora
2014-09-01
Dental ageing is important in medico-legal cases when teeth are the only material available to the investigating agencies for identification of the deceased. Attrition, the wear of the occlusal surface of a tooth (a physiological change), can be used as a determinant parameter for this purpose. The present study was undertaken to examine the reliability of attrition as the sole parameter for age estimation among North Western adult Indians. 109 (43 males, 66 females) single-rooted freshly extracted teeth ranging in age from 18-75 years were studied. Teeth were fixed, cleaned and sectioned labiolingually up to a thickness of 1 mm. Sections were then mounted and attrition was graded from 0-3 according to Gustafson's method. Scores were subjected to a regression equation to estimate the age of an individual. Results of the present study revealed that this parameter is reliable in individuals ≤ 60 years with an error of ±10 years. However, periodontal disease severely affected the accuracy of age estimation from this parameter, as is evident from the results. Statistically, no significant difference was noted in the absolute mean error of age in different age groups. No significant difference was observed in the absolute mean error of age between the sexes.
zahra Hooshyari
2013-04-01
Objective: The aim of the present study was to estimate the validity and reliability of the ASSIST instrument in Iran. Method: The research population consisted of Iranian alcohol and drug users and abusers in the year 1390 who had self-referred to rehabilitation camps and addiction treatment centers. A sample of 2600 (mean age 36.5) was selected by cluster random sampling in eight provinces. The ASSIST and a demographic form were administered to the whole sample group. In addition, to estimate validity, 300 members of the main sample were interviewed using the ASI, SDS, DAST and DSM-IV criteria. Findings: The reliability of the ASSIST, estimated by Cronbach's alpha, was between .79 and .95 for all domains. Data analyses showed fair criterion, construct, discriminant and multidimensional validity. The discriminative validity of the ASSIST was investigated by comparing ASSIST scores across groups of dependent users, abusers and users; these scores agreed significantly with DSM-IV scores. The construct validity of the ASSIST was investigated by statistical comparison with health scores. The ASSIST's cut-off points classify clients into three categories in terms of intensity of addiction. Conclusion: We recommend that researchers use this instrument for research, screening and other purposes in Iran.
A revised burial dose estimation procedure for optical dating of youngand modern-age sediments
Arnold, L.J.; Roberts, R.G.; Galbraith, R.F.; DeLong, S.B.
2009-01-01
The presence of genuinely zero-age or near-zero-age grains in modern-age and very young samples poses a problem for many existing burial dose estimation procedures used in optical (optically stimulated luminescence, OSL) dating. This difficulty currently necessitates consideration of relatively simplistic and statistically inferior age models. In this study, we investigate the potential for using modified versions of the statistical age models of Galbraith et al. [Galbraith, R.F., Roberts, R.G., Laslett, G.M., Yoshida, H., Olley, J.M., 1999. Optical dating of single and multiple grains of quartz from Jinmium rock shelter, northern Australia: Part I, experimental design and statistical models. Archaeometry 41, 339-364.] to provide reliable equivalent dose (De) estimates for young and modern-age samples that display negative, zero or near-zero De estimates. For this purpose, we have revised the original versions of the central and minimum age models, which are based on log-transformed De values, so that they can be applied to un-logged De estimates and their associated absolute standard errors. The suitability of these 'un-logged' age models is tested using a series of known-age fluvial samples deposited within two arroyo systems from the American Southwest. The un-logged age models provide accurate burial doses and final OSL ages for roughly three-quarters of the total number of samples considered in this study. Sensitivity tests reveal that the un-logged versions of the central and minimum age models are capable of producing accurate burial dose estimates for modern-age and very young (<350 yr) fluvial samples that contain (i) more than 20% of well-bleached grains in their De distributions, or (ii) smaller sub-populations of well-bleached grains for which the De values are known with high precision. Our results indicate that the original (log-transformed) versions of the central and minimum age models are still preferable for most routine dating applications.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
Sullivan, Michael [Nexant Inc., Burlington, MA (United States); Schellenberg, Josh [Nexant Inc., Burlington, MA (United States); Blundell, Marshall [Nexant Inc., Burlington, MA (United States)
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
Denise Güthlin
Due to time and financial constraints, indices are often used to obtain landscape-scale estimates of relative species abundance. Using two different field methods and comparing the results can help to detect possible bias or a non-monotonic relationship between the index and the true abundance, providing more reliable results. We used data obtained from camera traps and feces counts to independently estimate the relative abundance of red foxes in the Black Forest, a forested landscape in southern Germany. Applying negative binomial regression models, we identified landscape parameters that influence red fox abundance, which we then used to predict relative red fox abundance. We compared the estimated regression coefficients of the landscape parameters and the predicted abundance of the two methods. Further, we compared the costs and the precision of the two field methods. The predicted relative abundances were similar between the two methods, suggesting that the two indices were closely related to the true abundance of red foxes. For both methods, landscape diversity and edge density best described differences in the indices and had positive estimated effects on the relative fox abundance. In our study the costs of each method were of similar magnitude, but the sample size obtained from the feces counts (262 transects) was larger than the camera trap sample size (88 camera locations). The precision of the camera traps was lower than the precision of the feces counts. The approach we applied can be used as a framework to compare and combine the results of two or more different field methods to estimate abundance and thereby enhance the reliability of the result.
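The regression step can be sketched as follows. For simplicity this sketch fits a Poisson log-linear count model by Newton-Raphson as a stand-in for the paper's negative binomial regression (which adds a dispersion parameter), with a single synthetic landscape covariate:

```python
import random
import math

random.seed(3)

# Hypothetical landscape covariate (e.g. standardized edge density)
# and simulated fox-sign counts per survey unit.
b0_true, b1_true = 0.2, 0.8
x = [random.uniform(-1.0, 1.0) for _ in range(500)]

def rpois(mu):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

y = [rpois(math.exp(b0_true + b1_true * xi)) for xi in x]

# Newton-Raphson for the log-linear count model: score = X'(y - mu),
# Hessian = -X' diag(mu) X, solved here with an explicit 2x2 inverse.
b0, b1 = 0.0, 0.0
for _ in range(25):
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    g0 = sum(yi - mi for yi, mi in zip(y, mu))
    g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
    h00 = sum(mu)
    h01 = sum(mi * xi for mi, xi in zip(mu, x))
    h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

print(round(b0, 2), round(b1, 2))  # near the true values 0.2 and 0.8
```

A positive fitted coefficient on edge density is the kind of effect the study reports; in practice one would use a negative binomial family (e.g. via a GLM library) to absorb overdispersion in the counts.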
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
2017-07-15
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra series representation for the transfer functions.
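The variance-reduction idea behind such importance sampling schemes can be shown in a static toy problem: to estimate a small failure probability P(X > a) for a standard normal X, sample from a density shifted toward the failure region and reweight by the likelihood ratio. This is a simple analogue of the change of measure induced by Girsanov's transformation; the paper's dynamical setting with control forces is far more involved:

```python
import random
import math

random.seed(4)
a = 3.0        # failure threshold: the failure event is {X > a}, X ~ N(0, 1)
n = 50_000

# Crude Monte Carlo would need on the order of 1/p samples per failure;
# importance sampling instead draws from the shifted law N(a, 1) and
# corrects with the likelihood ratio N(0,1)/N(a,1) = exp(-a*x + a^2/2).
est = 0.0
for _ in range(n):
    x = random.gauss(a, 1.0)                 # sample under the shifted law
    w = math.exp(-a * x + 0.5 * a * a)       # likelihood-ratio weight
    if x > a:
        est += w
est /= n

exact = 0.5 * math.erfc(a / math.sqrt(2.0))  # P(X > a) for a standard normal
print(f"{est:.5f} vs {exact:.5f}")
```

The choice of shift (here, moving the mean to the threshold) plays the role of the control selection in the paper: a good shift concentrates samples near the most likely failure point and keeps the weights well behaved.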
Rater reliability of fragile X mutation size estimates: A multilaboratory analysis
Fisch, G.S. [Kings County Hospital Center and SUNY/Health Science Center, Brooklyn, NY (United States); Carpenter, N. [Chapman Institute of Medical Genetics, Tulsa, OK (United States); Maddalena, A. [Medical College of Virginia, Richmond, VA (United States)] [and others
1996-08-09
Notwithstanding the use of comparable molecular protocols, description and measurement of the fra(X) (fragile X) mutation may vary according to its appearance as a discrete band, smear, multiple bands, or mosaic. Estimation of mutation size may also differ from one laboratory to another. We report on the description of a mutation size estimate for a large sample of individuals tested for the fra(X) pre- or full mutation. Of 63 DNA samples evaluated, 45 were identified previously as fra(X) pre- or full mutations. DNA from 18 unaffected individuals was used as control. Genomic DNA was extracted from peripheral blood, and DNA fragments from each of four laboratories were sent to a single center where Southern blots were prepared and hybridized with the pE5.1 probe. Photographs from autoradiographs were returned to each site, and raters blind to the identity of the specimens were asked to evaluate them. Raters' estimates of mutation size compared favorably with a reference test. Intrarater reliability was good to excellent. Variability in mutation size estimates was comparable across band types. Variability in estimates was moderate, and was significantly correlated with absolute mutation size and band type. 9 refs., 1 fig., 3 tabs.
An improved modal pushover analysis procedure for estimating seismic demands of structures
Mao Jianmeng; Zhai Changhai; Xie Lili
2008-01-01
The pushover analysis (POA) procedure is difficult to apply to high-rise buildings, as it cannot account for the contributions of higher modes. To overcome this limitation, a modal pushover analysis (MPA) procedure was proposed by Chopra et al. (2001). However, invariable lateral force distributions are still adopted in the MPA. In this paper, an improved MPA procedure is presented to estimate the seismic demands of structures, considering the redistribution of inertia forces after the structure yields. This improved procedure is verified with numerical examples of 5-, 9- and 22-story buildings. It is concluded that the improved MPA procedure is more accurate than either the POA procedure or the original MPA procedure. In addition, the proposed procedure avoids a large computational effort by adopting a two-phase lateral force distribution.
Cancer risk estimation caused by radiation exposure during endovascular procedure
Kang, Y. H.; Cho, J. H.; Yun, W. S.; Park, K. H.; Kim, H. G.; Kwon, S. M.
2014-05-01
The objective of this study was to identify the radiation exposure dose of patients, as well as staff, caused by fluoroscopy during C-arm-assisted vascular surgical operations, and to estimate the carcinogenic risk due to such exposure. The study was conducted in 71 patients (53 men and 18 women) who had undergone vascular surgical intervention at the division of vascular surgery in the University Hospital from November 2011 to April 2012. A mobile C-arm device was used, and the radiation exposure dose of each patient (dose-area product, DAP) was calculated. The effective dose of staff participating in the surgery was measured by attaching optically stimulated luminescence dosimeters to their radiation protectors during the vascular surgical operation. The DAP value of patients was 308.7 Gy cm2 on average, with a maximum value of 3085 Gy cm2. Converted to effective dose, the mean was 6.2 mGy and the maximum effective dose was 61.7 millisievert (mSv). The effective dose of staff was 3.85 mSv; the radiation technician received 1.04 mSv and the nurse 1.31 mSv. The all-cancer incidence of the operator corresponds to 2355 persons per 100,000 persons, meaning that about 1 in 42 persons is likely to develop cancer. In conclusion, vascular surgeons, as supervisors of fluoroscopy, should keep in mind radiation protection for the patient, staff, and all participants in the intervention, and should understand the effects of radiation themselves in order to prevent invisible danger during the intervention and to minimize harm.
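A quick consistency check on the reported doses: both the mean and the maximum (DAP, effective dose) pairs imply the same DAP-to-effective-dose conversion coefficient of about 0.02 mSv per Gy cm². The coefficient is inferred here from the reported numbers, not stated in the abstract:

```python
# (DAP in Gy*cm^2, effective dose in mSv) as reported: (mean, mean) and (max, max).
pairs = [(308.7, 6.2), (3085.0, 61.7)]

# Both pairs should imply the same conversion coefficient if a single
# factor was applied; ~0.02 mSv per Gy*cm^2 is inferred, not stated.
coeffs = [dose / dap for dap, dose in pairs]
print([round(c, 4) for c in coeffs])  # both close to 0.02
```

Agreement between the two ratios suggests the study converted DAP to effective dose with a single fixed coefficient for this examination type.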
Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)
2013-07-01
The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ≈50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
Age estimation from physiological changes of teeth: A reliable age marker?
Nishant Singh
2014-01-01
Background: Age is an essential factor in establishing the identity of a person. Teeth are one of the most durable and resilient parts of the skeleton. Gustafson (1950) suggested the use of six retrogressive dental changes that are seen with increasing age. Aim: The aim of the study was to evaluate the results and to check the reliability of the modified Gustafson's method for determining the age of an individual. Materials and Methods: A total of 70 patients in the age group of 20-65 years, undergoing extraction, were included in the present work. Ground sections of the extracted teeth were prepared and examined under the microscope. Modified Gustafson's criteria were used for the estimation of age. Degree of attrition, root translucency, secondary dentin deposition, cementum apposition, and root resorption were measured. A linear regression formula was obtained using different statistical equations in a sample of 70 patients. Results: The mean age difference of the total 70 cases studied was ±2.64 years. The difference between actual and calculated age was significant at the 5% level of significance, that is, t-cal > t-tab (t-cal = 7.72), P < 0.05, indicating that the results were statistically significant. Conclusion: The present study concludes that Gustafson's method is a reliable method for age estimation with some proposed modifications.
Reliability of a measuring-procedure to locate a muscle-determined centric relation position.
Zonnenberg, A.J.J.; Mulder, J.; Sulkers, H.R.; Cabri, R.
2004-01-01
Although reproducibility of centric relation position, determined with an anterior deprogramming device, a leaf gauge, is widely accepted among clinicians, data confirming statistical evidence are lacking in the current literature. The objective of this study was to prove clinical reliability of a
Dynamic control of the lumbopelvic complex; lack of reliability of established test procedures
Henriksen, Marius; Lund, Hans; Bliddal, Henning
2007-01-01
Impairment of the dynamic control of the lumbopelvic complex in low back pain (LBP) has gained increased focus both clinically and experimentally. The objectives of this study were to determine the reliability of inclinometry as a measure of dynamic lumbopelvic control. Lumbopelvic reposition accuracy during pel...
Hansen, Flemming Yssing; Carneiro, K.
1977-01-01
A simple numerical method, which unifies the calculation of structure factors from X-ray or neutron diffraction data with the calculation of reliable pair distribution functions, is described. The objective of the method is to eliminate systematic errors in the normalizations and corrections of t...
A Robbins-Monro procedure for estimation in semiparametric regression models
Bercu, Bernard
2011-01-01
This paper is devoted to the parametric estimation of a shift together with the nonparametric estimation of a regression function in a semiparametric regression model. We implement a Robbins-Monro procedure that is very efficient and easy to handle. On the one hand, we propose a stochastic algorithm similar to that of Robbins-Monro in order to estimate the shift parameter. A preliminary evaluation of the regression function is not necessary for estimating the shift parameter. On the other hand, we make use of a recursive Nadaraya-Watson estimator for the estimation of the regression function. This kernel estimator takes into account the previous estimation of the shift parameter. We establish the almost sure convergence of both the Robbins-Monro and Nadaraya-Watson estimators. The asymptotic normality of our estimates is also provided.
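The Robbins-Monro idea, one noisy observation per update with decreasing step sizes, can be illustrated on a scalar toy problem: estimating a location parameter as the root of E[sign(X - θ)] = 0, i.e. the median. The paper's algorithm targets the shift in a semiparametric regression model, which this sketch does not reproduce:

```python
import random

random.seed(5)

# Robbins-Monro stochastic approximation: find the root theta of
# h(theta) = E[sign(X - theta)] = 0 (the median of X) from one noisy
# observation per step, with no batch estimation required.
true_shift = 2.0
theta = 0.0
for n in range(1, 200_001):
    x = random.gauss(true_shift, 1.0)   # one noisy observation
    gamma = 5.0 / n                     # steps: sum diverges, sum of squares converges
    theta += gamma * (1.0 if x > theta else -1.0)

print(round(theta, 2))  # converges to the median, 2.0
```

The step-size conditions (Σγₙ = ∞, Σγₙ² < ∞) are what deliver the almost sure convergence the paper proves for its shift estimator; the recursive Nadaraya-Watson stage then reuses each updated shift value when smoothing the regression function.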
Reliable and Valid Procedures to Create an Authentic Listening Test in EFL Context
吴婷
2012-01-01
Listening testing is a universal social activity, especially in school life, as well as an indispensable part of language assessment. How test takers perform during tests may affect their entry to many significant roles both in society and in schools. This paper attempts to explore how to design a reliable and valid listening test for particular purposes in an EFL context.
A Parametric Procedure for Ultrametric Tree Estimation from Conditional Rank Order Proximity Data.
Young, Martin R.; DeSarbo, Wayne S.
1995-01-01
A new parametric maximum likelihood procedure is proposed for estimating ultrametric trees for the analysis of conditional rank order proximity data. Technical aspects of the model and the estimation algorithm are discussed, and Monte Carlo results illustrate its application. A consumer psychology application is also examined. (SLD)
Bouyssy, V.
1996-12-31
For tubular joints of offshore jacket structures, large discrepancies are observed between predicted and measured fatigue damages. The match between predictions and measurements is improved when one performs stochastic fatigue analyses. For a platform in the North Sea, however, it is found that stochastic fatigue life estimates are still inaccurate. The inaccuracy is due to uncertainties in the loading and local resistance and also in the calculation results, e.g. in the structural response and mean damage rate. By means of extensive numerical studies, it is shown how numerical uncertainties in the calculation results can be avoided. Further, it is explained that random fluctuations intrinsic to nature exist in the loading, local resistance and system properties. These random fluctuations can be accounted for only in a probabilistic reliability analysis. One then computes the probability of a fatigue failure after a given service time instead of predicting a deterministic fatigue life. In the reliability analysis of offshore jacket structures, usually only uncertainties in the loading and local resistance are taken into account. For dynamically excited jacket structures, however, stochastic analyses indicate that the influence of uncertainties in structural properties can in some cases be significant both with respect to extreme-value failure and with respect to fatigue. A new numerical method is developed to estimate the reliability of offshore structures against both extreme and fatigue failures. The method accounts for the random fluctuations in the loading, local resistance and structural properties. The suitability of the method for providing accurate estimates of failure probabilities in as few structural analyses as possible is investigated in two case studies representative of a number of offshore structures. (orig.) [Translated from the German: Tubular joints of North Sea offshore platforms can fail through fatigue. For existing platforms
Schmitz, Connie C; Chipman, Jeffrey G; Yoshida, Ken; Vogel, Rachel Isaksson; Sainfort, Francois; Beilman, Gregory; Clinton, Joseph; Cooper, Jimmy; Reihsen, Troy; Sweet, Robert M
2014-01-01
Reducing preventable deaths because of uncontrolled hemorrhage, tension pneumothorax, and airway loss is a priority. As part of a research initiative comparing different training models, this study evaluated the reliability and validity of a test that assesses combat medic performance during a polytrauma scenario using live animal models. Nine procedural checklists and seven global rating scales were piloted with four cohorts of soldiers (n = 94) at two U.S. training sites. Cohorts represented "novice" to "proficient" trainees. Procedure scores and a mean global score were calculated per subject. The intraclass correlation was calculated per procedure, with 0.70 as the threshold for acceptability. An overall difference among cohorts was hypothesized: Cohort 4 (proficient) > Cohort 3 (competent) > Cohort 2 (beginners) > Cohort 1 (novice) trainees. Data were analyzed using Kruskal-Wallis and analysis of variance. At Site A, intraclass correlation coefficients ranged from 74% to 93% for 6 of 9 procedures. Cohorts differed significantly on hemorrhage control, needle decompression, cricothyrotomy, amputation management, chest tube insertion, and mean global scores. Cohort 4 outperformed the others, and Cohorts 2 and 3 outperformed Cohort 1. The test differentiates novices from beginners, competent, and proficient trainees on difficult procedures and overall performance. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Nobuyuki Okahashi
2014-05-01
13C metabolic flux analysis (MFA) is a metabolic engineering tool for investigating in vivo flux distributions. Direct 13C enrichment analysis of intracellular free amino acids (FAAs) is expected to reduce the time required for the labeling experiments of the MFA. Measurable FAAs may, however, vary among MFA experiments, since the pool sizes of intracellular free metabolites depend on cellular metabolic conditions. In this study, the minimal set of 13C enrichment data of FAAs needed to perform FAAs-based MFA was investigated. An examination of a continuous culture of Escherichia coli using 13C-labeled glucose showed that the time required to reach an isotopically steady state for FAAs is considerably shorter than that for the conventional method using proteinogenic amino acids (PAAs). Considering 95% confidence intervals, it was found that the metabolic flux distribution estimated using FAAs has a reliability similar to that of the PAAs-based method. The comparative analysis identified glutamate, aspartate, alanine and phenylalanine as the common amino acids observed in E. coli under different culture conditions. The results of the MFA also demonstrated that the 13C enrichment data of these four amino acids are required for a reliable analysis of the flux distribution.
Martin-Khan, Melinda G; Edwards, Helen; Wootton, Richard; Counsell, Steven R; Varghese, Paul; Lim, Wen Kwang; Darzins, Peteris; Dakin, Lucy; Klein, Kerenaftali; Gray, Leonard C
2017-08-21
To determine whether geriatric triage decisions made using a comprehensive geriatric assessment (CGA) performed online are less reliable than face-to-face (FTF) decisions. Multisite noninferiority prospective cohort study. Two specialist geriatricians assessed individuals sequentially referred for an acute care geriatric consultation. Participants were allocated to one FTF assessment and an additional assessment (FTF or online (OL)), creating two groups: two FTF assessments (FTF-FTF, n = 81) or online and FTF (OL-FTF, n = 85). Three acute care public hospitals in two Australian states. Admitted individuals referred for CGA. Nurse-administered CGA, based on the interRAI Acute Care assessment system accessed online, and other online clinical data such as pathology results and imaging, enabling geriatricians to review participants' information and provide input into their care from a distance. The primary decision subjected to this analysis was referral for permanent residential care. Geriatricians also recorded recommendations for referrals and variations for medication management and judgments regarding prognosis at discharge and after 3 months. Overall percentage agreement was 88% (n = 71) for the FTF-FTF group and 91% (n = 77) for the OL-FTF group. The difference in agreement between the FTF-FTF and OL-FTF groups was -3%, indicating that there was no difference between the methods of assessment. Judgments made regarding diagnoses of geriatric syndromes, medication management, and prognosis (with regard to hospital outcome and location at 3 months) were found to be equally reliable in each mode of consultation. Geriatric assessment performed online using a nurse-administered structured CGA system was no less reliable than conventional assessment in making clinical triage decisions. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1983-01-01
A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered...... to be the outcome of identically distributed, independent stochastic variables, for which a model is suggested. Further assuming the interarrival times of the elementary loading histories to be specified by a Poisson process, and the duration of these to be small compared to the designed life-time, the accumulated...... plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies....
Ali Abd Elhakam Aliabdo
2012-09-01
This study aims to investigate the relationships between the Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus the compressive strength (fc) of stones and bricks. Four types of rocks (marble, pink limestone, white limestone and basalt) and two types of bricks (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using other specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and the compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
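The combined method described above amounts to a two-predictor linear regression of strength on RN and UPV. A minimal sketch on synthetic data (the coefficients and noise level here are illustrative, not the paper's measured values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the measurements (illustrative numbers only):
# rebound number RN, pulse velocity UPV (km/s), compressive strength fc (MPa).
n = 60
rn = rng.uniform(20.0, 55.0, n)
upv = rng.uniform(2.5, 5.5, n)
fc = 1.2 * rn + 8.0 * upv - 15.0 + rng.normal(0.0, 2.0, n)

# Combined linear model fc ~ a + b*RN + c*UPV via ordinary least squares.
X = np.column_stack([np.ones(n), rn, upv])
coef, *_ = np.linalg.lstsq(X, fc, rcond=None)

# Coefficient of determination R^2 for the fitted model.
pred = X @ coef
ss_res = np.sum((fc - pred) ** 2)
ss_tot = np.sum((fc - fc.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Using both predictors jointly is what makes the combined method more reliable than either RN or UPV alone: each predictor explains variance the other misses.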
Kang, D. I.; Sung, T. Y.; Park, J. H.; Kim, T. W.; Han, S. H.; Kim, K. Y.; Yang, J. E.; Jung, W. D.; Lee, Y. H.; Hwang, M. J.
1997-09-01
A human reliability analysis (HRA) procedure is developed for a low power/shutdown probabilistic safety assessment (PSA) in pressurized light water reactors. The HRA procedure developed is based on the two major current methods, THERP (technique for human error rate prediction) and SHARP (systematic human action reliability procedure), and focuses on the specific situation of low power and shutdown operation of pressurized light water reactors. Major characteristics of the HRA procedure are as follows: 1) The use of the work sheet developed increases the plausibility and credibility of the quantification process of human actions and makes it easy to trace. 2) The explicit use of a decision tree can partly eliminate possible subjectiveness in the human reliability analyst's judgement used for HRA. It is expected that the HRA procedure developed allows human reliability analysts to perform a systematic and consistent HRA. (author). 26 refs., 13 tabs., 8 figs.
Haaning, J; Oxvig, C; Overgaard, Michael Toft;
1997-01-01
Yeast is widely used in molecular biology. Heterologous expression of recombinant proteins in yeast involves screening of a large number of recombinants. We present an easy and reliable procedure for amplifying genomic DNA from freshly grown cells of the methylotrophic yeast Pichia pastoris...... by means of PCR without any prior DNA purification steps. This method involves a simple boiling step of whole yeast cells in the presence of detergent, and subsequent amplification of genomic DNA using short sequencing primers in a polymerase chain reaction assay with a decreasing annealing temperature...
da Cruz, A C S; Couto, B C; Nascimento, I A; Pereira, S A; Leite, M B N L; Bertoletti, E; Zagatto, P
2007-05-01
Although toxicity testing is a reduced approach to measuring the effects of pollutants on ecosystems, early-life-stage (ELS) tests have evident ecological relevance because they reflect the possible reproductive impairment of natural populations. The procedure and validation of the Crassostrea rhizophorae embryonic development test have shown that it achieves the same precision as other U.S. EPA tests, where the EC(50) is generally used as a toxicological endpoint. However, the recognition that the EC(50) is not the best endpoint to assess contaminant effects led the U.S. EPA to recently suggest the EC(25) as an alternative for estimating xenobiotic effects for pollution prevention. To provide reliability to the toxicological test results on C. rhizophorae embryos, the present work aimed to establish the critical effect level for this test organism, based on its reaction to reference toxicants, using the statistical method proposed by Norberg-King (Inhibition Concentration, version 2.0). Oyster embryos were exposed to graded series of reference toxicants (ZnSO(4) x 7H(2)O; AgNO(3); KCl; CdCl(2)H(2)O; phenol, 4-chlorophenol and dodecyl sodium sulphate). Based on the results obtained, the critical value for the C. rhizophorae embryonic development test was estimated as the EC(15). The present research supports the emerging consensus that ELS test data are adequate for estimating the chronic safe concentrations of pollutants in receiving waters. Based on recommended criteria and on the results of the present research, zinc sulphate and 4-chlorophenol have been identified, among the inorganic and organic compounds tested, as the best reference toxicants for the C. rhizophorae ELS test.
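At its core, the Inhibition Concentration (ICp) approach interpolates the concentration at which the response falls p% below the control. The sketch below is a simplification that omits the isotonic smoothing and bootstrap confidence intervals of the actual ICPIN program; the dilution series and responses are hypothetical.

```python
import numpy as np

def icp(concentrations, responses, p=25.0):
    """Inhibition concentration ICp by linear interpolation: the
    concentration at which the response drops to (1 - p/100) of the
    control response (the first point). Assumes a monotone decline."""
    conc = np.asarray(concentrations, dtype=float)
    resp = np.asarray(responses, dtype=float)
    target = resp[0] * (1.0 - p / 100.0)
    for i in range(1, len(resp)):
        if resp[i] <= target:
            # linear interpolation between the two bracketing points
            frac = (resp[i - 1] - target) / (resp[i - 1] - resp[i])
            return conc[i - 1] + frac * (conc[i] - conc[i - 1])
    raise ValueError("response never falls to the target level")

# Hypothetical dilution series: control first, then graded toxicant levels.
ec25 = icp([0, 10, 20, 40, 80], [100, 95, 80, 60, 20], p=25.0)
```

Here the 25% effect level (response of 75) falls between the 20 and 40 concentration steps, so the interpolated EC(25) is 25.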
Bessière, Charles; Trojani, Christophe; Carles, Michel; Mehta, Saurabh S; Boileau, Pascal
2014-08-01
Arthroscopic Bankart repair and open Latarjet bone block procedure are widely considered mainstays for surgical treatment of recurrent anterior shoulder instability. The choice between these procedures depends mainly on surgeon preference or training rather than published evidence. We compared patients with recurrent posttraumatic anterior shoulder instability treated with arthroscopic Bankart or open Latarjet procedure in terms of (1) frequency and timing of recurrent instability, (2) risk factors for recurrent instability, and (3) patient-reported outcomes. In this retrospective comparative study, we paired 93 patients undergoing open Latarjet procedures with 93 patients undergoing arthroscopic Bankart repairs over the same period for posttraumatic anterior shoulder instability by one of four surgeons at the same center. Both groups were comparable except that patients in the Latarjet group had more glenoid lesions and more instability episodes preoperatively. Minimum followup was 4 years (mean, 6 years; range, 4-10 years). Patients were assessed with a questionnaire, including stability, Rowe score, and return to sports. Recurrent instability was defined as at least one episode of recurrent dislocation or subluxation. Return to sports was evaluated using a 0% to 100% scale that patients completed after recovery from surgery. Various risk factors for recurrent instability were also analyzed. At latest followup, 10% (nine of 93) in the Latarjet group and 22% (20 of 93) in the Bankart group demonstrated recurrent instability (p = 0.026; odds ratio, 0.39; 95% CI, 0.17-0.91). Ten recurrences in the Bankart group (50%) occurred after 2 years, compared to only one (11%) in the Latarjet group. Reoperation rate was 6% and 7% in the Bankart and Latarjet groups, respectively. In both groups, patients younger than 20 years had higher recurrence risk (p = 0.019). In the Bankart group, independent factors predictive for recurrence were practice of competitive sports and
Graham, James M.
2006-01-01
Coefficient alpha, the most commonly used estimate of internal consistency, is often considered a lower bound estimate of reliability, though the extent of its underestimation is not typically known. Many researchers are unaware that coefficient alpha is based on the essentially tau-equivalent measurement model. It is the violation of the…
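For reference, coefficient alpha is k/(k-1) * (1 - sum of item variances / variance of the total score); under an essentially tau-equivalent model it equals the reliability, and otherwise it is a lower bound. A minimal sketch on simulated tau-equivalent items (our own illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
true_score = rng.normal(0.0, 1.0, 500)          # common factor
# Tau-equivalent items: equal unit loadings plus unit-variance error,
# so the true reliability is k^2*var(T) / var(total) = 16/20 = 0.8,
# and alpha should estimate it without bias.
items = true_score[:, None] + rng.normal(0.0, 1.0, (500, 4))
alpha = cronbach_alpha(items)
```

When the equal-loadings assumption is violated (a congeneric model), the same computation systematically underestimates reliability, which is the point the abstract makes.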
Toward reliable automated estimates of earthquake source properties from body wave spectra
Ross, Zachary E.; Ben-Zion, Yehuda
2016-06-01
We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
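Corner-frequency fits in pipelines like this are often based on an omega-squared source model; the abstract does not give the exact spectral form used, so the sketch below assumes the standard Brune model Omega(f) = Omega0 / (1 + (f/fc)^2) and a simple log-domain grid search on a synthetic spectrum.

```python
import numpy as np

rng = np.random.default_rng(3)

# Brune omega-squared source model (a standard choice; the paper's
# exact spectral model may differ).
def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)

# Synthetic displacement spectrum with multiplicative noise.
freqs = np.logspace(-1.0, 1.5, 200)             # ~0.1 to ~31.6 Hz
fc_true, omega0_true = 2.0, 5.0
spec = brune(freqs, omega0_true, fc_true) * np.exp(0.1 * rng.normal(size=freqs.size))

# Grid search over fc minimising the log-spectral misfit; for each
# candidate fc, Omega0 has a closed-form log-domain estimate.
best = None
for fc in np.logspace(-0.5, 1.0, 300):
    shape = brune(freqs, 1.0, fc)
    omega0 = np.exp(np.mean(np.log(spec) - np.log(shape)))
    misfit = np.sum((np.log(spec) - np.log(omega0 * shape)) ** 2)
    if best is None or misfit < best[0]:
        best = (misfit, omega0, fc)

_, omega0_hat, fc_hat = best
```

Once fc and Omega0 are known, derived quantities such as stress drop and radiated energy follow from standard source-parameter relations.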
Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema
2015-01-01
Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method, based on eight developmental stages, was developed to determine maturity scores as a function of age and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an India-specific formula, from developmental stages of the third molar with the help of orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. Statistical analysis used the Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence it can be applied for age estimation in the present Gujarati population. Also, females were ahead of males in achieving dental maturity; thus completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.
Reliability analysis of road network for estimation of public evacuation time around NPPs
Bang, Sun-Young; Lee, Gab-Bock; Chung, Yang-Geun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)
2007-07-01
The strongest protective measure in radiation emergency preparedness is the evacuation of members of the public when a great deal of radioactivity is released into the environment. After the Three Mile Island (TMI) nuclear power plant meltdown in the United States and the Chernobyl nuclear power plant disaster in the U.S.S.R., many advanced countries, including the United States and Japan, have continued research on the estimation of public evacuation time as one of the emergency countermeasure technologies. In South Korea as well, the 'Framework Act on Civil Defense: Radioactive Disaster Preparedness Plan' was established in 1983, and nuclear power plants have set up radiation emergency plans and regularly carried out radiation emergency preparedness trainings. Nonetheless, there is still a need to improve the technology for estimating public evacuation time through precise analysis of traffic flow, to provide practical and efficient ways to protect the public. In this research, the road network around the Wolsong and Kori NPPs was constructed with the CORSIM code, and a reliability analysis of this road network was performed.
Bardot, Leon; McClelland, Elizabeth
2000-10-01
The mode of origin of volcaniclastic deposits can be difficult to determine from field constraints, and the palaeomagnetic technique of emplacement temperature (Te) determination provides a powerful discriminatory test for primary volcanic origin. This technique requires that the low-blocking-temperature (Tb) component of remanence in the direction of the Earth's field in inherited lithic clasts is of thermal origin and was acquired during transport and cooling in a hot pyroclastic flow; otherwise, the Te determination may be inaccurate. If the low-Tb component is not of thermal origin it may be a viscous remanent magnetization (VRM) or a chemical remanent magnetization (CRM). The acquisition of a VRM depends on the duration of exposure to an applied magnetic field, and thus the laboratory unblocking temperature (Tub) of a VRM of a certain age imposes a minimum Te that can be determined for that deposit. Palaeointensity experiments were carried out to assess the magnetic origin (pTRM, CRM, or a combination of both) of the low-Tb component in a number of samples from pyroclastic deposits from Santorini, Greece. Seven of the 24 samples used in these experiments passed the stringent tests for reliable palaeointensity determination. These values demonstrated, for six of the samples, that the low-Tb component was of thermal origin and therefore that the estimate of Te was valid. In the other 17 samples, valuable information was gained about the characteristics of the magnetic alteration that occurred during the palaeointensity experiments, allowing assessment of the reliability of Te estimates in these cases. These cases showed that if a CRM is present it has a direction parallel to the applied field, and not parallel to the direction of the parent grain. They also show that, even if a CRM is present, it does not necessarily affect the estimate of Te. Two samples used in these experiments displayed curvature between their two components of magnetization. Data from this
Synthesis of [¹²³I]IBZM: a reliable procedure for routine clinical studies
Zea-Ponce, Yolanda E-mail: yolanda@neuron.cpmc.columbia.edu; Laruelle, Marc
1999-08-01
The single photon emission computed tomography (SPECT) D₂/D₃ receptor radiotracer [¹²³I]IBZM is prepared by electrophilic radioiodination of the precursor BZM with high-purity sodium [¹²³I]iodide in the presence of diluted peracetic acid. However, in our hands, the most commonly used procedure for this radiosynthesis produced variable and inconsistent labeling yields, to such an extent that it became inappropriate for routine clinical studies. Our goal was to modify the labeling procedure to obtain consistently better labeling and radiochemical yields. The best conditions found for the radioiodination were as follows: 50 µg precursor in 50 µL EtOH mixed with buffer pH 2; Na[¹²³I]I in 0.1 M NaOH (<180 µL), 50 µL diluted peracetic acid solution, heating at 65 °C for 14 min. Purification was achieved by solid phase extraction (SPE) and reverse-phase high performance liquid chromatography (HPLC). Under these conditions, the average labeling yield was 76±4% (n=31); the radiochemical yield was 69±4% and the radiochemical purity was 98±1%. With larger volumes of the Na[¹²³I]I solution the yields were consistent but lower. For example, for volumes between 417 and 523 µL the labeling yield was 61±5% (n=21), the radiochemical yield was 56±5% and the radiochemical purity was 98±1%.
Flávio Chaimowicz
The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of the occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias, as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for the accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.
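Adjusting an apparent prevalence for imperfect screening accuracy is commonly done with the Rogan-Gladen estimator (the abstract does not name the exact method the review used). A sketch with hypothetical numbers, illustrating how a low-sensitivity screen makes the apparent prevalence understate the true one:

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Correct an apparent prevalence for screening accuracy:
    true = (apparent + specificity - 1) / (sensitivity + specificity - 1)."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("test must be informative (sens + spec > 1)")
    p = (apparent_prev + specificity - 1.0) / denom
    return min(max(p, 0.0), 1.0)        # clamp to the [0, 1] range

# Hypothetical numbers for illustration only (not from the review):
# an observed 8% prevalence under 50% sensitivity and 99% specificity
# implies a substantially higher true prevalence.
adjusted = rogan_gladen(0.08, 0.50, 0.99)
```

With these inputs the adjusted prevalence is about 14.3%, nearly double the apparent figure, which mirrors the direction of the correction reported in the abstract.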
Lange, R; Thalbourne, M A; Houran, J; Storm, L
2000-12-01
The concept of transliminality ("a hypothesized tendency for psychological material to cross thresholds into or out of consciousness") was anticipated by William James (1902/1982), but it was only recently given an empirical definition by Thalbourne in terms of a 29-item Transliminality Scale. This article presents the 17-item Revised Transliminality Scale (or RTS) that corrects age and gender biases, is unidimensional by a Rasch criterion, and has a reliability of .82. The scale defines a probabilistic hierarchy of items that address magical ideation, mystical experience, absorption, hyperaesthesia, manic experience, dream interpretation, and fantasy proneness. These findings validate the suggestions by James and Thalbourne that some mental phenomena share a common underlying dimension with selected sensory experiences (such as being overwhelmed by smells, bright lights, sights, and sounds). Low scores on transliminality remain correlated with "tough mindedness" on the Cattell 16PF test, as well as "self-control" and "rule consciousness," whereas high scores are associated with "abstractedness" and an "openness to change" on that test. An independent validation study confirmed the predictions implied by our definition of transliminality. Implications for test construction are discussed.
Recent advances of VADASE to enhance reliability and accuracy of real-time displacements estimation
Savastano, Giorgio; Fratarcangeli, Francesca; Chiara D'Achille, Maria; Mazzoni, Augusto; Crespi, Mattia
2017-04-01
. Moreover, a statistical test, based on the hypothesis of a constant mean noise level of the VADASE velocity estimates over a few minutes, and a robust estimation procedure were introduced; together they allow estimation of both the duration of an earthquake and the overall coseismic displacement. The three new VADASE advances were successfully applied to the GPS data collected during the three recent strong earthquakes that occurred in Central Italy on August 24, October 26 and 30, 2016, and the results are presented and discussed here.
张玉卓
1998-01-01
The quantitative evaluation of the errors involved in a particular numerical model is of prime importance for the effectiveness and reliability of the method. Errors in distinct element modelling arise mainly from three sources: simplification of the physical model, determination of parameters, and boundary conditions. A measure of error, representing the degree to which the numerical solution is 'close to the true value', is proposed in this paper through fuzzy probability. The main objective of this paper is to estimate the reliability of the Distinct Element Method in rock engineering practice by varying the parameters and boundary conditions. The accumulation laws of standard errors induced by improper determination of parameters and boundary conditions are discussed in detail. Furthermore, numerical experiments are given to illustrate the estimation of fuzzy reliability. An example shows that the fuzzy reliability falls between 75% and 98% when the relative standard error of the input data is under 10%.
Räder, Sune B E W; Jørgensen, Erik; Bech, Bo;
2011-01-01
media volume were used as indicators of quality of performance. We analyzed data from 4,200 coronary angiographies. Performance curves of seven trainees were compared with recommended reference levels and to those of seven interventional cardiologists. Results: On average, the number of procedures needed for trainees to reach recommended reference levels was estimated as 226 and 353, for DAP and use of contrast media, respectively. After 300 procedures, trainees' procedure time, fluoroscopy time, DAP, and contrast media volume were significantly higher compared with experts' performance, P < .001 for all parameters. To approach the experts' level of DAP and contrast media use, trainees need 394 and 588 procedures, respectively. Performance curves showed large individual differences in the development of competence. Conclusion: On average, trainees needed 300 procedures to reach sufficient level...
Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C
2014-09-01
In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, non-experts and the Demonstrator of a developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered slightly less accurate MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.
Bioreactance is a reliable method for estimating cardiac output at rest and during exercise.
Jones, T W; Houghton, D; Cassidy, S; MacGowan, G A; Trenell, M I; Jakovljevic, D G
2015-09-01
Bioreactance is a novel noninvasive method for cardiac output measurement that involves analysis of blood flow-dependent changes in phase shifts of electrical currents applied across the thorax. The present study evaluated the test-retest reliability of bioreactance for assessing haemodynamic variables at rest and during exercise. 22 healthy subjects (26 (4) yrs) performed an incremental cycle ergometer exercise protocol relative to their individual power output at maximal O2 consumption (Wmax) on two separate occasions (trials 1 and 2). Participants cycled for five 3 min stages at 20, 40, 60, 80 and 90% Wmax. Haemodynamic and cardiorespiratory variables were assessed at rest and continuously during the exercise protocol. Cardiac output was not significantly different between trials at rest (P=0.948), or between trials at any stage of the exercise protocol (all P>0.30). There was a strong relationship between cardiac output estimates between the trials (ICC=0.95). Values also did not differ between trials at rest (P=0.989) or during exercise (all P>0.15), and strong relationships between trials were found (ICC=0.83), at rest and during different stages of graded exercise testing including maximal exertion. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved.
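Test-retest agreement of the kind reported here is typically quantified with an intraclass correlation such as ICC(2,1). A sketch on synthetic test-retest cardiac output data (the values and variance components are illustrative only, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(4)

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measures
    ICC(2,1) for an (n_subjects, k_trials) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)          # per-subject means
    col_means = data.mean(axis=0)          # per-trial means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # trials MS
    sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic test-retest cardiac output (L/min): a stable subject-level
# value plus independent measurement error on each trial.
true_co = rng.normal(6.0, 1.2, 200)
trials = np.column_stack([true_co + rng.normal(0.0, 0.4, 200),
                          true_co + rng.normal(0.0, 0.4, 200)])
icc = icc_2_1(trials)
```

With a between-subject SD of 1.2 and a trial error SD of 0.4, the population ICC is 1.44/(1.44+0.16) = 0.9, so the estimate lands near the values reported in the abstract.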
On the reliable estimation of heat transfer coefficients for nanofluids in a microchannel
Irwansyah, Ridho; Cierpka, Christian; Kähler, Christian J.
2016-09-01
Nanofluids (a base fluid plus nanoparticles) can enhance the heat transfer coefficient h in comparison to the base fluid alone. This opens the door to the design of efficient cooling systems for microelectronic components, for instance. Since theoretical Nusselt number correlations for microchannels are not available, the direct method using an energy balance has to be applied to determine h. However, for low nanoparticle concentrations the absolute numbers are small and hard to measure. Therefore, this study examines the laminar convective heat transfer of Al2O3-water nanofluids in a square microchannel with a cross section of 0.5 × 0.5 mm2 and a length of 30 mm under constant wall temperature. The Al2O3 nanoparticles have a diameter distribution of 30-60 nm. A sensitivity analysis with error propagation was performed to reduce the error for a reliable heat transfer coefficient estimation. An enhancement of the heat transfer coefficient with increasing nanoparticle volume concentration was confirmed: maximum enhancements of 6.9% and 21% were realized for the 0.6% and 1% Al2O3-water nanofluids, respectively.
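The direct method and the error propagation mentioned above can be sketched as follows; all numbers are illustrative assumptions, not the paper's measurements, and the temperature driving force is simplified to a single wall-to-fluid difference:

```python
import math

# Illustrative operating point (assumed values).
m_dot = 2.0e-4                 # kg/s, mass flow rate
cp = 4180.0                    # J/(kg K), specific heat of the coolant
T_in, T_out = 293.0, 303.0     # K, fluid inlet/outlet temperatures
A = 0.5e-3 * 30e-3 * 4         # m^2, wetted area of a 0.5x0.5 mm, 30 mm channel
dT_wall = 15.0                 # K, wall-to-fluid temperature difference

Q = m_dot * cp * (T_out - T_in)   # W, heat absorbed (energy balance)
h = Q / (A * dT_wall)             # W/(m^2 K), direct-method estimate

# First-order error propagation: relative uncertainties add in quadrature.
rel = {"Q": 0.03, "A": 0.01, "dT": 0.05}   # assumed measurement uncertainties
rel_h = math.sqrt(sum(r ** 2 for r in rel.values()))  # relative error of h
```

The sensitivity analysis in the paper amounts to identifying which of these relative-error terms dominates, and designing the experiment so that the resulting `rel_h` stays small even at low particle concentrations.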
Abdelgadir, Jihad; Tran, Tu; Muhindo, Alex; Obiga, Doomwin; Mukasa, John; Ssenyonjo, Hussein; Muhumza, Michael; Kiryabwire, Joel; Haglund, Michael M; Sloan, Frank A
2017-05-01
There are no data on cost of neurosurgery in low-income and middle-income countries. The objective of this study was to estimate the cost of neurosurgical procedures in a low-resource setting to better inform resource allocation and health sector planning. In this observational economic analysis, microcosting was used to estimate the direct and indirect costs of neurosurgical procedures at Mulago National Referral Hospital (Kampala, Uganda). During the study period, October 2014 to September 2015, 1440 charts were reviewed. Of these patients, 434 had surgery, whereas the other 1006 were treated nonsurgically. Thirteen types of procedures were performed at the hospital. The estimated mean cost of a neurosurgical procedure was $542.14 (standard deviation [SD], $253.62). The mean cost of different procedures ranged from $291 (SD, $101) for burr hole evacuations to $1,221 (SD, $473) for excision of brain tumors. For most surgeries, overhead costs represented the largest proportion of the total cost (29%-41%). This is the first study using primary data to determine the cost of neurosurgery in a low-resource setting. Operating theater capacity is likely the binding constraint on operative volume, and thus, investing in operating theaters should achieve a higher level of efficiency. Findings from this study could be used by stakeholders and policy makers for resource allocation and to perform economic analyses to establish the value of neurosurgery in achieving global health goals.
From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?
Manica, Mattia; Rosà, Roberto; Della Torre, Alessandra; Caputo, Beniamino
2017-03-01
probability obtained by introducing these estimates in risk models were similar to those based on females/HLC (R0 > 1 in 86% and 40% of sampling dates for Chikungunya and Zika, respectively). Moreover, a transmission risk (R0 > 1) for Chikungunya is also to be expected when few/no eggs/day are collected by ovitraps. Discussion: This work provides the first evidence of the possibility to predict the mean number of adult biting Ae. albopictus females based on the mean number of eggs, and to compute the threshold of eggs/ovitrap associated with an epidemiological risk of arbovirus transmission in the study area. Overall, however, the large confidence intervals in the model predictions represent a caveat regarding the reliability of monitoring schemes based exclusively on ovitrap collections to estimate numbers of biting females and plan control interventions.
1990-11-01
findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or...Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate
Myers, M.L.; Fuller, L.C.
1979-01-01
Revised guidelines are presented for estimating annual nonfuel operation and maintenance costs for large steam-electric power plants, specifically light-water-reactor plants and coal-fired plants. Previous guidelines were published in October 1975 in ERDA 76-37, a Procedure for Estimating Nonfuel Operating and Maintenance Costs for Large Steam-Electric Power Plants. Estimates for coal-fired plants include the option of limestone slurry scrubbing for flue gas desulfurization. A computer program, OMCOST, is also presented which covers all plant options.
Using operational data to estimate the reliable yields of water-supply wells
Misstear, Bruce D. R.; Beeson, Sarah
The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere.
Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.
1996-12-31
Structural positions assessed as at-risk in the pressurized fuselage of a transport-type aircraft designed to damage-tolerance criteria are taken as the subject of discussion. A small number of data obtained from inspections of these positions was used to examine a Bayesian reliability analysis that can estimate a proper non-periodic inspection schedule while also estimating proper values for uncertain factors. As a result, the time period over which fatigue cracks initiate was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating reasonable values, and a proper inspection schedule using those values, despite placing the fatigue crack growth expression in a very simple form and treating both factors as uncertain. The effectiveness of the present analysis method was thus verified. The study also discusses, from different viewpoints, the structural positions, the modelling of fatigue cracks that initiate and grow at these positions, the conditions for failure, damage factors, and the capability of the inspection. This reliability analysis method is expected to be effective for other structures, such as offshore structures, as well. 18 refs., 8 figs., 1 tab.
Simplified procedure for the estimation of (COP)_R for heat pumps
Patwardhan, V.R.; Patwardhan, V.S.
1987-01-01
A simplified procedure for estimating the Rankine coefficient of performance, (COP)_R, for vapor compression heat pumps is presented. This procedure does not need detailed thermodynamic data: it requires only the liquid specific heat and the latent heat of vaporisation at the evaporating temperature. The procedure is tested by application to eight potential heat pump working fluids for which exact (COP)_R values have been reported based on detailed thermodynamic data, covering very wide ranges of evaporating and condensing temperatures. The results indicate that the present procedure can predict (COP)_R values to within 3-4%. Useful correlations for calculating the liquid specific heat and the latent heat of vaporisation for these working fluids are also presented, covering temperature ranges of importance for heat pump applications.
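The paper's own correlation is not reproduced here, but as a hedged illustration of the quantities involved, the ideal (Carnot) ceiling that any Rankine-cycle (COP)_R must fall below can be computed directly from the evaporating and condensing temperatures:

```python
def carnot_cop_heating(T_evap_C, T_cond_C):
    """Ideal heating COP from evaporating/condensing temperatures (deg C).
    The Rankine-cycle (COP)_R of a real working fluid lies below this bound;
    the paper's simplified procedure estimates how far below."""
    Te = T_evap_C + 273.15
    Tc = T_cond_C + 273.15
    return Tc / (Tc - Te)

cop_max = carnot_cop_heating(0.0, 50.0)  # ideal ceiling for a 0/50 deg C lift
```

The smaller the temperature lift between evaporator and condenser, the larger this ceiling, which is why wide ranges of evaporating and condensing temperatures must be covered when validating an approximate (COP)_R procedure.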
Nativi, S; Mazzetti, P
2004-01-01
In a previous work, an operative procedure to estimate precipitable and liquid water in non-raining conditions over sea was developed and assessed. The procedure is based on a fast non-linear physical inversion scheme and a forward model; it is valid for most satellite microwave radiometers and also estimates effective water profiles. This paper presents two improvements to the procedure: first, a refinement providing modularity of the software components and portability across different computation system architectures; second, the adoption of the CERN MINUIT minimisation package, which addresses the problem of global minimisation but is computationally more demanding. Together with the increased computational performance, which made it possible to impose stricter requirements on the quality of fit, these refinements improved fitting precision and reliability and relaxed the requirements on the initial guesses for the model parameters. The re-analysis of the same data-set considered in the previous pap...
MacDonell, Christopher William; Ivanova, Tanya Dimitrova; Garland, S Jayne
2007-05-15
The reliability of the afterhyperpolarization (AHP) time course, as estimated by the interval death rate (IDR) analysis, was evaluated both within and between investigators. The IDR analysis uses the firing history of a single motor unit train at low tonic firing rates to calculate an estimate of the AHP time course [Matthews PB. Relationship of firing intervals of human motor units to the trajectory of post-spike after-hyperpolarization and synaptic noise. J Physiol 1996;492:597-628]. Single motor unit trains were collected from the tibialis anterior (TA) to determine intra-rater reliability (within investigator). Data from the first dorsal interosseus (FDI), collected in a previous investigation [Gossen ER, Ivanova TD, Garland SJ. The time course of the motoneurone afterhyperpolarization is related to motor unit twitch speed in human skeletal muscle. J Physiol 2003;552:657-64], were used to examine the inter-rater reliability (between investigators). The lead author was blinded to the original time constants and file identities for the re-analysis. The intra-rater reliability of the AHP time constant in the TA data was high (r^2=0.88), and the inter-rater reliability in the FDI data was also strong (r^2=0.92). It is concluded that the interval death rate analysis is a reliable tool for estimating the AHP time course with experienced investigators.
Almeida, Mariana R; Fidelis, Carlos H V; Barata, Lauro E S; Poppi, Ronei J
2013-12-15
The Amazon tree Aniba rosaeodora Ducke (rosewood) provides an essential oil valuable for the perfume industry, but after decades of predatory extraction it is at risk of extinction. Extracting the essential oil from wood requires cutting the tree, so the study of oil extracted from the leaves is important as a sustainable alternative. The goal of this study was to test the applicability of Raman spectroscopy and Partial Least Squares Discriminant Analysis (PLS-DA) as means to classify the essential oil extracted from different parts (wood, leaves and branches) of the Brazilian tree A. rosaeodora. For the development of classification models, the Raman spectra were split into two sets: training and test. The value of the limit that separates the classes was calculated from the distribution of the training samples, such that the classes are divided with the lowest probability of incorrect classification for future estimates. The best model presented sensitivity and specificity of 100%, and predictive accuracy and efficiency of 100%. These results give an overall vision of the behaviour of the model but no information about individual samples; therefore, a confidence interval for the classification of each sample was also calculated using the bootstrap resampling technique. The methodology developed has the potential to be an alternative to standard procedures for oil analysis and can be employed as a screening method, since it is fast, non-destructive and robust.
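The per-sample confidence intervals described above rest on the percentile bootstrap; a minimal, library-free sketch (the statistic and the data are illustrative, not the spectral model of the paper):

```python
import random

def bootstrap_ci(values, stat=lambda v: sum(v) / len(v),
                 n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    boots = sorted(stat([rng.choice(values) for _ in range(len(values))])
                   for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical repeated measurements of one spectral feature
lo, hi = bootstrap_ci([2.1, 2.4, 2.2, 2.6, 2.3, 2.5])
```

In the classification setting, the same resampling idea is applied to the model's score for each sample, so that a sample whose interval straddles the class boundary can be flagged as uncertain rather than silently assigned.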
Huang, Liping; Crino, Michelle; Wu, Jason Hy
2016-01-01
BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean p...
Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P
2017-03-01
In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capture system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance, and it is therefore important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method.
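The geometric sphere fit reduces to fitting a sphere to thigh-marker trajectories about the hip; a common algebraic least-squares formulation (an assumption here, not necessarily the authors' exact implementation) can be sketched as:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. Rewrites |p - c|^2 = r^2 as the
    linear system 2 c.p + (r^2 - |c|^2) = |p|^2 and solves for centre c
    (the HJC estimate) and radius r."""
    P = np.asarray(points, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, t = sol[:3], sol[3]
    radius = np.sqrt(t + centre @ centre)
    return centre, radius

# Hypothetical noise-free marker positions on a sphere of radius 0.09 m
# about a "hip centre" at (0.1, 0.0, 0.2) m.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([0.1, 0.0, 0.2]) + 0.09 * dirs
centre, radius = fit_sphere(pts)
```

With noisy markers and a limited range of hip motion the normal equations become ill-conditioned, which is one reason the GSF method is sensitive to how the functional trials are performed.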
Modifications of the Heliosat procedures for irradiance estimates from satellite images
Beyer, H.G.; Costanzo, Claudio; Heinemann, Detlev [Oldenburg Univ. (Germany). Fachbereich 8 - Physik
1996-03-01
Images taken by geostationary satellites may be used to estimate solar irradiance fluxes at the earth's surface. The Heliosat method is a widely applied procedure for this task. It is based on the empirical correlation between a satellite-derived cloud index and the irradiance at the ground. Modifications to this procedure that may reduce the temporal variability of the correlation are presented. The modified method may open the way to the use of a generic relation between cloud index and global irradiance. (author)
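The empirical correlation underlying Heliosat-type methods can be sketched in its simplest form; the reflectance values, the linear cloud-index relation, and the clamp below are illustrative assumptions, not the modified procedure of the paper:

```python
def cloud_index(rho, rho_ground, rho_clear_min, rho_cloud_max=None):
    """Normalised cloud index n from a satellite-pixel reflectance rho,
    given reference reflectances for clear ground and dense cloud."""
    rho_cloud = rho_cloud_max if rho_cloud_max is not None else 0.9
    return (rho - rho_clear_min) / (rho_cloud - rho_clear_min)

def global_irradiance(rho, rho_clear_min, rho_cloud_max, g_clear):
    """Heliosat-style estimate: a clear-sky index k ~ 1 - n scales the
    clear-sky irradiance g_clear (W/m^2). Simplest linear correlation only."""
    n = cloud_index(rho, None, rho_clear_min, rho_cloud_max)
    k = max(0.05, 1.0 - n)   # crude floor for fully overcast pixels
    return k * g_clear

g = global_irradiance(rho=0.3, rho_clear_min=0.1, rho_cloud_max=0.9,
                      g_clear=800.0)
```

A bright (cloudy) pixel gives a large cloud index, hence a small clear-sky index and low estimated ground irradiance; the paper's modifications aim at making this empirical mapping stable over time.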
Strauss, Keith J; Racadio, John M; Johnson, Neil; Patel, Manish; Nachabe, Rami A
2015-06-01
The objective of our study was to survey radiation dose indexes of pediatric peripheral and abdominal fluoroscopically guided procedures from which estimates of diagnostic reference levels (DRLs) can be proposed for both a standard fluoroscope and a novel fluoroscope with advanced image processing and lower radiation dose rates. Radiation dose structured reports were retrospectively collected for 408 clinical pediatric cases: Half of the procedures were performed with a standard imaging technology and half with a novel x-ray technology. Dose-area product (DAP), air Kerma (AK), fluoroscopy time, number of digital subtraction angiography images, and patient mass were collected to calculate and normalize radiation dose indexes for procedures completed with the standard and novel fluoroscopes. The study population was composed of 180 and 175 patients who underwent procedures with the standard and novel technology, respectively. The 21 different types of pediatric peripheral and abdominal interventional procedures produced 408 total studies. Median ages, mass and body mass index, fluoroscopy time per procedure, and total number of recorded images for the standard and novel technologies were not statistically different. The area of the x-ray beams was square at the level of the patient with a dimension of 10-13 cm. The dose reduction achieved with the novel fluoroscope ranged from 18% to 51% of the dose required with the standard fluoroscope. The median DAP and AK patient dose indexes were 0.38 Gy · cm(2) and 4.00 mGy, respectively, for the novel fluoroscope. Estimates of dose indexes of pediatric peripheral and abdominal fluoroscopically guided, clinical procedures should assist in the development of DRLs to foster management of radiation doses of pediatric patients.
Annegret Grimm
Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in the programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data on capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen estimator and a truncated geometric distribution obtained comparably good results; the former usually yields a minimum population size, and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
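As a concrete example of the simplest family of estimators discussed above, Chapman's bias-corrected form of the Lincoln-Petersen estimator for a two-sample capture-recapture study (the counts are made up):

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size: n1 animals marked in the first sample, n2 captured
    in the second sample, m2 of which were recaptures (already marked)."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 40 geckos marked, 35 captured later, 14 of them recaptures
N_hat = chapman_estimate(n1=40, n2=35, m2=14)
```

This baseline assumes equal catchability of all individuals; the individual heterogeneity stressed in the abstract is precisely what breaks that assumption and motivates the jackknife, sample-coverage, and mixture-type estimators compared in the study.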
Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey
2015-01-01
Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…
Mohamad Sahebalzamani
2012-04-01
Background: To evaluate the validity and reliability of assessing the performance of nursing students using the Direct Observation of Procedural Skills (DOPS). Materials and Method: This research was conducted on 55 nursing internship students across 8 procedures. A DOPS consisted of an assessor observing a student performing a skill, completing a checklist with the student, and providing verbal feedback. The procedures were selected from the core skills of nursing according to the views of faculty members. Content validity, criterion validity (correlation of the DOPS score with the average scores of nursing clinical and theoretical courses, separately), the relation of each item with the DOPS, construct validity (inspection of internal construction), and reliability (internal consistency and inter-rater reliability) were examined. Results: Correlations of DOPS scores with the theoretical and clinical average scores were 0.117 (p=0.429) and 0.376 (p=0.008), respectively. There was a significant relation between each skill and the DOPS total score (p=0.001), indicating a sound internal construction of the exercise. The reliability of the exercise was measured as 0.94 by the Cronbach alpha coefficient. Minimum and maximum correlation coefficients for inter-rater reliability were 0.42 and 0.84, respectively, and were significant in all cases (p=0.001). Conclusion: Our results showed that DOPS has the validity and reliability for objective evaluation of procedural skills in nursing.
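The internal-consistency figure above is a Cronbach's alpha; a minimal numpy sketch with hypothetical checklist scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(scores, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical DOPS checklist: 5 students scored on 4 items (1-5 scale)
scores = [[4, 4, 5, 4], [3, 3, 3, 3], [5, 5, 5, 4], [2, 3, 2, 2], [4, 3, 4, 4]]
alpha = cronbach_alpha(scores)
```

Alpha approaches 1 when the checklist items rise and fall together across students, i.e. when they measure one underlying construct consistently.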
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of hives. It is based on spatial sampling theory, which recommends regular-grid stratification for spatially structured processes. Since the distribution of varroa mites on sticky boards was observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centred on each grid element. This new procedure is then compared with a former method using partially random sampling. Improvements in relative error are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.
An improved sample-and-hold reconstruction procedure for estimation of power spectra from LDA data
Simon, Laurent [Laboratoire d' Acoustique de l' Universite du Maine, UMR-CNRS 6613, Avenue Messiaen, 72085, Le Mans (France); Fitzpatrick, John [Mechanical Engineering Department, Trinity College, Dublin 2 (Ireland)
2004-08-01
Techniques for deriving the auto or power spectrum (PSD) of turbulence from laser Doppler anemometry (LDA) measurements are reviewed briefly. The low pass filter and step noise errors associated with the sample-and-hold process are considered and a discrete version of the low pass filter for the resampled signal is derived. This is then used to develop a procedure by which the PSD estimates obtained from sample and hold measurements can be corrected. The application of the procedures is examined using simulated data and the results show that the frequency range of the analysis can be extended beyond the Nyquist frequency based on the mean sample rate. The results are shown to be comparable to those obtained using the method of Nobach et al. (1998) but the new procedures are more straightforward to implement. The technique is then used to determine the PSD of real LDA data and the results are compared with those from a hot wire anemometer. (orig.)
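The basic sample-and-hold step that the paper's correction procedure builds on, randomly timed samples held onto a regular grid and then Fourier transformed, can be sketched as follows (signal, rates, and lengths are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LDA-like record: a 50 Hz velocity fluctuation observed at
# Poisson-distributed arrival times with a mean sample rate of 500 Hz.
t = np.cumsum(rng.exponential(1.0 / 500.0, size=4000))
u = np.sin(2 * np.pi * 50.0 * t)

# Sample-and-hold reconstruction onto a regular grid at fs.
fs = 2000.0
tr = np.arange(t[0], t[-1], 1.0 / fs)
idx = np.searchsorted(t, tr, side="right") - 1
ur = u[idx]                                # hold the last observed sample

# Periodogram of the zero-mean resampled signal.
U = np.fft.rfft(ur - ur.mean())
psd = np.abs(U) ** 2 / (fs * len(ur))
freqs = np.fft.rfftfreq(len(ur), 1.0 / fs)
peak = freqs[np.argmax(psd)]               # should sit near 50 Hz
```

The hold operation acts as a low-pass filter and adds step noise, which is exactly the bias the discrete filter model and correction procedure of the paper are designed to remove from such PSD estimates.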
Bjork, J; Brown, C; Friedlander, H; Schiffman, E; Neitzel, D
2016-08-03
Many disease surveillance programs, including the Massachusetts Department of Public Health and the Minnesota Department of Health, are challenged by marked increases in Lyme disease (LD) reports. The purpose of this study was to retrospectively analyse LD reports from 2005 through 2012 to determine whether key epidemiologic characteristics were statistically indistinguishable when an estimation procedure based on sampling was utilized. Estimates of the number of LD cases were produced by taking random 20% and 50% samples of laboratory-only reports, multiplying by 5 or 2, respectively, and adding the number of provider-reported confirmed cases. Estimated LD case counts were compared to observed, confirmed cases each year. In addition, the proportions of cases that were male, were ≤12 years of age, had erythema migrans (EM), had any late manifestation of LD, had a specific late manifestation of LD (arthritis, cranial neuritis or carditis) or lived in a specific region were compared to the proportions of cases identified using standard surveillance to determine whether estimated proportions were representative of observed proportions. Results indicate that the estimated counts of confirmed LD cases were consistently similar to observed, confirmed LD cases and accurately conveyed temporal trends. Most of the key demographic and disease manifestation characteristics were not significantly different (P < 0.05), although estimates for the 20% random sample demonstrated greater deviation than the 50% random sample. Applying this estimation procedure in endemic states could conserve limited resources by reducing follow-up effort while maintaining the ability to track disease trends.
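The estimation procedure itself is a simple inverse-probability scaling of the sampled laboratory-only reports; a sketch with made-up counts:

```python
def estimate_cases(provider_confirmed, lab_sample_confirmed, sample_fraction):
    """Estimated case count: all provider-reported confirmed cases plus
    laboratory-only reports scaled by the inverse sampling fraction
    (x5 for a 20% sample, x2 for a 50% sample)."""
    return provider_confirmed + lab_sample_confirmed / sample_fraction

# e.g. a 20% random sample of laboratory-only reports
est = estimate_cases(provider_confirmed=300, lab_sample_confirmed=90,
                     sample_fraction=0.20)   # 300 + 90 * 5
```

The study's finding that the 50% sample deviated less than the 20% sample is what this scaling predicts: a smaller sampling fraction multiplies the sampling noise by a larger factor.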
A digital procedure for ground water recharge and discharge pattern recognition and rate estimation.
Lin, Yu-Feng; Anderson, Mary P
2003-01-01
A digital procedure to estimate recharge/discharge rates that requires relatively short preparation time and uses readily available data was applied to a setting in central Wisconsin. The method requires only measurements of the water table, fluxes such as stream baseflows, bottom of the system, and hydraulic conductivity to delineate approximate recharge/discharge zones and to estimate rates. The method uses interpolation of the water table surface, recharge/discharge mapping, pattern recognition, and a parameter estimation model. The surface interpolator used is based on the theory of radial basis functions with thin-plate splines. The recharge/discharge mapping is based on a mass-balance calculation performed using MODFLOW. The results of the recharge/discharge mapping are critically dependent on the accuracy of the water table interpolation and the accuracy and number of water table measurements. The recharge pattern recognition is performed with the help of a graphical user interface (GUI) program based on several algorithms used in image processing. Pattern recognition is needed to identify the recharge/discharge zonations and zone the results of the mapping method. The parameter estimation program UCODE calculates the parameter values that provide a best fit between simulated heads and flows and calibration head-and-flow targets. A model of the Buena Vista Ground Water Basin in the Central Sand Plains of Wisconsin is used to demonstrate the procedure.
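The water-table interpolation step uses radial basis functions with thin-plate splines; a minimal numpy implementation of such an interpolant (the well locations and heads are invented for illustration):

```python
import numpy as np

def tps_fit(xy, z):
    """Exact thin-plate-spline interpolant phi(r) = r^2 log(r) plus an
    affine part, fitted to scattered (x, y) -> z measurements."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d ** 2 * np.log(d), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return coef, xy

def tps_eval(model, pts):
    """Evaluate a fitted thin-plate spline at new (x, y) points."""
    coef, xy = model
    pts = np.asarray(pts, float)
    d = np.linalg.norm(pts[:, None, :] - xy[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d ** 2 * np.log(d), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ coef[:len(xy)] + P @ coef[len(xy):]

# Hypothetical water-table heads (m) at six observation wells (km coordinates)
wells = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.3), (0.2, 0.8)]
heads = [10.0, 9.5, 9.8, 9.1, 9.7, 9.9]
model = tps_fit(wells, heads)
```

The interpolant reproduces the measured heads exactly at the wells, which is why the abstract stresses that the quality of the recharge/discharge mapping hinges on the accuracy and number of the underlying water-table measurements.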
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters of a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum-likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given that yield optimal local convergence.
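The iterative procedure analysed here is, in modern terms, an EM iteration for a normal mixture; a self-contained sketch for the two-component univariate case (the initialisation and data are illustrative choices, not the paper's scheme):

```python
import math
import random

def em_two_normals(data, iters=200):
    """EM iterations for a two-component univariate normal mixture:
    alternate posterior responsibilities (E-step) with reweighted
    maximum-likelihood parameter updates (M-step)."""
    data = list(data)
    mu1, mu2 = min(data), max(data)          # crude initial guesses
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    pi = 0.5
    for _ in range(iters):
        # E-step: posterior probability that each point came from component 1
        r = []
        for x in data:
            p1 = pi * math.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / s1
            p2 = (1 - pi) * math.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means, std devs, mixing proportion
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
        pi = n1 / len(data)
    return pi, (mu1, s1), (mu2, s2)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 1.0) for _ in range(300)]
pi, (mu1, s1), (mu2, s2) = em_two_normals(data)
```

Each iteration cannot decrease the likelihood, and for well-separated components the fixed point is the consistent maximum-likelihood estimate, which is the local-convergence behaviour the paper characterises.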
Su, G; Guldbrandtsen, B; Gregersen, V R
2010-01-01
were available. In the analysis, all SNP were fitted simultaneously as random effects in a Bayesian variable selection model, which allows heterogeneous variances for different SNP markers. The response variables were the official EBV. Direct GEBV were calculated as the sum of individual SNP effects...... for all 18 index traits. Reliability of GEBV was assessed by squared correlation between GEBV and conventional EBV (r^2(GEBV, EBV)), and expected reliability was obtained from prediction error variance using a 5-fold cross validation. Squared correlations between GEBV and published EBV (without any...... that genomic selection can greatly improve the accuracy of preselection for young bulls compared with traditional selection based on parent average information....
Traveling-wave tube reliability estimates, life tests, and space flight experience
Lalli, V. R.; Speck, C. E.
1977-01-01
Infant mortality, useful life, and wearout phases of TWT life are considered. The performance of existing developmental tubes, flight experience, and sequential hardware testing are evaluated. The reliability history of TWTs in space applications is documented by considering: (1) the generic parts of the tube, in light of the manner in which their design and operation affect the ultimate reliability of the device; (2) the flight experience of medium-power tubes; and (3) the available life-test data for existing space-qualified TWTs in addition to those of high-power devices.
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
Wang, Haoran; Wang, Huai; Zhu, Guorong;
2016-01-01
Electrolytic capacitors (E-Caps) as the passive energy buffer in single-phase converters are often assumed to be the reliability bottleneck of power electronic systems. Various Active Power Decoupling (APD) methods have been proposed intending to improve the reliability of the DC-link E-Caps qualitatively, making great effort to divert the instantaneous pulsation power into extra reliable storage components. However, it is still an open question which method is the most cost-effective one for a specific application with a given lifetime requirement. In this paper, two of the representative APD methods and the classical passive DC-link design method are evaluated from the reliability and cost perspective. The reliability-oriented design procedure is applied to size the chip area of active switching devices and the passive components to fulfill a specific lifetime target. Component cost models …
Ren, Yihui; Eubank, Stephen; Nath, Madhurima
2016-10-01
Network reliability is the probability that a dynamical system composed of discrete elements interacting on a network will be found in a configuration that satisfies a particular property. We introduce a reliability property, Ising feasibility, for which the network reliability is the Ising model's partition function. As shown by Moore and Shannon, the network reliability can be separated into two factors: structural, solely determined by the network topology, and dynamical, determined by the underlying dynamics. In this case, the structural factor is known as the joint density of states. Using methods developed to approximate the structural factor for other reliability properties, we simulate the joint density of states, yielding an approximation for the partition function. Based on a detailed examination of why naïve Monte Carlo sampling gives a poor approximation, we introduce a parallel scheme for estimating the joint density of states using a Markov-chain Monte Carlo method with a spin-exchange random walk. This parallel scheme makes simulating the Ising model in the presence of an external field practical on small computer clusters for networks with arbitrary topology with ~10^6 energy levels and more than 10^308 microstates.
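The factorization described in this abstract, Z as a sum of a structural factor (the joint density of states g(E, M)) weighted by a dynamical Boltzmann factor, can be made concrete by brute-force enumeration on a tiny graph. This is only an illustrative sketch (the paper's contribution is an MCMC scheme precisely because enumeration is infeasible at scale):

```python
import itertools
import math

def ising_partition_function(edges, n, beta, h=0.0):
    """Exact Z for a small Ising system, computed through the joint density
    of states g(E, M): Z = sum over (E, M) of g(E, M) * exp(-beta*(E - h*M))."""
    g = {}
    for spins in itertools.product((-1, 1), repeat=n):
        E = -sum(spins[i] * spins[j] for i, j in edges)  # interaction energy
        M = sum(spins)                                   # magnetization
        g[(E, M)] = g.get((E, M), 0) + 1                 # structural factor
    # dynamical factor: Boltzmann weight at inverse temperature beta, field h
    return sum(c * math.exp(-beta * (E - h * M)) for (E, M), c in g.items())
```

For a single coupled pair of spins at h = 0 this reduces to the textbook value Z = 2e^β + 2e^(-β), a useful sanity check before replacing the enumeration of g(E, M) with the sampled approximation the paper develops.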
Zeeshan Ali Siddiqui
2016-01-01
Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. As hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, known as the usage frequency of a component. The usage frequency decides the weight of each component, and according to their weights the components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis, Fuzzy-MOORA. The method helps us find the best suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and the dimensionless measurement performs the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
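The ratio-analysis core of MOORA can be sketched in a few lines. This is the crisp variant only, with our own function name and toy criteria; the paper's Fuzzy-MOORA extends it to triangular fuzzy numbers for vague evaluation data:

```python
import numpy as np

def moora_rank(X, benefit):
    """Crisp MOORA ratio analysis.
    X: alternatives x criteria matrix; benefit: boolean mask, True where a
    higher criterion value is better (e.g. reliability), False for costs."""
    # Step 1: vector-normalize each criterion column (dimensionless ratios)
    N = X / np.sqrt((X ** 2).sum(axis=0))
    # Step 2: composite score = sum of benefit ratios minus sum of cost ratios
    y = N[:, benefit].sum(axis=1) - N[:, ~benefit].sum(axis=1)
    # Step 3: rank alternatives by descending score
    return np.argsort(-y), y
```

With component reliability as a benefit criterion and, say, integration effort as a cost criterion, the ranking orders components by their weighted contribution, mirroring the non-subjective ranking the abstract describes.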
The effect of estimation and production procedures on running economy in recreational athletes.
Faulkner, James A; Woolley, Brandon P; Lambrick, Danielle M
2012-11-01
Running economy is an important component in any endurance event. However, the influence of effort perception on running economy has yet to be examined. The purpose of this study was to assess the oxygen cost of running (running economy) at identical ratings of perceived exertion (RPE) during estimation (EST) and production (PR) procedures during treadmill exercise. Fourteen well-trained male participants actively produced (self-regulated) a range of submaximal exercise intensities equating to RPE values 9, 11, 13, 15 and 17, and passively estimated their perception of exertion during an incremental graded-exercise test (GXT). Allometric scaling was used to ensure an appropriate comparison in running economy between conditions. The present study demonstrated that the overall running economy between conditions was statistically similar (p > 0.05). A significant interaction was, however, identified between conditions and RPE (p < 0.05): running economy significantly improved during PR but remained fairly consistent during EST between moderate and high perceptions of exertion (RPE 11-17). Despite similarities in running economy between conditions, physiological (oxygen uptake, heart rate, minute ventilation and blood lactate) and physical (running velocity) markers of exercise intensity were significantly higher during EST for equivalent perceptions of exertion (all p < 0.05). … economy and enhance athletic performance when compared to identical perceptions of exertion elicited during active production procedures. Athletes, coaches and physical trainers should consider the perceptual procedures utilised during training to ensure that an athlete trains at the most effective training intensity. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Seaver, D.A.; Stillwell, W.G.
1983-03-01
This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.
R&D program benefits estimation: DOE Office of Electricity Delivery and Energy Reliability
None, None
2006-12-04
The overall mission of the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE) is to lead national efforts to modernize the electric grid, enhance the security and reliability of the energy infrastructure, and facilitate recovery from disruptions to the energy supply. In support of this mission, OE conducts a portfolio of research and development (R&D) activities to advance technologies to enhance electric power delivery. Multiple benefits are anticipated to result from the deployment of these technologies, including higher quality and more reliable power, energy savings, and lower cost electricity. In addition, OE engages State and local government decision-makers and the private sector to address issues related to the reliability and security of the grid, including responding to national emergencies that affect energy delivery. The OE R&D activities are comprised of four R&D lines: High Temperature Superconductivity (HTS), Visualization and Controls (V&C), Energy Storage and Power Electronics (ES&PE), and Distributed Systems Integration (DSI).
Wilson, Celia M.
2010-01-01
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
Clarifying the Blurred Image: Estimating the Inter-Rater Reliability of Performance Assessments.
Moore, Alan D.; Young, Suzanne
As schools move toward performance assessment, there is increasing discussion of using these assessments for accountability purposes. When used for making decisions, performance assessments must meet high standards of validity and reliability. One major source of unreliability in performance assessments is interrater disagreement. In this paper,…
Akbari Sari, A.; Babashahy, S.; Olyaeimanesh, A.; Rashidian, A.
2012-01-01
Background: The aim of this study was to estimate the frequency and rate of the 50 most common types of invasive procedures in Iran. Methods: Data about the number of all invasive procedures, and of each type of procedure, conducted in Iran in 2010 were collected using the main insurance organizations' database. These numbers were sorted in an Excel database, and the 50 invasive procedures with the highest frequency were selected. Then, according to the population covered by the...
Tokunaga, Yoshitaka; Kubota, Kunihiro
This paper presents estimation techniques for the machine parameters of two-winding power transformers using the design procedure of the winding structure. It is in general very difficult to obtain machine parameters for transformers installed in customers' facilities. Using the estimation techniques, machine parameters can be calculated from only the nameplate data of these transformers. Subsequently, EMTP-ATP simulation of the inrush current was carried out using machine parameters estimated by the design procedure of the winding structure, and the simulation results reproduced the measured waveforms.
Contemporary Treatment of Reliability and Validity in Educational Assessment
Dimitrov, Dimiter M.
2010-01-01
The focus of this presidential address is on the contemporary treatment of reliability and validity in educational assessment. Highlights on reliability are provided under the classical true-score model using tools from latent trait modeling to clarify important assumptions and procedures for reliability estimation. In addition to reliability,…
Estimation of neutron spectrum in the low-level gamma spectroscopy system using unfolding procedure
Knežević, D.; Jovančević, N.; Krmar, M.
2016-03-01
The radiation resulting from neutron interactions with Ge nuclei in the active volume of HPGe detectors is one of the main concerns in low-level gamma spectroscopy measurements [1,2]. It is usually not possible to directly measure the spectrum of the neutrons that strike the detector. This paper explores the possibility of estimating the neutron spectrum using measured activities of certain Ge(n,γ) and Ge(n,n') reactions (obtained from low-level gamma measurements), available ENDF cross section data, and unfolding procedures. In this work an HPGe detector with a passive shield made from commercial low-background lead was used for the measurement. The most important objective of this study was to reconstruct the muon-induced neutron spectrum created in the shield of the HPGe detector. The MAXED [3] and GRAVEL [4] algorithms for neutron spectrum unfolding were used. The results of the two algorithms were compared, and we analyzed the sensitivity of the unfolding procedure to the various input parameters.
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application on different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the worst-estimated parameter by SAAM II and in maintaining all model-parameter CV% … The MATLAB-based procedure was suggested as a suitable tool for the individual assessment of the HID process.
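The damped Gauss-Newton step at the heart of the Levenberg-Marquardt algorithm named in this abstract can be sketched generically as follows. This is not the authors' MATLAB code; the function names, damping schedule, and stopping rule are illustrative assumptions:

```python
import numpy as np

def levenberg_marquardt(resid, jac, p0, iters=50, lam=1e-3):
    """Levenberg-Marquardt iteration for nonlinear least squares.
    resid(p): residual vector; jac(p): Jacobian of the residuals.
    Alternating this damped step with pure Gauss-Newton (lam -> 0) is the
    strategy the abstract describes for robust, fast convergence."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = resid(p), jac(p)
        A = J.T @ J + lam * np.eye(len(p))   # damping stabilizes the step
        step = np.linalg.solve(A, J.T @ r)
        p_new = p - step
        if np.sum(resid(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5        # accept: relax toward Gauss-Newton
        else:
            lam *= 10.0                      # reject: increase damping
    return p
```

Small lam gives fast Gauss-Newton behavior near the optimum; large lam gives short, gradient-descent-like steps far from it, which is why alternating the two regimes contains both divergence risk and computational time.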
Rosa Ana Salas
2013-11-01
We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in the time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a Finite Element Method in 2D, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and saturation regions. Excellent agreement is found between the experimental data and the computational results.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.
Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F
2012-07-07
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
Barth, Johannes; Neyton, Lionel; Métais, Pierre; Panisset, Jean-Claude; Baverel, Laurent; Walch, Gilles; Lafosse, Laurent
2017-08-01
The aim of the study was to develop a computed tomography (CT)-based measurement protocol for coracoid graft (CG) placement in both axial and sagittal planes after a Latarjet procedure and to test its intraobserver and interobserver reliability. Fifteen postoperative CT scans were included to assess the intraobserver and interobserver reproducibility of a standardized protocol among 3 senior and 3 junior shoulder surgeons. The evaluation sequence included CG positioning, its contact area with the glenoid, and the angle of its screws in the axial plane. The percentage of CG positioned under the glenoid equator was also analyzed in the sagittal plane. The intraobserver and interobserver agreement was measured by the intraclass correlation coefficient (ICC), and the values were interpreted according to the Landis and Koch classification. The ICC was substantial to almost perfect for intraobserver agreement and fair to almost perfect for interobserver agreement in measuring the angle of screws in the axial plane. The intraobserver agreement was slight to almost perfect and the interobserver agreement slight to substantial regarding CG positioning in the same plane. The intraobserver agreement and interobserver agreement were both fair to almost perfect concerning the contact area. The ICC was moderate to almost perfect for intraobserver agreement and slight to almost perfect for interobserver agreement in analyzing the percentage of CG under the glenoid equator. The variability of ICC values observed implies that caution should be taken in interpreting results regarding the CG position on 2-dimensional CT scans. This discrepancy is mainly explained by the difficulty in orienting the glenoid in the sagittal plane before any other parameter is measured. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Equation reliability of soil ingestion estimates in mass-balance soil ingestion studies.
Stanek, Edward J., III; Xu, Bo; Calabrese, Edward J
2012-03-01
Exposure to chemicals from ingestion of contaminated soil may be an important pathway with potential health consequences for children. A key parameter used in assessing this exposure is the quantity of soil ingested, with estimates based on four short longitudinal mass-balance soil ingestion studies among children. The estimates use trace elements in the soil with low bioavailability that are minimally present in food. Soil ingestion corresponds to the excess trace element amounts excreted, after subtracting trace element amounts ingested from food and medications, expressed as an equivalent quantity of soil. The short duration of mass-balance studies, different concentrations of trace elements in food and soil, and potential for trace elements to be ingested from other nonsoil, nonfood sources contribute to variability and bias in the estimates. We develop a stochastic model for a soil ingestion estimator based on a trace element that accounts for critical features of the mass-balance equation. Using results from four mass-balance soil ingestion studies, we estimate the accuracy of soil ingestion estimators for different trace elements, and identify subjects where the difference between Al and Si estimates is larger (>3 RMSE) than expected. Such large differences occur in fewer than 12% of subjects in each of the four studies. We recommend the use of such criteria to flag and exclude subjects from soil ingestion analyses. © 2011 Society for Risk Analysis.
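The core mass-balance arithmetic described in this abstract, excess excreted trace element expressed as an equivalent soil mass, reduces to a one-line estimator. The function name and unit choices below are ours, for illustration only:

```python
def soil_ingestion_mg(excreted_ug, intake_ug, soil_conc_ug_per_g):
    """Mass-balance soil ingestion estimate for a single trace element.
    excreted_ug: element excreted (ug/day); intake_ug: element ingested from
    food and medications (ug/day); soil_conc_ug_per_g: element concentration
    in soil (ug/g). Returns the equivalent soil mass ingested in mg/day."""
    excess_ug = excreted_ug - intake_ug          # element attributed to soil
    return excess_ug / soil_conc_ug_per_g * 1000.0  # g -> mg
```

The sources of bias the paper models enter through each term: short study duration and nonsoil, nonfood element sources inflate the excess, while low soil concentrations amplify any error in it, which is why estimates from different elements (e.g. Al vs. Si) can disagree and why the authors propose flagging subjects with large discrepancies.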
Nunes, Matheus Henrique; Görgens, Eric Bastos
2016-01-01
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.
Williamson, Laura D; Brookes, Kate L; Scott, Beth E; Graham, Isla M; Bradbury, Gareth; Hammond, Philip S; Thompson, Paul M; McPherson, Jana
2016-01-01
...‐based visual surveys. Surveys of cetaceans using acoustic loggers or digital cameras provide alternative methods to estimate relative density that have the potential to reduce cost and provide a verifiable record of all detections...
Tinker, M. Timothy; Doak, Daniel F.; Estes, James A.; Hatfield, Brian B.; Staedler, Michelle M.; Bodkin, James L
2006-01-01
Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks. © 2006 by the Ecological Society of America.
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Aghamousa, Amir
2014-01-01
The observable time delays between the multiple images of strong lensing systems with time variable sources can provide us with some valuable information to probe the expansion history of the Universe. Estimation of these time delays can be very challenging due to complexities of the observed data where there are seasonal gaps, various noises and systematics such as unknown microlensing effects. In this paper we introduce a novel approach to estimate the time delays for strong lensing systems implementing various statistical methods of data analysis including the method of smoothing and cross-correlation. The method we introduce in this paper has been recently used in TDC0 and TDC1 Strong Lens Time Delay Challenges and has shown its power in reliable and precise estimation of time delays dealing with data with different complexities.
Monica C Junkes
The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese, perform its cross-cultural adaptation, and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability; Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983) and Kappa coefficients ranging from moderate to nearly perfect. In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
Thie, Johnson; Sriram, Prema; Klistorner, Alexander; Graham, Stuart L
2012-01-01
This paper describes a method to reliably estimate the latency of the multifocal visual evoked potential (mfVEP) and a classifier to automatically separate reliable mfVEP traces from noisy traces. We also investigated which mfVEP peaks have reproducible latency across recording sessions. The proposed method performs cross-correlation between mfVEP traces and second-order Gaussian wavelet kernels and measures the timing of the resulting peaks. These peak times, offset by the wavelet kernel's peak time, represent the mfVEP latency. The classifier algorithm performs an exhaustive series of leave-one-out classifications to find the champion mfVEP features that are most frequently selected to distinguish reliable traces from noisy traces. Monopolar mfVEP recording was performed on 10 subjects using the Accumap1™ system. A pattern-reversal protocol was used with 24 sectors and eccentricity up to 33°. A bipolar channel was recorded at midline with electrodes placed above and below the inion. The largest mfVEP peak and the immediately preceding peak had the smallest latency variability across recording sessions, about ±2 ms. The optimal classifier selected three champion features, namely, signal-to-noise ratio, the signal's peak magnitude response from 5 to 15 Hz, and the peak-to-peak amplitude of the trace between 70 and 250 ms. The classifier algorithm can separate reliable and noisy traces with a high success rate, typically 93%. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Vano, E; Sanchez, R M; Fernandez, J M
2015-07-01
The purpose of this article is to estimate lens doses using over apron active personal dosemeters in interventional catheterisation laboratories (cardiology IC, neuroradiology IN and radiology IR) and to investigate correlations between occupational lens doses and patient doses. Active electronic personal dosemeters placed over the lead apron were used on a sample of 204 IC procedures, 274 IN and 220 IR (all performed at the same university hospital). Patient dose values (kerma area product) were also recorded to evaluate correlations with occupational doses. Operators used the ceiling-suspended screen in most cases. The median and third quartile values of equivalent dose Hp(10) per procedure measured over the apron for IC, IN and IR resulted, respectively, in 21/67, 19/44 and 24/54 µSv. Patient dose values (median/third quartile) were 75/128, 83/176 and 61/159 Gy cm(2), respectively. The median ratios for dosemeters worn over the apron by operators (protected by the ceiling-suspended screen) and patient doses were 0.36; 0.21 and 0.46 µSv Gy(-1) cm(-2), respectively. With the conservative approach used (lens doses estimated from the over apron chest dosemeter) we came to the conclusion that more than 800 procedures y(-1) and per operator were necessary to reach the new lens dose limit for the three interventional specialties. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Palma, M N N; Rocha, G C; Valadares Filho, S C; Detmann, E
2015-11-01
Rigorously standardized laboratory protocols are essential for meaningful comparison of data from multiple sites. Considering that interactions of minerals with organic matrices may vary depending on the nature of the material, there could be peculiar demands for each material with respect to the digestion procedure. Acid digestion procedures were evaluated using different nitric to perchloric acid ratios and one- or two-step digestion to estimate the concentration of calcium, phosphorus, magnesium, and zinc in samples of carcass, bone, excreta, concentrate, forage, and feces. Six procedures were evaluated: ratios of nitric to perchloric acid at 2:1, 3:1, and 4:1 v/v in a one- or two-step digestion. There were no direct or interaction effects (p > 0.01) of nitric to perchloric acid ratio or number of digestion steps on magnesium and zinc contents. Calcium and phosphorus contents presented a significant (p < 0.01) … calcium or phosphorus contents in carcass, excreta, concentrate, forage, and feces. Number of digestion steps did not affect mineral content (p > 0.01). Estimation of the concentration of calcium, phosphorus, magnesium, and zinc in carcass, excreta, concentrate, forage, and feces samples can be performed using a digestion solution of nitric to perchloric acid 4:1 v/v in a one-step digestion. However, bone samples demand a stronger digestion solution to analyze the mineral contents, represented by an increased proportion of perchloric acid; a digestion solution of nitric to perchloric acid 2:1 v/v in a one-step digestion is recommended.
Han, Ming
2013-01-01
Previously, the author introduced a new parameter estimation method, the E-Bayesian estimation method, for the reliability of the binomial distribution: the definition of the E-Bayesian estimate of reliability was given, together with formulas for the E-Bayesian and hierarchical Bayesian estimates, but the properties of the E-Bayesian estimate were not provided. This paper establishes properties of the E-Bayesian estimation of reliability for the binomial distribution.
A new lifetime estimation model for a quicker LED reliability prediction
Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.
2014-09-01
LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs were aged for a reliability analysis. The LEDs were grouped according to nine current-temperature stress conditions, with stress driving currents between 350 mA and 1 A and ambient temperatures between 85°C and 120°C. Using integrating-sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics was carried out. Results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation were built from electrical and optical measurements during the aging tests. The combination of these models enables a new method for quicker LED lifetime prediction, and the two models have been used for lifetime predictions for LEDs.
Do tests devised to detect recent HIV-1 infection provide reliable estimates of incidence in Africa?
Sakarovitch, Charlotte; Rouet, Francois; Murphy, Gary; Minga, Albert K; Alioum, Ahmadou; Dabis, Francois; Costagliola, Dominique; Salamon, Roger; Parry, John V; Barin, Francis
2007-05-01
The objective of this study was to assess the performance of 4 biologic tests designed to detect recent HIV-1 infections in estimating incidence in West Africa (BED, Vironostika, Avidity, and IDE-V3). These tests were assessed on a panel of 135 samples from 79 HIV-1-positive regular blood donors from Abidjan, Côte d'Ivoire, whose date of seroconversion was known (Agence Nationale de Recherches sur le SIDA et les Hépatites Virales 1220 cohort). The 135 samples included 26 from recently infected patients (<180 days), 94 from patients infected for >180 days, and 15 from patients with clinical AIDS. The performance of each assay in estimating HIV incidence was assessed through simulations. The modified commercial assays gave the best results for sensitivity (100% for both), and the IDE-V3 technique gave the best result for specificity (96.3%). In a context like Abidjan, with a 10% HIV-1 prevalence associated with a 1% annual incidence, the estimated test-specific annual incidence rates would be 1.2% (IDE-V3), 5.5% (Vironostika), 6.2% (BED), and 11.2% (Avidity). Most of the specimens falsely classified as incident cases were from patients infected for >180 days but <1 year. The authors conclude that none of the 4 methods could currently be used to estimate HIV-1 incidence routinely in Côte d'Ivoire but that further adaptations might enhance their accuracy.
Limits to the reliability of size-based fishing status estimation for data-poor stocks
Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders
2015-01-01
in more than 60% of the cases, and almost always correctly assess whether a stock is subject to overfishing. Adding information about age, i.e., assuming that growth rate and asymptotic size are known, does not improve the estimation. Only knowledge of the ratio between mortality and growth led...
Reliability-based weighting of visual and vestibular cues in displacement estimation
Horst, A.C. ter; Koppen, M.G.M.; Selen, L.P.J.; Medendorp, W.P.
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimat
Lauritsen, Jakob; Gundgaard, Maria G; Mortensen, Mette S
2014-01-01
Estimates of glomerular filtration rate (eGFR) are widely used when administering nephrotoxic chemotherapy. No studies performed in oncology patients have shown whether eGFR can safely substitute a measured GFR (mGFR) based on a marker method. We aimed to assess the validity of four major formula...
Portas Ferradas, B. C.; Chapel Gomez, M. L.; Jimenez Alarcon, J. I.
2011-07-01
This paper presents estimates of the doses to the eyes of workers exposed during interventional radiology and cardiology procedures, based on measurements made with thermoluminescent dosimeters placed near the lens.
Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).
Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre
2007-05-01
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.
Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure
Robert G. MacCann
2004-03-01
For a modified Angoff standards-setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five-year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
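The CLT calculation described above reduces to dividing the within-group standard deviation of the judges' independent Stage-1 cut scores by the square root of the number of judges. A minimal sketch (the panel of cut scores is hypothetical):

```python
import math

def clt_standard_error(cut_scores):
    """SE of the mean cut score via the Central Limit Theorem:
    sample standard deviation of the judges' independent Stage-1
    cut scores, divided by sqrt(number of judges)."""
    n = len(cut_scores)
    mean = sum(cut_scores) / n
    var = sum((c - mean) ** 2 for c in cut_scores) / (n - 1)
    return math.sqrt(var / n)

# Hypothetical Stage-1 cut scores from a panel of eight judges
judges = [62, 58, 65, 60, 59, 63, 61, 64]
print(round(clt_standard_error(judges), 2))  # → 0.87
```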
A comparison of the parameter estimating procedures for the Michaelis-Menten model.
Tseng, S J; Hsu, J P
1990-08-23
The performance of four procedures for estimating the adjustable parameters of the Michaelis-Menten model, the maximum initial rate Vmax and the Michaelis-Menten constant Km, is compared: the Lineweaver & Burk transformation (L-B), the Eadie & Hofstee transformation (E-H), the Eisenthal & Cornish-Bowden transformation (ECB), and the Hsu & Tseng random search (H-T). Analysis of the simulated data reveals the following: (i) Vmax can be estimated more precisely than Km. (ii) The sum of square errors, from smallest to largest, follows the sequence H-T, E-H, ECB, L-B. (iii) Considering the sum of square errors, relative error, and computing time, the overall performance follows the sequence H-T, L-B, E-H, ECB, from best to worst. (iv) The performance of E-H and ECB is on the same level. (v) L-B and E-H are appropriate for precisely measured data; H-T should be adopted for data whose error levels are high. (vi) Increasing the number of data points has a positive effect on the performance of H-T, and a negative effect on the performance of L-B, E-H, and ECB.
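As an illustration of the first of these procedures, the Lineweaver-Burk transformation rewrites v = Vmax·S/(Km + S) as 1/v = (Km/Vmax)(1/S) + 1/Vmax, so Vmax and Km follow from a straight-line fit. A sketch on noise-free synthetic data (the data values are hypothetical):

```python
import numpy as np

def lineweaver_burk(S, v):
    """Estimate Vmax and Km from the Lineweaver-Burk transform:
    1/v = (Km/Vmax) * (1/S) + 1/Vmax, fitted by linear least squares."""
    x, y = 1.0 / np.asarray(S, float), 1.0 / np.asarray(v, float)
    slope, intercept = np.polyfit(x, y, 1)
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km

# Noise-free data generated with Vmax = 10, Km = 2 (hypothetical values)
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = 10.0 * S / (2.0 + S)
Vmax, Km = lineweaver_burk(S, v)
print(round(Vmax, 2), round(Km, 2))  # → 10.0 2.0
```

On exact data the transform recovers the parameters; on noisy data it weights small-rate points heavily, which is the weakness the abstract's ranking reflects.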
Sathyachandran, S. K.; Roy, D. P.; Boschetti, L.
2014-12-01
The Fire Radiative Power (FRP) [MW] is a measure of the rate of biomass combustion and can be retrieved from ground based and satellite observations using middle infra-red measurements. The temporal integral of FRP is the Fire Radiative Energy (FRE) [MJ], which is related linearly to the total biomass consumption and hence to pyrogenic emissions. Satellite-derived biomass consumption and emissions estimates have been derived conventionally by computing the summed total FRP, or the average FRP (arithmetic average of FRP retrievals), over spatial geographic grids for fixed time periods. These two methods are prone to estimation bias, especially under irregular sampling conditions such as provided by polar-orbiting satellites, because the FRP can vary rapidly in space and time as a function of the fire behavior. Linear temporal integration of FRP, taking into account when the FRP values were observed and using the trapezoidal rule for numerical integration, has been suggested as an alternate FRE estimation method. In this study FRP data measured rapidly with a dual-band radiometer over eight prescribed fires are used to compute eight FRE values using the sum, mean and trapezoidal estimation approaches under a variety of simulated irregular sampling conditions. The estimated values are compared to biomass consumed measurements for each of the eight fires to provide insights into which method provides more accurate and precise biomass consumption estimates. The three methods are also applied to continental MODIS FRP data to study their differences using polar orbiting satellite data. The research findings indicate that trapezoidal FRP numerical integration provides the most reliable estimator.
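The difference between the mean-FRP and trapezoidal estimators can be sketched as follows; the FRP samples and observation times are hypothetical, and the trapezoid is computed explicitly so the irregular sampling is respected:

```python
import numpy as np

# Hypothetical, irregularly sampled FRP retrievals [MW] at times t [s]
t = np.array([0.0, 600.0, 900.0, 2400.0, 3600.0])
frp = np.array([5.0, 40.0, 55.0, 20.0, 2.0])

# Trapezoidal rule: weights each retrieval by when it was observed
fre_trapz = float(np.sum((frp[:-1] + frp[1:]) / 2.0 * np.diff(t)))  # [MJ]

# Mean-FRP approach: arithmetic mean times the total window duration
fre_mean = float(frp.mean() * (t[-1] - t[0]))                       # [MJ]

print(int(fre_trapz), int(fre_mean))  # → 97200 87840
```

Because the mean ignores when the samples were taken, clustered observations near the FRP peak (or the lull) bias it, while the trapezoidal integral is insensitive to the clustering.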
Alessandro Barbiero
2014-01-01
In many statistical applications, it is often necessary to obtain an interval estimate for an unknown proportion or probability or, more generally, for a parameter whose natural space is the unit interval. The customary approximate two-sided confidence interval for such a parameter, based on some version of the central limit theorem, is known to be unsatisfactory when its true value is close to zero or one or when the sample size is small. A possible way to tackle this issue is to transform the data through a proper function that makes the approximation to the normal distribution less coarse. In this paper, we study the application of several of these transformations to the estimation of the reliability parameter for stress-strength models, with a special focus on the Poisson distribution. From this work, practical hints emerge as to which transformations may more efficiently improve standard confidence intervals, and in which scenarios.
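One such transformation is the logit: a Wald interval built on log(p/(1-p)) and mapped back through the inverse transform cannot escape (0, 1), unlike the untransformed interval. A minimal sketch (the counts are hypothetical; the standard error is the usual delta-method form):

```python
import math

def logit_ci(successes, n, z=1.96):
    """Approximate 95% CI for a proportion via the logit transform:
    a Wald interval on log(p/(1-p)), back-transformed so the result
    stays inside (0, 1). Requires 0 < successes < n."""
    p = successes / n
    eta = math.log(p / (1.0 - p))
    se = math.sqrt(1.0 / (n * p * (1.0 - p)))  # delta-method SE on logit scale
    lo, hi = eta - z * se, eta + z * se
    inv = lambda u: 1.0 / (1.0 + math.exp(-u))  # inverse logit
    return inv(lo), inv(hi)

lo, hi = logit_ci(2, 40)  # small observed proportion, modest sample size
print(round(lo, 3), round(hi, 3))
```

For the same data, the untransformed Wald interval has a lower limit below zero; the back-transformed interval does not.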
A simplified procedure for mass and stiffness estimation of existing structures
Nigro, Antonella; Ditommaso, Rocco; Carlo Ponzo, Felice; Salvatore Nigro, Domenico
2016-04-01
This work focuses attention on a parametric method for mass and stiffness identification of framed structures, based on frequency evaluation. The assessment of real structures is greatly affected by the consistency of information retrieved on materials and on the influence of both non-structural components and soil. One of the most important matters is the correct definition of the distribution, both in plan and in elevation, of mass and stiffness, depending on concentrated and distributed loads, the presence of infill panels and the distribution of structural elements. In this study modal identification is performed under several mass-modified conditions and structural parameters consistent with the identified modal parameters are determined. Modal parameter identification of a structure before and after the introduction of additional masses is conducted. By considering the relationship between the additional masses and the modal properties before and after the mass modification, the structural parameters of a damped system, i.e. mass, stiffness and damping coefficient, are inversely estimated from these modal parameter variations. The accuracy of the method can be improved by using various mass-modified conditions. The proposed simplified procedure has been tested on both numerical and experimental models by means of linear numerical analyses and shaking table tests performed on scaled structures at the Seismic Laboratory of the University of Basilicata (SISLAB). Results confirm the effectiveness of the proposed procedure to estimate masses and stiffness of existing real structures with a maximum error equal to 10%, under the worst conditions. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''.
Wouter D Weeda
The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a-priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
ErpICASSO: a tool for reliability estimates of independent components in EEG event-related analysis.
Artoni, Fiorenzo; Gemignani, Angelo; Sebastiani, Laura; Bedini, Remo; Landi, Alberto; Menicucci, Danilo
2012-01-01
Independent component analysis and blind source separation methods are steadily gaining popularity for separating individual brain and non-brain source signals mixed by volume conduction in electroencephalographic data. Despite the advancements in these techniques, determining the number of embedded sources and their reliability are still open issues. In particular, to date no method takes trial-to-trial variability into account in order to provide a reliability measure of independent components extracted in Event Related Potential (ERP) studies. In this work we present ErpICASSO, a new method which modifies the data-driven approach ICASSO for the analysis of trials (epochs). In addition to ICASSO, the method enables the user to estimate the number of embedded sources, and provides a quality index of each extracted ERP component by combining trial-to-trial bootstrapping and CCA projection. We applied ErpICASSO to ERPs recorded from 14 subjects presented with unpleasant and neutral pictures. We separated potentials putatively related to different systems and identified the four primary ERP independent sources. Standing on the confidence intervals estimated by ErpICASSO, we were able to compare the components between neutral and unpleasant conditions. ErpICASSO yielded encouraging results, thus providing the scientific community with a useful tool for ICA signal processing whenever dealing with trials recorded in different conditions.
Mohammed, Rezwana Begum; Kalyan, V Siva; Tircouveluri, Saritha; Vegesna, Goutham Chakravarthy; Chirla, Anil; Varma, D Maruthi
2014-07-01
Determining the age of a person in the absence of documentary evidence of birth is essential for legal and medico-legal purposes. The Fishman method of skeletal maturation is widely used for this purpose; however, the reliability of this method for people of all geographic locations is not well-established. In this study, we assessed various stages of carpal and metacarpal bone maturation and tested the reliability of the Fishman method of skeletal maturation to estimate age in a South Indian population. We also evaluated the correlation between the chronological age (CA) and the predicted age based on the Fishman method of skeletal maturation. Digital right hand-wrist radiographs of 330 individuals aged 9-20 years were obtained and the skeletal maturity stage for each subject was determined using the Fishman method. The skeletal maturation indicator scores were obtained and analyzed with reference to CA and sex. Data were analyzed using the SPSS software package (version 12, SPSS Inc., Chicago, IL, USA). The study subjects had a tendency toward late maturation, with the mean estimated skeletal age (SA) being significantly lower (P < 0.05) than CA across maturity stages. Nevertheless, significant correlation was observed in this study between SA and CA for males (r = 0.82) and females (r = 0.85). Interestingly, female subjects were observed to be advanced in SA compared with males. The Fishman method of skeletal maturation can be used as an alternative tool for the assessment of the mean age of an individual of unknown CA in South Indian children.
Kyeremateng, Samuel O; Pudlas, Marieke; Woehrle, Gerd H
2014-09-01
A novel empirical analytical approach for estimating the solubility of crystalline drugs in polymers has been developed. The approach utilizes a combination of differential scanning calorimetry measurements and a reliable mathematical algorithm to construct the complete solubility curve of a drug in a polymer. Compared with existing methods, this novel approach reduces the required experimentation time and amount of material by approximately 80%. The predictive power and relevance of such solubility curves in the development of amorphous solid dispersion (ASD) formulations are shown by application to a number of hot-melt extrudate formulations of ibuprofen and naproxen in Soluplus. On the basis of the temperature-drug load diagrams using the solubility curves and the glass transition temperatures, the physical stability of the extrudate formulations was predicted and checked by placing the formulations on real-time stability studies. An analysis of the stability samples with microscopy, thermal, and imaging techniques confirmed the predicted physical stability of the formulations. In conclusion, this study presents a fast and reliable approach for estimating the solubility of crystalline drugs in polymer matrices. This powerful approach can be applied by formulation scientists as an early and convenient tool in designing ASD formulations for maximum drug load and physical stability.
Bustos, Cesar; Sandeen, Ben; Chennakesavalu, Shriram; Littenberg, Tyson; Farr, Ben; Kalogera, Vassiliki
2016-01-01
Gravitational Waves (GWs) were predicted by Einstein's Theory of General Relativity as ripples in space-time that propagate outward from a source. Strong GW sources consist of compact binary systems such as Binary Neutron Stars (BNS) or Binary Black Holes (BBHs) that experience orbital shrinkage (inspiral) and eventual merger. Indirect evidence for the existence of GWs has been obtained through radio pulsar studies in BNS systems. A study of BBHs and other compact objects has limitations in the electromagnetic spectrum, therefore direct detections of GWs will open a new window into their nature. The effort targeting direct GW detection is anchored on the development of a detector known as Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory). Although detecting GW sources represents an anticipated breakthrough in physics, making GW astrophysics a reality critically relies on our ability to determine and measure the physical parameters associated with GW sources. We use Markov Chain Monte Carlo (MCMC) simulations on high-performance computing clusters for parameter estimation on high dimensional spaces (GW sources - 15 parameters). The quality of GW parameter estimation greatly depends on having the best possible knowledge of the expected waveform. Unfortunately, BBH GW production is very complex and our best waveforms are not valid across the full parameter space. With large-scale simulations we examine quantitatively the limitations of these waveforms in terms of extracting the astrophysical properties of BBH GW sources. We find that current waveforms are inadequate for BBHs of unequal masses and demonstrate that improved waveforms are critically needed.
Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine
2013-01-01
The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree–gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations. PMID:24124449
Online Reliable Peak Charge/Discharge Power Estimation of Series-Connected Lithium-Ion Battery Packs
Bo Jiang
2017-03-01
The accurate peak power estimation of a battery pack is essential to the power-train control of electric vehicles (EVs). It helps to evaluate the maximum charge and discharge capability of the battery system, and thus to optimally control the power-train system to meet the requirements of acceleration, gradient climbing and regenerative braking while achieving high energy efficiency. A novel online peak power estimation method for series-connected lithium-ion battery packs is proposed, which considers the influence of cell difference on the peak power of the battery pack. A new parameter identification algorithm based on adaptive ratio vectors is designed to identify online the parameters of each individual cell in a series-connected battery pack. The ratio vectors reflecting cell difference are deduced strictly based on the analysis of battery characteristics. Based on the online parameter identification, peak power estimation considering cell difference is further developed. Validation experiments in different battery aging conditions and with different current profiles have been implemented to verify the proposed method. The results indicate that the ratio-vector-based identification algorithm can achieve the same accuracy as repetitive RLS (recursive least squares) identification while evidently reducing the computation cost, and that the proposed peak power estimation method is more effective and reliable for series-connected battery packs due to the consideration of cell difference.
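The abstract does not give the ratio-vector equations, but the RLS baseline it is compared against follows the standard recursive update. A generic sketch on a hypothetical two-parameter linear model, not the paper's battery model:

```python
import numpy as np

def rls_step(theta, P, x, y, lam=1.0):
    """One recursive least squares (RLS) update.
    theta: parameter estimate, P: inverse information matrix,
    x: regressor vector, y: measured output, lam: forgetting factor."""
    err = y - x @ theta                 # prediction error
    k = P @ x / (lam + x @ P @ x)       # gain vector
    theta = theta + k * err             # parameter correction
    P = (P - np.outer(k, x @ P)) / lam  # covariance update
    return theta, P

# Identify a hypothetical model y = 2.0*x1 - 0.5*x2 from noise-free
# samples; with persistently exciting regressors RLS converges to
# the true parameters.
true = np.array([2.0, -0.5])
theta, P = np.zeros(2), 1e3 * np.eye(2)
for i in range(200):
    x = np.array([np.sin(0.1 * i), np.cos(0.3 * i)])
    theta, P = rls_step(theta, P, x, x @ true)
print(np.round(theta, 3))
```

Running this repetitively for every cell is what the paper's ratio-vector scheme avoids.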
Draaijer, S.; Klinkenberg, S.
2015-01-01
A procedure to construct valid and fair fixed-length tests with randomly drawn items from an item bank is described. The procedure provides guidelines for the set-up of a typical achievement test with regard to the number of items in the bank and the number of items for each position in a test. Furt
How reliable is estimation of glomerular filtration rate at diagnosis of type 2 diabetes?
Chudleigh, Richard A; Dunseath, Gareth; Evans, William; Harvey, John N; Evans, Philip; Ollerton, Richard; Owens, David R
2007-02-01
The Cockcroft-Gault (CG) and Modification of Diet in Renal Disease (MDRD) equations previously have been recommended to estimate glomerular filtration rate (GFR). We compared both estimates with true GFR, measured by the isotopic (51)Cr-EDTA method, in newly diagnosed, treatment-naïve subjects with type 2 diabetes. A total of 292 mainly normoalbuminuric (241 of 292) subjects were recruited. Subjects were classified as having mild renal impairment (group 1, GFR <90 ml/min per 1.73 m(2)) or normal renal function (group 2, GFR >/=90 ml/min per 1.73 m(2)). Estimated GFR (eGFR) was calculated by the CG and MDRD equations. Blood samples drawn at 44, 120, 180, and 240 min after administration of 1 MBq of (51)Cr-EDTA were used to measure isotopic GFR (iGFR). For subjects in group 1, mean (+/-SD) iGFR was 83.8 +/- 4.3 ml/min per 1.73 m(2). eGFR was 78.0 +/- 16.5 or 73.7 +/- 12.0 ml/min per 1.73 m(2) using CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -11.1 to -0.6 using CG and -14.4 to -7.0 using MDRD. Ninety-five percent limits of agreement (mean bias +/- 2 SD) were -37.2 to 25.6 and -33.1 to 11.7, respectively. In group 2, iGFR was 119.4 +/- 20.3 ml/min per 1.73 m(2). eGFR was 104.4 +/- 26.3 or 92.3 +/- 18.7 ml/min per 1.73 m(2) using CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -17.4 to -12.5 using CG and -29.1 to -25.1 using MDRD. Ninety-five percent limits of agreement were -54.4 to 24.4 and -59.5 to 5.3, respectively. In newly diagnosed type 2 diabetic patients, particularly those with a GFR >/=90 ml/min per 1.73 m(2), both CG and MDRD equations significantly underestimate iGFR. This highlights a limitation in the use of eGFR in the majority of diabetic subjects outside the setting of chronic kidney disease.
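For reference, the two estimating equations compared above are commonly written as follows. The coefficients are the widely published forms (CG gives creatinine clearance in ml/min, normalised to 1.73 m(2) separately; the four-variable MDRD with the original 186 coefficient), and the patient values in the example are hypothetical:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance [ml/min]."""
    cl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return cl * 0.85 if female else cl

def mdrd(age, scr_mg_dl, female=False, black=False):
    """Four-variable MDRD eGFR [ml/min per 1.73 m2], original 186 form."""
    egfr = 186.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: 60-year-old, 80 kg male, serum creatinine 1.0 mg/dl
print(round(cockcroft_gault(60, 80, 1.0), 1), round(mdrd(60, 1.0), 1))  # → 88.9 81.0
```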
CO{sub 2}-recycling by plants: how reliable is the carbon isotope estimation?
Siegwolf, R.T.W.; Saurer, M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Koerner, C. [Basel Univ., Basel (Switzerland)
1997-06-01
In the study of plant carbon relations, the amount of respiratory losses from the soil was estimated by determining the gradient of the stable isotope {sup 13}C with increasing plant canopy height. According to the literature, 8-26% of the CO{sub 2} released in forests by soil and plant respiratory processes is reassimilated (recycled) by photosynthesis during the day. Our own measurements, however, conducted in grassland, showed diverging results: from no indication of carbon recycling to a considerable {delta}{sup 13}C gradient suggesting a high carbon recycling rate. The role of other factors that also influence the {delta}{sup 13}C in a canopy, such as air humidity and irradiation, is discussed. (author) 3 figs., 4 refs.
Nakayachi, Kazuya; Watabe, Motoki
2005-08-01
This research examined the effects of providing a monitoring and self-sanctioning system, called "hostage posting" in economics, on the improvement of trustworthiness. We conducted two questionnaire-type experiments to compare the trust-improving effects among three conditions: (a) voluntary provision of a monitoring and self-sanction system by the manager, (b) imposed provision, and (c) achievement of satisfactory management without any such provision. A total of 561 undergraduate students participated in the experiments. Results revealed that perceived integrity and competence improved to almost the same level in conditions (a) and (c), whereas they did not improve in condition (b). Consistent with our previous research, these results showed that voluntary hostage posting improved trustworthiness as much as good performance did. The estimated necessity of the system, however, did not differ across conditions. Implications for management practice and directions for future research are discussed.
Are there reliable methods to estimate the nuclear orientation of Seyfert galaxies?
Marin, F
2016-01-01
Orientation, together with accretion and evolution, is one of the three main drivers in the Grand Unification of Active Galactic Nuclei (AGN). Being unresolved, determining the true inclination of those powerful sources is always difficult and indirect, yet it remains a vital clue to apprehend the numerous, panchromatic and complex spectroscopic features we detect. There are only a hundred inclinations derived so far; in this context, can we be sure that we measure the true orientation of AGN? To answer this question, four methods to estimate the nuclear inclination of AGN are investigated and compared to inclination-dependent observables (hydrogen column density, Balmer linewidth, optical polarization, and flux ratios within the IR and relative to X-rays). Among these orientation indicators, the method developed by Fisher, Crenshaw, Kraemer et al., mapping and modeling the radial velocities of the [O iii] emission region in AGN, is the most successful. The [O iii]-mapping technique shows highly statistically...
On the reliability of direct Rayleigh-wave estimation from multicomponent cross-correlations
Xu, Zongbo; Mikesell, T. Dylan
2017-09-01
Seismic interferometry is routinely used to image and characterize underground geology. The vertical component cross-correlations (CZZ) are often analysed in this process; although one can also use radial component and multicomponent cross-correlations (CRR and CZR, respectively), which have been shown to provide a more accurate Rayleigh-wave Green's function than CZZ when sources are unevenly distributed. In this letter, we identify the relationship between the multicomponent cross-correlations (CZR and CRR) and the Rayleigh-wave Green's functions to show why CZR and CRR are less sensitive than CZZ to non-stationary phase source energy. We demonstrate the robustness of CRR with a synthetic seismic noise data example. These results provide a compelling reason as to why CRR should be used to estimate the dispersive characteristics of the direct Rayleigh wave with seismic interferometry when the signal-to-noise ratio is high.
Klarskov, Carina Kirstine; Klarskov, Mikkel Buster; Hasseldam, Henrik
2016-01-01
Background: Stroke is the second most common cause of death worldwide. Only one treatment for acute ischemic stroke is currently available, thrombolysis with rt-PA, but it is limited in its use. Many efforts have been invested in order to find additive treatments, without success. A multitude of reasons for the translational problems from mouse experimental stroke to clinical trials probably exists, including infarct size estimations around the peak time of edema formation. Furthermore, edema is a more prominent feature of stroke in mice than in humans, because of the tendency to produce larger infarcts with more substantial edema. Purpose: This paper will give an overview of previous studies of experimental mouse stroke, and correlate survival time to peak time of edema formation. Furthermore, investigations of whether the included studies corrected the infarct measurements for edema...
A Framework for Estimating Piping Reliability Subject to Corrosion Under Insulation
Mokhtar Ainul Akmar
2014-07-01
Corrosion under insulation (CUI) is one of the serious damage mechanisms experienced by insulated piping systems. Optimizing the inspection schedule for insulated piping systems is a major challenge faced by inspection and corrosion engineers, since CUI takes place beneath the insulation, which makes detection and prediction of the damage mechanism harder. In recent years, the risk-based inspection (RBI) approach has been adopted to optimize CUI inspection plans. The RBI approach is based on risk, the product of the likelihood of a failure and the consequence of that failure. The likelihood analysis usually follows either qualitative or semi-qualitative methods, precluding its use for quantitative risk assessment. This paper presents a framework for estimating quantitatively the likelihood of failure due to CUI based on the type of data available.
A new procedure of modal parameter estimation for high-speed digital image correlation
Huňady, Róbert; Hagara, Martin
2017-09-01
The paper deals with the use of 3D digital image correlation in determining modal parameters of mechanical systems. It is a non-contact optical method that uses precise, high-resolution digital cameras to measure full-field spatial displacements and strains of bodies. Most often, this method is used for testing components or determining material properties of various specimens. When high-speed cameras are used for measurement, the correlation system is capable of capturing various dynamic behaviors, including vibration. This enables the potential use of the method in experimental modal analysis. For that purpose, the authors proposed a measuring chain for the correlation system Q-450 and developed a software application called DICMAN 3D, which allows the direct use of this system in modal testing. The created application provides post-processing of measured data and estimation of modal parameters. It has its own graphical user interface, in which several algorithms for the determination of natural frequencies, mode shapes and damping of particular modes of vibration are implemented. The paper describes the basic principle of the new estimation procedure, which is crucial for post-processing. Since the FRF matrix resulting from the measurement is usually relatively large, estimating modal parameters directly from the FRF matrix may be time-consuming and may occupy a large part of computer memory. The procedure implemented in DICMAN 3D provides a significant reduction in memory requirements and computational time while achieving high accuracy of the modal parameters. Its computational efficiency is particularly evident when the FRF matrix consists of thousands of measurement DOFs. The functionality of the created software application is demonstrated on a practical example in which the modal parameters of a composite plate excited by an impact hammer were determined.
Yiannikopoulou, I.; Philippopoulos, K.; Deligiorgi, D.
2012-04-01
The vertical thermal structure of the atmosphere is defined by a combination of dynamic and radiative transfer processes and plays an important role in describing the meteorological conditions at local scales. The scope of this work is to develop and quantify the predictive ability of a hybrid dynamic-statistical downscaling procedure to estimate the vertical profile of ambient temperature at finer spatial scales. The study focuses on the warm period of the year (June-August) and the method is applied to an urban coastal site (Hellinikon), located in the eastern Mediterranean. The two-step methodology initially involves the dynamic downscaling of coarse-resolution climate data via the RegCM4.0 regional climate model and subsequently the statistical downscaling of the modeled outputs by developing and training site-specific artificial neural networks (ANN). The 2.5° × 2.5° gridded NCEP-DOE Reanalysis 2 dataset is used as initial and boundary conditions for the dynamic downscaling element of the methodology, which enhances the regional representativeness of the dataset to 20 km and provides modeled fields at 18 vertical levels. The regional climate modeling results are compared against the upper-air Hellinikon radiosonde observations, and the mean absolute error (MAE) is calculated between the four grid-point values nearest to the station and the ambient temperature at the standard and significant pressure levels. The statistical downscaling element of the methodology consists of an ensemble of ANN models, one for each pressure level, which are trained separately and employ the regional-scale RegCM4.0 output. ANN models are theoretically capable of approximating any measurable input-output function to any desired degree of accuracy. In this study they are used as non-linear function approximators for identifying the relationship between a number of predictor variables and the ambient temperature at the various vertical levels. An insight into the statistically derived input…
Shojaei Saadi, Habib A; Vigneault, Christian; Sargolzaei, Mehdi; Gagné, Dominic; Fournier, Éric; de Montera, Béatrice; Chesnais, Jacques; Blondin, Patrick; Robert, Claude
2014-10-12
Genome-wide profiling of single-nucleotide polymorphisms is receiving increasing attention as a method of pre-implantation genetic diagnosis in humans and of commercial genotyping of pre-transfer embryos in cattle. However, the very small quantity of genomic DNA in biopsy material from early embryos poses daunting technical challenges. A reliable whole-genome amplification (WGA) procedure would greatly facilitate the process. Several PCR-based and non-PCR-based WGA technologies, namely multiple displacement amplification, quasi-random primed library synthesis followed by PCR, ligation-mediated PCR, and single-primer isothermal amplification, were tested in combination with different DNA extraction protocols for various quantities of genomic DNA input. The efficiency of each method was evaluated by comparing the genotypes obtained from 15 cultured cells (representative of an embryonic biopsy) to unamplified reference gDNA. The gDNA input, gDNA extraction method and amplification technology were all found to be critical for successful genome-wide genotyping. The selected WGA platform was then tested on embryo biopsies (n = 226), comparing their results to those of biopsies collected after birth. Although WGA inevitably leads to a random loss of information and to the introduction of erroneous genotypes, following genomic imputation the resulting genetic indexes of both sources of DNA were highly correlated (r = 0.99), showing that WGA can provide DNA in sufficient quantities for successful genome-wide genotyping starting from an early embryo biopsy. However, imputation from parental and population genotypes is a requirement for completing and correcting genotypic data. Judicious selection of the WGA platform, careful handling of the samples and genomic imputation together make it possible to perform extremely reliable genomic evaluations for pre-transfer embryos.
Tokunaga, Yoshitaka
This paper presents techniques for estimating machine parameters of power transformers using the transformer design procedure and a genetic algorithm with real coding. It is, in particular, very difficult to obtain machine parameters for transformers already installed in customers' facilities; using these estimation techniques, machine parameters can be calculated from only the nameplate data of such transformers. Subsequently, an EMTP-ATP simulation of the inrush current was carried out using the machine parameters estimated by the techniques developed in this study, and the simulation results reproduced the measured waveforms.
Reliable estimation of adsorption isotherm parameters using adequate pore size distribution
Husseinzadeh, Danial; Shahsavand, Akbar [Ferdowsi University of Mashhad, Mashhad (Iran, Islamic Republic of)
2015-05-15
The equilibrium adsorption isotherm has a crucial effect on various characteristics of a solid adsorbent (e.g., pore volume, bulk density, surface area, pore geometry). A historical paradox exists in the conventional estimation of adsorption isotherm parameters. Traditionally, the total amount of adsorbed material (the total adsorption isotherm) has been considered equivalent to the local adsorption isotherm. This assumption is only valid when the corresponding pore size or energy distribution (PSD or ED) of the porous adsorbent can be successfully represented by the Dirac delta function. In practice, the actual PSD (or ED) is far from this assumption, and the traditional method for predicting local adsorption isotherm parameters leads to serious errors. Until now, the powerful combination of inverse theory and the linear regularization technique has failed drastically when used to extract the PSD from real adsorption data. For this reason, all previous studies used synthetic data, because they were not able to extract a proper PSD from the measured total adsorption isotherm with unrealistic local adsorption isotherm parameters. We propose a novel approach that can successfully provide the correct values of local adsorption isotherm parameters without any a priori, unrealistic assumptions. Two distinct methods are suggested, and several illustrative (synthetic and real experimental) examples are presented to clearly demonstrate the effectiveness of the newly proposed methods in computing the correct values of local adsorption isotherm parameters. The impressive performance of the so-called Iterative and Optima methods in extracting the correct PSD is validated using several experimental data sets.
Estimation of AM fungal colonization - Comparability and reliability of classical methods.
Füzy, Anna; Biró, Ibolya; Kovács, Ramóna; Takács, Tünde
2015-12-01
The characterization of mycorrhizal status in hosts can be a good indicator of symbiotic associations in inoculation experiments or in ecological research. The most common microscope-based observation methods, namely (i) the gridline intersect method, (ii) the magnified intersections method and (iii) the five-class system of Trouvelot, were tested to find the simplest, most easily executable, effective and objective ones, and their appropriate parameters, for characterization of mycorrhizal status. In a pot experiment, the white clover (Trifolium repens L.) host plant was inoculated with an AM fungal strain (BEG144; syn. Rhizophagus intraradices) in pumice substrate to monitor AMF colonization properties during host growth. Eleven (seven classical and four new) colonization parameters were estimated by three researchers at twelve sampling times during plant growth. Variations among methods, observers, parallels and individual plants were determined and analysed to select the most appropriate parameters and sampling times for monitoring. The comparability of the parameters of the three methods was also tested. As a result of the experiment, classical parameters were selected for hyphal colonization: colonization frequency in the first stage, or colonization density in the later period, and arbuscular richness of roots. A new parameter was recommended to determine the vesicle and spore content of colonized roots at later stages of symbiosis.
Bayesian Regression and Neuro-Fuzzy Methods Reliability Assessment for Estimating Streamflow
Yaseen A. Hamaamin
2016-07-01
Accurate and efficient estimation of streamflow in a watershed's tributaries is a prerequisite for viable water resources management. This study couples process-driven and data-driven methods of streamflow forecasting as a more efficient and cost-effective approach to water resources planning and management. Two data-driven methods, Bayesian regression and the adaptive neuro-fuzzy inference system (ANFIS), were tested separately as faster alternatives to a calibrated and validated Soil and Water Assessment Tool (SWAT) model for predicting streamflow in the Saginaw River Watershed of Michigan. For the data-driven modeling process, four structures were assumed and tested: general, temporal, spatial, and spatiotemporal. Results showed that both Bayesian regression and ANFIS can replicate global (watershed) and local (subbasin) results similar to a calibrated SWAT model. At the global level, Bayesian regression and ANFIS model performance were satisfactory based on Nash-Sutcliffe efficiencies of 0.99 and 0.97, respectively. At the subbasin level, Bayesian regression and ANFIS models were satisfactory for 155 and 151 out of 155 subbasins, respectively. Overall, the most accurate method was a spatiotemporal Bayesian regression model that outperformed the other models at both global and local scales. However, all ANFIS models performed satisfactorily at both scales.
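The Nash-Sutcliffe efficiencies quoted above are straightforward to compute from observed and simulated series; a minimal sketch (the function name and data are illustrative, not from the study):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the residual
    sum of squares to the variance of the observations around their
    mean. NSE = 1 is a perfect fit; NSE <= 0 means the model is no
    better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Perfect agreement gives NSE = 1; predicting the mean gives NSE = 0.
nse = nash_sutcliffe([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
```

An NSE close to 1, as reported for both data-driven models, indicates that the simulated series captures nearly all of the variance of the observations.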
Shiraishi, Hiroshi
2010-01-01
… Based on this, we construct an estimator of the lower tail of the estimation error. Moreover, we introduce the Estimation Error Efficient Portfolio, which considers the estimation error as the portfolio risk…
Pearson, E.; Smith, M. W.; Klaar, M. J.; Brown, L. E.
2017-09-01
High resolution topographic surveys such as those provided by Structure-from-Motion (SfM) contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-metre scale topographic variability (or 'surface roughness') to sediment grain size by deriving empirical relationships between the two. In fluvial applications, such relationships permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing improved data to drive three-dimensional hydraulic models, allowing rapid geomorphic monitoring of sub-reach river restoration projects, and enabling more robust characterisation of riverbed habitats. However, comparison of previously published roughness-grain-size relationships shows substantial variability between field sites. Using a combination of over 300 laboratory and field-based SfM surveys, we demonstrate the influence of inherent survey error, irregularity of natural gravels, particle shape, grain packing structure, sorting, and form roughness on roughness-grain-size relationships. Roughness analysis from SfM datasets can accurately predict the diameter of smooth hemispheres, though natural, irregular gravels result in a higher roughness value for a given diameter and different grain shapes yield different relationships. A suite of empirical relationships is presented as a decision tree which improves predictions of grain size. By accounting for differences in patch facies, large improvements in D50 prediction are possible. SfM is capable of providing accurate grain size estimates, although further refinement is needed for poorly sorted gravel patches, for which c-axis percentiles are better predicted than b-axis percentiles.
Scale Reliability Evaluation with Heterogeneous Populations
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…
Lifetime Reliability Assessment of Concrete Slab Bridges
Thoft-Christensen, Palle
A procedure for lifetime assessment of the reliability of short concrete slab bridges is presented in the paper. Corrosion of the reinforcement is the deterioration mechanism used for estimating the reliability profiles of such bridges. The importance of using sensitivity measures is stressed. …
Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)
2015-10-15
Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.
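The variance decomposition behind these reproducibility figures can be illustrated with a simple sketch: the intraclass correlation is the share of total variance attributable to true differences between subjects. This is a simplification of the random effects model actually fitted in the study; the function name and inputs are illustrative:

```python
def icc_from_components(var_subject, var_scan, var_rater, var_error):
    """Intraclass correlation from variance components: variance due
    to true between-subject differences as a fraction of the total
    variance (subject + scan + rater + residual)."""
    total = var_subject + var_scan + var_rater + var_error
    return var_subject / total

# Using the percentage breakdown quoted in the abstract
# (86.0% subject, 7.5% scan, 1.1% rater, ~0% residual):
icc = icc_from_components(86.0, 7.5, 1.1, 0.0)
```

With most of the variance between subjects and little between raters or repeat reads, the ICC is high, which is the sense in which the sound-speed measurements are called highly reproducible.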
Kapil Yadav
2015-01-01
Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt were 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate, both in the field setting and for different types of salt.
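The imprecision percentages quoted above are coefficients of variation (sample standard deviation as a percentage of the mean); a minimal sketch, with readings invented for illustration:

```python
def cv_percent(measurements):
    """Coefficient of variation: sample standard deviation (n - 1
    denominator) as a percentage of the mean, the usual statistic
    for reporting assay imprecision."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# Repeat readings of a nominal 30 ppm salt sample (invented values)
readings = [29.1, 30.4, 31.0, 29.8, 30.2]
cv = cv_percent(readings)
```

Intra-assay CV uses repeats within one run, interassay CV uses repeats across runs, and interoperator CV uses repeats across raters; the formula is the same in each case.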
Sara Mortaz Hejri
2013-01-01
Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of BRM when the pass-fail standard in an objective structured clinical examination (OSCE) is calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine OSCE stations with direct observation, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist score cut-off on the regression equation for the global scale cut-off set at 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R2 coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R2 coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that BRM is a reliable method of setting the standard for an OSCE, with the advantage of providing data for quality assurance.
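The per-station borderline regression step can be sketched as an ordinary least-squares fit of checklist scores on global ratings, with the station cut-off read off the fitted line at the borderline global grade (set at 2 in the study). The data below are invented for illustration:

```python
def brm_cutoff(global_scores, checklist_scores, borderline_grade=2.0):
    """Fit checklist = a + b * global by ordinary least squares and
    return the checklist score predicted at the borderline grade."""
    n = len(global_scores)
    mean_x = sum(global_scores) / n
    mean_y = sum(checklist_scores) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(global_scores, checklist_scores))
    sxx = sum((x - mean_x) ** 2 for x in global_scores)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept + slope * borderline_grade

# OSCE pass-fail standard: average of the per-station cut-offs
stations = [
    ([1, 2, 3, 4], [40, 50, 60, 70]),  # (global, checklist) per student
    ([1, 2, 3, 4], [30, 45, 60, 75]),
]
cutoffs = [brm_cutoff(g, c) for g, c in stations]
pass_mark = sum(cutoffs) / len(cutoffs)
```

Averaging per-station cut-offs, as in the study, smooths out station-level noise, which is why the RMSE of the resulting OSCE standard can be small even when individual stations vary.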
A Akbari Sari
2012-10-01
Background: The aim of this study was to estimate the frequency and rate of the 50 most common types of invasive procedures in Iran. Methods: Data on the number of all invasive procedures, and of each type of procedure, conducted in Iran in 2010 were collected from the databases of the main insurance organizations. These numbers were sorted in an Excel database, and the 50 most frequent invasive procedures were selected. Then, according to the population covered by the given insurance organizations, and based on the total population of Iran in 2011, we estimated the number and rate of each of the selected procedures. Results: It was estimated that a total of 769,500 natural vaginal deliveries (NVD; 1,026 per 100,000 population) were performed in Iran in 2011, followed by 416,790 cataract operations (556 per 100,000 population), 401,436 cesarean deliveries (535 per 100,000 population), 260,514 coronary angiographies (347 per 100,000 population), 181,836 varicocele operations (242 per 100,000 population), 144,918 appendectomies (193 per 100,000 population), 134,766 rhinoplasties (180 per 100,000 population) and 105,912 pilonidal cyst operations (141 per 100,000 population). Conclusion: The results could be used to identify and select the most frequent invasive procedures in Iran, to calculate the average cost of each procedure, and to use these costs to estimate hospital budgets and improve policy-making.
Reliability of subjective wound assessment
M.C.T. Bloemen; P.P.M. van Zuijlen; E. Middelkoop
2011-01-01
Introduction: Assessment of the take of split-skin grafts and the rate of epithelialisation are important parameters in burn surgery. Such parameters are normally estimated by the clinician in a bedside procedure. This study investigates whether this subjective assessment is reliable for graft take and epithelialisation.
Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.
2014-01-01
Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the required competencies.
Levillain, Joseph; Thongo M'Bou, Armel; Deleporte, Philippe; Saint-André, Laurent; Jourdan, Christophe
2011-07-01
Despite their importance for plant production, estimates of below-ground biomass and its distribution in the soil are still difficult and time consuming to obtain, and no single reliable methodology is available for different root types. To identify the best method for root biomass estimation, four methods with different labour requirements were tested at the same location. The four methods, applied in a 6-year-old Eucalyptus plantation in Congo, were based on different soil sampling volumes: auger (8 cm in diameter), monolith (25 × 25 cm quadrat), half Voronoi trench (1.5 m(3)) and full Voronoi trench (3 m(3)), the last chosen as the reference method. With the reference method (0-1 m deep), fine-root biomass (FRB, diameter <2 mm) was estimated at 1.8 t ha(-1), medium-root biomass (MRB, diameter 2-10 mm) at 2.0 t ha(-1), coarse-root biomass (CRB, diameter >10 mm) at 5.6 t ha(-1) and stump biomass at 6.8 t ha(-1). Total below-ground biomass was estimated at 16.2 t ha(-1) (root:shoot ratio of 0.23) for this eucalypt plantation with a density of 800 trees ha(-1). The density of FRB was very high (0.56 t ha(-1)) in the top soil horizon (0-3 cm layer) and decreased greatly (0.3 t ha(-1)) with depth (50-100 cm). Without labour requirement considerations, no significant differences were found between the four methods for FRB and MRB; however, CRB was better estimated by the half and full Voronoi trenches. When labour requirements were considered, the most effective method was auger coring for FRB, whereas the half and full Voronoi trenches were the most appropriate methods for MRB and CRB, respectively. As CRB combined with stumps amounted to 78 % of total below-ground biomass, a full Voronoi trench is strongly recommended when estimating total standing root biomass. Conversely, for FRB estimation, auger coring is recommended, with a design pattern accounting for the spatial variability of fine-root distribution.
Dietz, Kelly R. [University of Minnesota, Department of Radiology, Minneapolis, MN (United States); Zhang, Lei [University of Minnesota, Biostatistical Design and Analysis Center, Minneapolis, MN (United States); Seidel, Frank G. [Lucile Packard Children' s Hospital, Department of Radiology, Stanford, CA (United States)
2015-08-15
Prior to digital radiography, it was possible for a radiologist to easily estimate the size of a patient on an analog film. Because variable magnification may be applied at the time of processing an image, it is now more difficult to visually estimate an infant's size on the monitor. Since gestational age and weight significantly impact the differential diagnosis of neonatal diseases and determine the expected size of kidneys or appearance of the brain by MRI or US, this information is useful to a pediatric radiologist. Although this information may be present in the electronic medical record, it is frequently not readily available to the pediatric radiologist at the time of image interpretation. To determine if there was a correlation between gestational age and weight of a premature infant and the transverse chest diameter (rib to rib) on admission chest radiographs. This retrospective study was approved by the institutional review board, which waived informed consent. The maximum transverse chest diameter, outer rib to outer rib, was measured on admission portable chest radiographs of 464 patients admitted to the neonatal intensive care unit (NICU) during the 2010 calendar year. Regression analysis was used to investigate the association between chest diameter and gestational age/birth weight. A quadratic term of chest diameter was used in the regression model. Chest diameter was statistically significantly associated with both gestational age (P < 0.0001) and birth weight (P < 0.0001). An infant's gestational age and birth weight can be reliably estimated by comparing a simple measurement of the transverse chest diameter on a digital chest radiograph with the tables and graphs in our study. (orig.)
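A quadratic regression of the kind described above can be sketched as a least-squares fit of y = b0 + b1·x + b2·x², solved here via the normal equations with Cramer's rule. The coefficients and measurements below are invented for illustration, not those fitted in the study:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x**2 via the normal
    equations, solved with Cramer's rule (fine for 3 unknowns)."""
    # Moments of x up to the 4th power and cross-moments with y
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for col in range(3):  # replace one column with t (Cramer's rule)
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = t[r]
        coeffs.append(det3(m) / d)
    return tuple(coeffs)  # (b0, b1, b2)

# Synthetic data lying exactly on y = 10 + 3x + 0.1x^2
# (e.g. chest diameter in cm vs gestational age in weeks)
xs = [5.0, 6.0, 7.0, 8.0, 9.0]
ys = [10 + 3 * x + 0.1 * x ** 2 for x in xs]
b0, b1, b2 = fit_quadratic(xs, ys)
```

Once fitted, the model predicts gestational age (or birth weight) from a single chest-diameter measurement, which is how such a regression supports the lookup tables and graphs mentioned in the abstract.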
Lawes, R.
2016-12-01
The Australian grain-growing region is vast, spanning latitudes -27° to -42°, where some 25 million tonnes of wheat are produced and soils, crops and climates vary considerably. Predicting the area of individual crops is time consuming and currently done by survey, while yield estimates are derived from these areas and from information about grain receivables, with little pre-harvest information available to industry. The existing approach fails to provide reliable, timely, small-scale information about production. Similarly, previous attempts to predict yield using satellite-derived information rely on information collected using the existing systems to calibrate models. We have developed a crop productivity and yield model, called C-Store Crop, that uses remotely sensed vegetation indices along with site-based rainfall, radiation and temperature information. Model calibration using 3000 points derived from farmer-supplied yield maps for wheat, barley, canola and chickpea showed strong relationships (>70%) between modelled plant mass and observed crop yield at the paddock scale. C-Store Crop is being applied at 250 m and 25 m grid resolution. Farmer-supplied yield data were also used to train a combination of radar and Landsat images, collected while the crop is growing, to discriminate between crop types. Landsat information alone was unable to discriminate legume and cereal crops, and cloud cover prevented access to appropriate scenes. Inclusion of radar information reduced errors of commission and omission. By combining the C-Store Crop model with remote estimates of crop type, we anticipate predicting crop type and crop yield, with uncertainty estimates, across the Australian continent.
L. Altarejos-García
2012-07-01
This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for estimating the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define the flood severity, a concept used for decision-making in the context of flood risk assessment. The proposed method is particularly useful when the complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as the water depth and velocity is demonstrated in the case of a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing probability estimation of flood severity. An application of the method to the same river reach using a 2-D Shallow Water Equations (SWE) model is then performed. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and uncertainty in the extent of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it is the preferred approach.
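Rosenblueth's two-point estimate evaluates the performance function at the 2^n combinations of mean ± one standard deviation of the inputs and weights the results equally. A minimal sketch for independent, symmetrically distributed inputs; the toy `depth` function and its numbers stand in for a hydraulic model, they are not from the paper:

```python
from itertools import product

def rosenblueth_pem(g, means, stds):
    """Rosenblueth's Point-Estimate Method for independent inputs
    with symmetric distributions: evaluate g at the 2**n corner
    points mu_i +/- sigma_i, each with weight 1/2**n, and return
    (mean, variance) of g."""
    n = len(means)
    weight = 1.0 / 2 ** n
    values = []
    for signs in product((-1.0, 1.0), repeat=n):
        point = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        values.append(g(point))
    mean = weight * sum(values)
    var = weight * sum(v ** 2 for v in values) - mean ** 2
    return mean, var

# Toy performance function standing in for a hydraulic model:
# "water depth" as a nonlinear function of two uncertain inputs.
def depth(x):
    return x[0] * x[1]

m, v = rosenblueth_pem(depth, means=[2.0, 3.0], stds=[0.5, 0.2])
```

With n uncertain inputs, PEM needs only 2^n model runs instead of the thousands typical of Monte Carlo, which is the computational advantage the paper exploits for the 2-D SWE model.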
Sztulman, L; Szternberg, A; Grouteau, E; Claudet, I
2011-11-01
To analyze the accuracy of estimates made by medical staff and parents regarding fees for consultations and frequently prescribed medical exams. The questionnaire focused on the value in euros for the following: day and night consultation in the pediatric emergency department, blood and urine analysis, electrocardiogram, chest and abdominal x-ray, abdominal ultrasound, upper digestive endoscopy, CT scan, cerebral MRI (without anesthesia), an arm cast, and superficial wound repair. Medical staff belonged to different units of the children's hospital. The parents interviewed had consulted at the pediatric emergency unit. Neither of the two investigators was familiar with the fee structure. To avoid inducing a gradation in estimates, questions were asked with no pre-established order. To limit the possibility of participants discussing the questionnaire with their colleagues or searching for the real value, all medical staff members were assessed within a 48-h period. The responses of 185 medical employees (23 pediatricians, 28 interns, 81 nurses, 45 childcare assistants, seven nurse supervisors) and 187 parents were analyzed and compared. Less than 25% of the population gave an answer with an accepted error of ± 30%. Parents and hospital staff overestimated costs; parents and childcare assistants overestimated more than other medical employees. Radiological exams were the most overestimated procedures, with the largest proportion of the average deviation from normal value: CT scan 850 ± 1100%, cerebral MRI 370 ± 590%, abdominal x-ray 240 ± 390%, and chest x-ray 190 ± 320%. Part of our societal culture and now a requirement, the right to healthcare has a cost. This cost is often overestimated by caregivers and the general population. Global understanding of the costs related to medical care requires educating the population and medical professionals. Medical staff should be informed of the real costs of treatment to enable them to manage unnecessary costs.
An alternative procedure for estimating the population mean in simple random sampling
Housila P. Singh
2012-03-01
This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including the Gupta and Shabbir (2008) estimator.
Basu, Asit P; Basu, Sujit K
1998-01-01
This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiments, and mixtures of Weibull distributions.
Paek, Insu; Cai, Li
2014-01-01
The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…
Vasconcelos, Ilka M; Brasil, Isabel Cristiane F; Oliveira, José Tadeu A; Campello, Cláudio C; Maia, Fernanda Maria M; Campello, Maria Verônica M; Farias, Davi F; Carvalho, Ana Fontenele U
2009-06-10
This study assessed whether chemical analyses are sufficient to guarantee the safety of heat processing of soybeans (SB) for human/animal consumption. The effects of extrusion and dry-toasting were analyzed upon seed composition and performance of broiler chicks. Neither process induced appreciable changes in protein content or amino acid composition. Conversely, toasting reduced all antinutritional proteins by over 85%. Despite that, the animals fed on toasted SB demonstrated a low performance (feed efficiency 57.8 g/100 g). Extrusion resulted in higher contents of antinutrients, particularly of trypsin inhibitors (27.53 g/kg flour), but animal performance was significantly better; of the two treatments in the feeding trials, extrusion appears to be the safest method. In conclusion, in order to evaluate the reliability of any processing method intended to improve nutritional value, the combination of chemical and animal studies is necessary.
Eça, L. [Instituto Superior Técnico, Department of Mechanical Engineering, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Hoekstra, M. [Maritime Research Institute Netherlands, PO Box 28 6700 AA, Wageningen (Netherlands)
2014-04-01
This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well known Grid Convergence Index. Examples of application of the procedure are included. - Highlights: • Estimation of the numerical uncertainty of any integral or local flow quantity. • Least squares fits to power series expansions to handle noisy data. • Excellent results obtained for manufactured solutions. • Consistent results obtained for practical CFD calculations. • Reduces to the well known Grid Convergence Index for well-behaved data sets.
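For well-behaved data sets the abstract states that the procedure reduces to the well-known Grid Convergence Index. The classical three-grid version of that calculation (not the paper's least-squares variant) can be sketched as follows; the manufactured solution values used below are illustrative.

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of grid convergence from solutions on the fine
    (f1), medium (f2) and coarse (f3) grids with a constant grid
    refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, p, r, fs=1.25):
    """Grid Convergence Index on the fine grid: a safety factor fs
    applied to the relative Richardson-extrapolation error estimate."""
    return fs * abs((f2 - f1) / f1) / (r ** p - 1.0)
```

With a manufactured solution f(h) = 1 + 0.5 h^2 sampled at h = 1, 2, 4 the observed order comes out as exactly 2, matching the discretization order.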
Poulet, Axel; Privat, Maud; Ponelle, Flora; Viala, Sandrine; Decousus, Stephanie; Perin, Axel; Lafarge, Laurence; Ollier, Marie; El Saghir, Nagi S; Uhrhammer, Nancy; Bignon, Yves-Jean; Bidet, Yannick
Screening for BRCA mutations in women with familial risk of breast or ovarian cancer is an ideal situation for high-throughput sequencing, providing large amounts of low-cost data. However, the 454 (Roche) and Ion Torrent (Thermo Fisher) technologies produce homopolymer-associated indel errors, complicating their use in routine diagnostics. We developed software, named AGSA, which helps to detect false positive mutations in homopolymeric sequences. Seventy-two familial breast cancer cases were analysed in parallel by amplicon 454 pyrosequencing and Sanger dideoxy sequencing for genetic variations of the BRCA genes. All 565 variants detected by dideoxy sequencing were also detected by pyrosequencing. Furthermore, pyrosequencing detected 42 variants that were missed by the Sanger technique. Six amplicons contained homopolymer tracts in the coding sequence that were systematically misread by the software supplied by Roche. Read data plotted as histograms by AGSA software aided the analysis considerably and allowed validation of the majority of homopolymers. As an optimisation, an additional 250 patients were analysed using microfluidic amplification of regions of interest (Access Array, Fluidigm) of the BRCA genes, followed by 454 sequencing and AGSA analysis. AGSA complements a complete line of high-throughput diagnostic sequence analysis, reducing time and costs while increasing reliability, notably for homopolymer tracts.
Axel Poulet
2016-01-01
Screening for BRCA mutations in women with familial risk of breast or ovarian cancer is an ideal situation for high-throughput sequencing, providing large amounts of low-cost data. However, the 454 (Roche) and Ion Torrent (Thermo Fisher) technologies produce homopolymer-associated indel errors, complicating their use in routine diagnostics. We developed software, named AGSA, which helps to detect false positive mutations in homopolymeric sequences. Seventy-two familial breast cancer cases were analysed in parallel by amplicon 454 pyrosequencing and Sanger dideoxy sequencing for genetic variations of the BRCA genes. All 565 variants detected by dideoxy sequencing were also detected by pyrosequencing. Furthermore, pyrosequencing detected 42 variants that were missed by the Sanger technique. Six amplicons contained homopolymer tracts in the coding sequence that were systematically misread by the software supplied by Roche. Read data plotted as histograms by AGSA software aided the analysis considerably and allowed validation of the majority of homopolymers. As an optimisation, an additional 250 patients were analysed using microfluidic amplification of regions of interest (Access Array, Fluidigm) of the BRCA genes, followed by 454 sequencing and AGSA analysis. AGSA complements a complete line of high-throughput diagnostic sequence analysis, reducing time and costs while increasing reliability, notably for homopolymer tracts.
Guimaraes, Margarete Cristina; Silva, Teogenes Augusto da, E-mail: margaretecristinag@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Rosado, Paulo H.G. [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2016-07-01
Metrology laboratories are expected to provide the X-radiation beams established by international standardization organizations for performing calibration and testing of dosimeters. Reliable and traceable standard dosimeters should be used in the calibration procedure. The aim of this work was to study the reliability of the NE 2575 ionization chamber used as the standard dosimeter in the air kerma calibration procedure adopted at the CDTN Calibration Laboratory. (author)
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
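The threshold, slope, and lapse rate estimated by the UML procedure parameterize a psychometric function, commonly written as a logistic with guess and lapse rates. The sketch below is generic (not the toolbox's actual code), and the parameter values in the defaults are illustrative assumptions.

```python
import math

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function: probability of a correct
    response at stimulus level x, with threshold alpha, slope beta,
    guess rate gamma and lapse rate lam (the quantities the UML
    procedure tracks; default values here are illustrative)."""
    return gamma + (1.0 - gamma - lam) / (1.0 + math.exp(-beta * (x - alpha)))

def log_likelihood(data, alpha, beta, gamma=0.5, lam=0.02):
    """Bernoulli log-likelihood of (stimulus, correct) pairs: the
    quantity a maximum-likelihood update maximises over parameters."""
    ll = 0.0
    for x, correct in data:
        p = psychometric(x, alpha, beta, gamma, lam)
        ll += math.log(p if correct else 1.0 - p)
    return ll
```

At the threshold (x = alpha) the function sits halfway between the guess rate and the lapse-limited ceiling, which is a convenient check on any implementation.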
Huang, Liping; Crino, Michelle; Wu, Jason HY; Woodward, Mark; Land, Mary-Anne; McLean, Rachael; Webster, Jacqui; Enkhtungalag, Batsaikhan; Nowson, Caryl A; Elliott, Paul; Cogswell, Mary; Toft, Ulla; Mill, Jose G.; Furlanetto, Tania W.; Ilich, Jasminka Z.
2016-01-01
Background Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. Objective The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one other and against estimates derived from 24-hour urine samples. The effect...
Al-Musawi Safaa Ismael
2016-01-01
The first task is to calculate the ageing reached on the basis of the loading history, according to the International Electrotechnical Commission standard algorithm. In order to verify the obtained results, measurements of the polymerization index were made on 28 paper samples taken directly from the low-voltage terminals (winding ends and bus connections) of the transformer under test, rated 380 MVA, 2×15.75 kV/420 kV. The complete procedure for locating and removing the paper samples is described, thereby providing guidance on how this should be done, as determined by the specific conditions of the transformer under test. Furthermore, the determination of the limit viscosity and the use of its relationship with the polymerization index are explained. A comparison is made with the results of liquid chromatography of the oil. The results of particle sort and size analysis are shown. Finally, an estimate of the transformer's remaining life is made, which is of paramount importance when defining the steps to be taken either in the revitalization process or in transformer replacement planning.
Boughebri, Omar; Maqdes, Ali; Moraiti, Constantina; Dib, Choukry; Leclère, Franck Marie; Valenti, Philippe
2015-05-01
The Instability Severity Index Score (ISIS) includes preoperative clinical and radiological risk factors to select patients who can benefit from an arthroscopic Bankart procedure with a low rate of recurrence. Patients who underwent an arthroscopic Bankart for anterior shoulder instability with an ISIS lower than or equal to four were assessed after a minimum of 5-year follow-up. Forty-five shoulders were assessed at a mean of 79 months (range 60-118 months). Average age was 29.4 years (range 17-58 years) at the time of surgery. Postoperative functions were assessed by the Walch and Duplay and the Rowe scores for 26 patients; an adapted telephonic interview was performed for the 19 remaining patients who could not be reassessed clinically. A failure was defined by the recurrence of an anterior dislocation or subluxation. Patients were asked whether they were finally very satisfied, satisfied or unhappy. The mean Walch and Duplay score at last follow-up was 84.3 (range 35-100). The final result was excellent in 14 patients (53.8 %), good in seven cases (26.9 %), poor in three patients (11.5 %) and bad in two patients (7.7 %). The mean Rowe score was 82.6 (range 35-100). Thirty-nine patients (86.7 %) were subjectively very satisfied or satisfied, and six (13.3 %) were unhappy. Four patients (8.9 %) had a recurrence of frank dislocation with a mean delay of 34 months (range 12-72 months). Three of them had a Hill-Sachs lesion preoperatively. Two patients had a preoperative ISIS of 4 points and two patients of 3 points. Selection based on the ISIS yields a low rate of failure after an average term of 5 years. Lowering the indication limit to 3 points makes it possible to avoid the association of two major risk factors for recurrence, each valued at 2 points. The existence of a Hill-Sachs lesion is a stronger indicator for the outcome of instability repair. Level IV, Retrospective Case Series, Treatment Study.
Hallam, J; Jones, J A
1983-01-01
Generalised Derived Limits (GDLs) for the discharge of radioactive material to the atmosphere are evaluated using parameter values to ensure that the exposure of the critical group is unlikely to be underestimated significantly. Where the discharge is greater than about 5% of the GDL, a more rigorous estimate of the derived limit may be warranted. This report describes a procedure for estimating site specific derived limits for discharges of radioactivity to the atmosphere taking into account the conditions of the release and the location and habits of the exposed population. A worksheet is provided to assist in carrying out the required calculations.
Wang, Wencui; Bottauscio, Oriano; Chiampi, Mario; Giordano, Domenico; Zilberti, Luca
2013-04-01
The paper proposes and discusses a boundary element procedure able to predict the distribution of the electric field induced in a human body exposed to a low-frequency magnetic field produced by unknown sources. As a first step, the magnetic field on the body surface is reconstructed starting from the magnetic field values detected on a closed surface enclosing the sources. Then, the solution of a boundary value problem provides the electric field distribution inside the human model. The procedure is tested and validated by considering different non-uniform magnetic field distributions generated by a Helmholtz coil system as well as different locations of the human model.
Li, Yan; Chen, Jianjun; Liu, Jipeng; Zhang, Lei; Wang, Weiguo; Zhang, Shaofeng
2013-09-01
The reliability of all-ceramic crowns is of concern to both patients and doctors. This study introduces a new methodology for quantifying the reliability of all-ceramic crowns based on the stress-strength interference theory and finite element models. The variables selected for the reliability analysis include the magnitude of the occlusal contact area, the occlusal load and the residual thermal stress. The calculated reliabilities of crowns under different loading conditions showed that overly small occlusal contact areas, or too great a difference in thermal coefficient between the veneer and core layers, led to high failure probabilities. These results were consistent with many previous reports. Therefore, the methodology is shown to be a valuable method for analyzing the reliability of restorations in the complicated oral environment.
Sampling procedure in a willow plantation for estimation of moisture content
Nielsen, Henrik Kofoed; Lærke, Poul Erik; Liu, Na
2015-01-01
Heating value and fuel quality of wood is closely connected to moisture content. In this work the variation of moisture content (MC) of short rotation coppice (SRC) willow shoots is described for five clones during one harvesting season. Subsequently an appropriate sampling procedure minimising...
Evaluation of procedures for estimation of the isosteric heat of adsorption in microporous materials
Krishna, R.
2014-01-01
The major objective of this communication is to evaluate procedures for estimation of the isosteric heat of adsorption, Qst, in microporous materials such as zeolites, metal organic frameworks (MOFs), and zeolitic imidazolate frameworks (ZIFs). For this purpose we have carefully analyzed published experimental data.
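One standard estimation route for Qst, which such evaluations typically take as a baseline, is the Clausius-Clapeyron relation applied to isotherms measured at two temperatures. The sketch below is generic thermodynamics, not the paper's specific procedure; p1 and p2 are the pressures giving the same loading at temperatures T1 and T2.

```python
import math

R_GAS = 8.314  # universal gas constant, J mol^-1 K^-1

def isosteric_heat(p1, t1, p2, t2):
    """Isosteric heat of adsorption at a fixed loading q from two
    isotherms, via the Clausius-Clapeyron relation:
        Qst = R * ln(p2/p1) / (1/T1 - 1/T2)
    Returns Qst in J/mol for pressures in any consistent unit."""
    return R_GAS * math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)
```

Feeding the formula synthetic isotherm points generated from a known Qst recovers that value, which is a useful consistency check before applying it to experimental data.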
2010-07-01
40 CFR Part 76, Appendix B: Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1 Boilers (Protection of Environment; Acid Rain Nitrogen Oxides Emission Reduction Program; 2010-07-01 edition).
Yildirim, Isa; Ansari, Rashid; Samil Yetik, I.; Shahidi, Mahnaz
2010-03-01
Phosphorescence lifetime measurement based on a frequency domain approach is used to estimate oxygen tension in large retinal blood vessels. Classical least squares (LS) estimation was initially used to determine oxygen tension indirectly from intermediate variables. A spatial regularized least squares (RLS) method was later proposed to reduce the high variance of oxygen tension estimated by the LS method. In this paper, we provide a solution using a modified RLS (MRLS) approach that utilizes prior knowledge about retinal vessel oxygenation, based on expected oxygen tension values in retinal arteries and veins. The performance of the MRLS method was evaluated on simulated and experimental data by determining the bias, variance, and mean absolute error (MAE) of oxygen tension measurements and comparing these parameters with those derived using the LS and RLS methods.
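The idea of pulling a least-squares estimate toward prior physiological knowledge can be illustrated with a scalar Tikhonov example. This is a didactic stand-in for the paper's MRLS approach, not its actual formulation, and the sample values, prior, and regularization weight below are hypothetical.

```python
def regularized_mean(samples, prior, lam):
    """Minimise sum((y - x)^2) + lam * (x - prior)^2 over x.
    Setting the derivative to zero gives the closed form below:
    the estimate is shrunk toward the prior, trading a small bias
    for reduced variance, which is the spirit of MRLS."""
    n = len(samples)
    return (sum(samples) + lam * prior) / (n + lam)

# Hypothetical oxygen-tension readings (mmHg) and an assumed arterial prior.
estimate = regularized_mean([50.0, 60.0], prior=40.0, lam=2.0)
```

With lam = 0 the estimator reduces to the plain sample mean, recovering the unregularized LS solution.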
An empirical comparison of estimation procedures for the von Bertalanffy growth equation
Vaughan, D.S.; Kanciruk, P.
1982-01-01
One non-linear and two linear methods of fitting the von Bertalanffy growth equation to length-age data were compared using Monte Carlo simulations of fish populations while varying the standard error of the length, total sample size, sampling time interval, von Bertalanffy growth parameter, and annual adult survival. The iterative, non-linear method usually produced the most accurate and precise parameter estimates. The non-linear method also provided asymptotic confidence intervals about point estimates, placed fewest constraints on data collection, and was the easiest to use. It is suggested that traditional linear solutions to the von Bertalanffy growth equation be abandoned.
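The growth equation being fitted here is L(t) = Linf * (1 - exp(-K * (t - t0))). A crude grid-search least-squares fit, shown below as a stand-in for the iterative non-linear method the study recommends, makes the fitting problem concrete; the grids and synthetic parameter values are illustrative.

```python
import math

def vbgf(t, l_inf, k, t0):
    """von Bertalanffy growth equation: length at age t."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

def fit_vbgf(ages, lengths, l_inf_grid, k_grid, t0_grid):
    """Least-squares fit by exhaustive grid search over candidate
    parameter values (a simple sketch, not the study's iterative
    non-linear solver).  Returns (sse, l_inf, k, t0) for the best fit."""
    best = None
    for l_inf in l_inf_grid:
        for k in k_grid:
            for t0 in t0_grid:
                sse = sum((l - vbgf(t, l_inf, k, t0)) ** 2
                          for t, l in zip(ages, lengths))
                if best is None or sse < best[0]:
                    best = (sse, l_inf, k, t0)
    return best
```

On noise-free synthetic length-age data the search recovers the generating parameters exactly, which is a minimal correctness check before confronting real, noisy data.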
Urbanová, Petra; Ross, Ann H; Jurda, Mikoláš; Nogueira, Maria-Ines
2014-09-01
In the framework of forensic anthropology, osteometric techniques are generally preferred over visual examinations due to a higher level of reproducibility and repeatability; qualities that are crucial within a legal context. The use of osteometric methods has been further reinforced by incorporating statistically-based algorithms and large reference samples in a variety of user-friendly software applications. However, the continued increase in admixture of human populations has made the use of osteometric methods for estimation of ancestry much more complex, which confounds one of the major requirements of ancestry assessment - intra-population homogeneity. The present paper tests the accuracy of ancestry and sex assessment using four identification software tools, specifically FORDISC 2.0, FORDISC 3.1.293, COLIPR 1.5.2 and 3D-ID 1.0. Software accuracy was tested in a sample of 174 documented human crania of Brazilian origin composed of different ancestral groups (i.e., European Brazilians, Afro-Brazilians, and Japanese Brazilians and individuals of admixed ancestry). The results show that regardless of the software algorithm employed and the composition of the reference database, all methods were able to allocate approximately 50% of Brazilian specimens to an appropriate major reference group. Of the three ancestral groups, Afro-Brazilians were especially prone to misclassification. Japanese Brazilians, by contrast, were shown to be relatively easily recognizable as being of Asian descent, but at the same time showed a strong affinity towards Hispanic crania, in particular when the classification based on the FDB was carried out in FORDISC. For crania of admixed origin, all of the algorithms showed a considerably higher rate of inconsistency, with a tendency for misclassification into Asian and American Hispanic groups. Sex assessments revealed an overall modest to poor reliability (60-71% of correctly classified specimens) using the tested software programs with unbalanced individual
Andreas Berner; Tariq Saleem Alharbi; Eric Carlström; Amir Khorram-Manesh
2015-01-01
Objective: To develop a validated and generalized high-reliability-organization collaborative tool in order to conduct common assessments and information sharing of potential risks during mass gatherings. Methods: The Swedish resource and risk estimation guide was used as the foundation for the development of the generalized collaborative tool by three different expert groups, and then analyzed. Analysis of inter-rater reliability was conducted through simulated cases, using weighted and unweighted κ-statistics. Results: The results revealed a mean unweighted κ-value of 0.37 across the three cases and a mean accuracy of 62% for the tool. Conclusions: The collaboration tool, “STREET”, showed acceptable reliability and validity to be used as a foundation for high-reliability-organization collaboration in a simulated environment. However, the lack of reliability in one of the cases highlights the challenges of creating measurable values from simulated cases. A study of real events could provide higher reliability but would require an already developed tool.
Camici, Stefania; Ciabatta, Luca; Massari, Christian; Brocca, Luca
2017-04-01
, TMPA 3B42-RT, CMORPH, PERSIANN and a new soil-moisture-derived rainfall dataset obtained through the application of the SM2RAIN algorithm (Brocca et al., 2014) to the ASCAT (Advanced SCATterometer) soil moisture product are used in the analysis. The performances obtained with SRPs are compared with those obtained by using ground data during the 6-year period from 2010 to 2015. In addition, the performance obtained by an integration of the above-mentioned SRPs is also investigated, to see whether merged rainfall observations are able to improve flood simulation. Preliminary analyses were also carried out using the IMERG early-run product of the GPM mission. The results highlight that SRPs should be used with caution for rainfall-runoff modelling in the Mediterranean region. Bias correction and model recalibration are necessary steps, even though not always sufficient to achieve satisfactory performances. Indeed, some of the products provide unreliable outcomes, mainly in smaller basins (<500 km2) that, however, represent the main target for flood modelling in the Mediterranean area. The best performances are obtained by integrating different SRPs, and particularly by merging the TMPA 3B42-RT and SM2RAIN-ASCAT products. The promising results of the integrated product are expected to increase confidence in the use of SRPs in hydrological modelling, even in challenging areas such as the Mediterranean. REFERENCES Brocca, L., Ciabatta, L., Massari, C., Moramarco, T., Hahn, S., Hasenauer, S., Kidd, R., Dorigo, W., Wagner, W., Levizzani, V. (2014). Soil as a natural rain gauge: estimating global rainfall from satellite soil moisture data. Journal of Geophysical Research, 119(9), 5128-5141, doi:10.1002/2014JD021489. Masseroni, D., Cislaghi, A., Camici, S., Massari, C., Brocca, L. (2017). A reliable rainfall-runoff model for flood forecasting: review and application to a semiurbanized watershed at high flood risk in Italy. Hydrology Research, in press, doi:10.2166/nh.2016.037.
Innovative Laboratory Procedure to Estimate Thermophysical Parameters of Iso-exo Sleeves
Ignaszak Z.
2017-03-01
The paper focuses on testing the properties of materials used in the form of iso-exo sleeves for risers in ferrous-alloy foundries. These are grainy-fibrous materials containing components that initiate and sustain an exothermic reaction. The thermo-physical parameters characterizing such sleeves are also needed to fill reliable databases for computer simulation of processes in the casting-mould system. Studies using a liquid alloy, especially those comparing different sleeves, bring valuable results, but are also relatively expensive and require longer test preparation time. A simplified method of study under laboratory conditions was proposed, using a furnace heated to a temperature above the ignition temperature of the sleeve material (initiation of the exothermic reaction). This method allows the basic parameters of each new sleeve supplied to foundries to be determined and assures relatively quick evaluation of sleeve quality, by comparison with previous sleeve supplies or with sleeves offered by new providers.
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more widely disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 software replaces the former version, which should no longer be used. DSP2 is a robust, reliable and user-friendly technique for sexing adult ossa coxae. © 2017 Wiley Periodicals, Inc.
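The 0.95 posterior-probability rule described above can be illustrated with a univariate two-class Gaussian classifier. This is a didactic stand-in for the multivariate Linear Discriminant Analysis used by DSP2, and the measurement name and group parameters below are hypothetical, not values from the study.

```python
import math

def norm_pdf(x, mu, sd):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def sex_posterior(x, params_f, params_m, prior_f=0.5):
    """Posterior probability of 'female' for one measurement x under
    class-conditional normal densities (a one-variable stand-in for
    the LDA posterior computed by DSP2)."""
    pf = prior_f * norm_pdf(x, *params_f)
    pm = (1.0 - prior_f) * norm_pdf(x, *params_m)
    return pf / (pf + pm)

def classify(x, params_f, params_m, threshold=0.95):
    """Apply the 0.95 posterior rule: 'F', 'M', or 'indeterminate'."""
    p = sex_posterior(x, params_f, params_m)
    if p >= threshold:
        return "F"
    if p <= 1.0 - threshold:
        return "M"
    return "indeterminate"
```

A measurement halfway between two equal-variance group means yields a posterior of exactly 0.5, so the rule correctly declines to classify it.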
A Review of Sea State Estimation Procedures Based on Measured Vessel Responses
Nielsen, Ulrik Dam
2016-01-01
The operation of ships requires careful monitoring of the related costs while, at the same time, ensuring a high level of safety. A ship's performance with respect to safety and fuel efficiency may be compromised by the encountered waves. Consequently, it is important to estimate the surrounding sea state. Wave rider buoys are not practical, as sea state information is needed in real time and at the actual geographical position of the ship. On the other hand, the analogy between a ship and a floating buoy naturally suggests using the ship itself as a wave buoy. This paper presents a status on techniques for shipboard SSE using measured vessel responses, resembling the concept of traditional wave rider buoys. Moreover, newly developed ideas for shipboard sea state estimation are introduced. The presented material is all based on the author's personal experience, developed within extensive work on the subject.
Taylor, Alexander J.; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L.; Six, Joseph S.; Pavlovskaya, Galina E.; Thomas, Neil R.; Auer, Dorothee P.; Meersmann, Thomas; Faas, Henryk M.
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental “calibration factor” accounting for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments. PMID:27727294
A Quality Assurance Procedure and Evaluation of Rainfall Estimates for C-Band Polarimetric Radar
HU Zhiqun; LIU Liping; WANG Lirong
2012-01-01
A mobile C-band dual-polarimetric weather radar, J type (PCDJ), which adopts simultaneous transmission and simultaneous reception (STSR) of horizontally and vertically polarized signals, was first developed in China in 2008. It was deployed in the radar observation plan of the South China Heavy Rainfall Experiment (SCHeREX) in the summers of 2008 and 2009, as well as in the Tropical Western Pacific Ocean Observation Experiments and Research on the Predictability of High Impact Weather Events from 2008 to 2010 in China (TWPOR). Using the observation data collected in these experiments, the radar systematic error and its sources were analyzed in depth. Meanwhile, an algorithm that smooths the differential propagation phase (ΦDP) for estimating the high-resolution specific differential phase (KDP) was developed. After attenuation correction of the horizontal-polarization reflectivity (ZH) and differential reflectivity (ZDR) of the PCDJ radar by means of KDP, the data quality was improved significantly. Using the quality-controlled radar data, quantitative rainfall estimation was performed, and the results were compared with rain-gauge measurements. A synthetic ZH/KDP-based method was analyzed. The results suggest that the synthetic method has an advantage over the traditional ZH-based method when the rain rate is > 5 mm h(-1): the more intense the rain rate, the higher the accuracy of the estimation.
Abarshi, M M; Mohammed, I U; Wasswa, P; Hillocks, R J; Holt, J; Legg, J P; Seal, S E; Maruthi, M N
2010-02-01
Sampling procedures and diagnostic protocols were optimized for accurate diagnosis of Cassava brown streak virus (CBSV) (genus Ipomovirus, family Potyviridae). A cetyl trimethyl ammonium bromide (CTAB) method was optimized for sample preparation from infected cassava plants and compared with the RNeasy plant mini kit (Qiagen) for sensitivity, reproducibility and costs. CBSV was readily detectable in total RNAs extracted using either method. The major difference between the two methods was in the cost of consumables, with the CTAB method 10x cheaper (0.53 pounds sterling = US$0.80 per sample) than the RNeasy method (5.91 pounds sterling = US$8.86 per sample). A two-step RT-PCR (1.34 pounds sterling = US$2.01 per sample), although less sensitive, was at least 3 times cheaper than a one-step RT-PCR (4.48 pounds sterling = US$6.72). The two RT-PCR tests consistently revealed the presence of CBSV in both symptomatic and asymptomatic leaves and indicated that asymptomatic leaves can be used reliably for virus diagnosis. Depending on the accuracy required, sampling 100-400 plants per field is an appropriate recommendation for CBSD diagnosis, giving a 99.9% probability of detecting a disease incidence of 6.7-1.7%, respectively. CBSV was detected at 10(-4)-fold dilutions in composite sampling, indicating that the most efficient way to index many samples for CBSV will be to screen pooled samples. The diagnostic protocols described here are reliable and currently the most cost-effective methods available for detecting CBSV.
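Assuming plants are sampled independently and an infected plant is always detected by the assay, the sampling recommendation quoted above follows from the standard detection-probability calculation, sketched here as a minimal illustration (not the authors' code):

```python
# Sketch: probability that a field sample of n plants contains at least one
# infected plant, assuming independent sampling and perfect assay sensitivity.
def detection_probability(n_plants: int, incidence: float) -> float:
    """P(at least one infected plant among n_plants sampled)."""
    return 1.0 - (1.0 - incidence) ** n_plants

# Figures quoted in the abstract: ~99.9% detection probability for
# 100 plants at 6.7% incidence, or 400 plants at 1.7% incidence.
print(detection_probability(100, 0.067))   # ≈ 0.999
print(detection_probability(400, 0.017))   # ≈ 0.999
```

Inverting the same formula gives the sample size needed for a target detection probability at a given incidence.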
Application of virtual reality procedures in radiation protection and dose estimation for workers
Blunck, C.; Becker, F. [Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen (Germany). Inst. for Radiation Research; Urban, M. [WAK GmbH Wiederaufarbeitungsanlage Karlsruhe, Rueckbau- und Entsorgungs-GmbH, Eggenstein-Leopoldshafen (Germany)
2010-05-15
When people need to work in an environment where radiation fields are present, the operation procedure has to be considered with respect to radiation protection. This is valid for routine as well as for special work situations where radiation protection precautions are necessary. In order to give advice about the safest way of operation and adequate shielding measures, it is necessary to analyse the radiation field and possible dose exposures at relevant positions in the working area. Since the field can be very inhomogeneous, extensive measurements could be needed for this purpose. In addition, it is possible that the field is not present before the time of work, so that a measurement could be troublesome or not possible at all. In this case, a simulation of the specific scenario can be an efficient way to analyse the radiation fields and determine possible exposures at different places. If an adequate phantom is used, it is even possible to determine personal doses like Hp(10) or Hp(0.07). However, in most work situations, exposure is not a static scenario: the radiation field varies if the source or its surrounding objects change place, and people or parts of their bodies are usually in motion. Hence, simulations of movements in inhomogeneous, time- and space-variant radiation fields are desirable for dose assessment. In such a 'virtual reality', working procedures could be trained or analysed without any exposure. We present an approach for simulating hand movements in inhomogeneous beta and photon radiation fields by means of an articulated hand phantom. As an example application, the hand phantom is used to simulate the handling of a Y-90 source. (orig.)
Hislop, Jane; Law, James; Rush, Robert; Grainger, Andrew; Bulley, Cathy; Reilly, John J; Mercer, Tom
2014-11-01
The purpose of this study was to determine the number of hours and days of accelerometry data necessary to provide a reliable estimate of habitual physical activity in pre-school children. The impact of a weekend day on reliability estimates was also determined, and standard measurement days were defined for weekend days and weekdays. Accelerometry data were collected from 112 children (60 males, 52 females, mean (SD) age 3.7 (0.7) yr) over 7 d. The Spearman-Brown prophecy formula (S-B prophecy formula) was used to predict the number of days and hours of data required to achieve an intraclass correlation coefficient (ICC) of 0.7. The impact of including a weekend day was evaluated by comparing the reliability coefficient (r) for any 4 d of data with that for 4 d including one weekend day. Our observations indicate that 3 d of accelerometry monitoring, regardless of whether it includes a weekend day, for at least 7 h d(-1), offers sufficient reliability to characterise the total physical activity and sedentary behaviour of pre-school children. These findings offer an approach that addresses the underlying tension in epidemiologic surveillance studies between the need to maintain acceptable measurement rigour and retention of a representatively meaningful sample size.
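The Spearman-Brown prophecy step described above can be sketched as follows; the single-day ICC used here is a made-up illustrative value, not the study's estimate:

```python
# Sketch: Spearman-Brown prophecy formula, as used to project how many
# monitoring days are needed to reach a target reliability (ICC of 0.7).
def spearman_brown(icc_single: float, k: float) -> float:
    """Reliability of the mean of k parallel measurements."""
    return (k * icc_single) / (1.0 + (k - 1.0) * icc_single)

def days_needed(icc_single: float, target: float) -> float:
    """Invert the formula: the k that achieves the target reliability."""
    return (target * (1.0 - icc_single)) / (icc_single * (1.0 - target))

icc_single_day = 0.45                       # hypothetical single-day ICC
print(days_needed(icc_single_day, 0.7))     # ≈ 2.85, so round up to 3 days
```

With a hypothetical single-day ICC of 0.45, three days of monitoring already pushes the projected reliability past the 0.7 target, matching the shape of the study's conclusion.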
Scinicariello, Franco; Portier, Christopher
2016-03-01
Non-cancer risk assessment traditionally assumes a threshold of effect, below which there is a negligible risk of an adverse effect. The Agency for Toxic Substances and Disease Registry derives health-based guidance values known as Minimal Risk Levels (MRLs) as estimates of the toxicity threshold for non-carcinogens. Although an MRL, like the EPA reference dose values (RfD and RfC), is defined as a level that corresponds to "negligible risk," these values represent daily exposure doses or concentrations, not risks. We present a new approach to calculate the risk of exposure to specific doses for chemical mixtures; the assumption in this approach is to assign de minimis risk at the MRL. The assigned risk enables the estimation of parameters in an exponential model, providing a complete dose-response curve for each compound from the chosen point of departure to zero. We estimated parameters for 27 chemicals. The value of k, which determines the shape of the dose-response curve, was moderately insensitive to the choice of the risk at the MRL. The approach presented here allows for the calculation of the risk from a single substance or the combined risk from multiple chemical exposures in a community. The methodology is applicable to point of departure data derived from quantal data, such as data from benchmark dose analyses, or from data that can be transformed into probabilities, such as lowest-observed-adverse-effect levels. The individual risks are used to calculate risk ratios that can facilitate comparison and cost-benefit analyses of environmental contamination control strategies.
A simplified sampling procedure for the estimation of methane emission in rice fields.
Khokhar, Nadar Hussain; Park, Jae-Woo
2017-08-24
Manual closed chamber methods are widely used for CH4 measurement from rice paddies. Despite diurnal and seasonal variations in CH4 emissions, fixed sampling times, usually during the day, are used. Here, we monitored CH4 emission from rice paddies for one complete rice-growing season. Daytime CH4 emission increased from 0800 h, and maximal emission was observed at 1200 h. The daily averaged CH4 flux increased during plant growth or fertilizer application and decreased upon drainage. The CH4 measurement results were linearly interpolated and matched with the daily averaged CH4 emission calculated from the measured results. The time of day when the daily averaged emission and the interpolated CH4 curve coincided was largely invariant within each of five distinctive periods. One hourly sample during each of these five periods was used to estimate the emission during each period, and we found that five hourly samples during the season accurately reflected the CH4 emission calculated from all 136 hourly samples. This new sampling scheme is simple and more efficient than current sampling practices. Previously reported sampling schemes yielded estimates 9 to 32% higher than the measured CH4 emission, while our suggested scheme yielded an estimate that differed by only 5% from that based on all 136 hourly samples. The sampling scheme proposed in this study can be used in rice paddy fields in Korea and extended worldwide to countries that use similar farming practices. This sampling scheme will help in producing a more accurate global methane budget for rice paddy fields.
Kvidera, S K; Horst, E A; Abuajamieh, M; Mayorga, E J; Sanz Fernandez, M V; Baumgard, L H
2016-11-01
Infection and inflammation impede efficient animal productivity. The activated immune system ostensibly requires large amounts of energy and nutrients otherwise destined for synthesis of agriculturally relevant products. Accurately determining the immune system's in vivo energy needs is difficult, but a better understanding may facilitate developing nutritional strategies to maximize productivity. The study objective was to estimate immune system glucose requirements following an i.v. lipopolysaccharide (LPS) challenge. Holstein steers (148 ± 9 kg; n = 15) were jugular catheterized bilaterally and assigned to 1 of 3 i.v.
Kjeldsen, Thomas Rodding; Rosbjerg, Dan
2002-01-01
A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum floods from ten gauging stations on the South Island of New Zealand have been used. Different methods of predicting the 100-year event...... considered applying either a log-linear relationship between the at-site mean annual flood and catchment characteristics or a direct log-linear relationship between 100-year events and catchment characteristics. Comparison of the results shows that the existence of at-site measurements significantly diminishes...
Josheski, Dushko
2015-01-01
This file consists of estimations in the time-series software RATS (Regression Analysis of Time Series). It provides program files and procedures for the estimations in the book by Brockwell and Davis (2nd ed.). The order of estimations follows the book chapters.
Estimation of reliability based on zero-failure data
韩明
2002-01-01
When the prior density function of R has the form π(R|α) ∝ R^α with 0 < α < 2, a hierarchical Bayes estimator of the product reliability is given for the binomial distribution with zero-failure data.
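For orientation, the non-hierarchical version of this setting has a closed form: with prior π(R) ∝ R^α on (0, 1) and a binomial likelihood R^n for n trials with zero failures, the posterior is Beta(α + n + 1, 1). The sketch below computes its posterior mean; the paper's hierarchical estimator additionally integrates over a prior on α, which this illustration does not attempt:

```python
# Sketch: posterior mean reliability for zero-failure binomial data with
# prior density proportional to R^alpha on (0, 1). The posterior is
# Beta(alpha + n + 1, 1), whose mean is (alpha + n + 1) / (alpha + n + 2).
def posterior_mean_reliability(n: int, alpha: float) -> float:
    return (alpha + n + 1.0) / (alpha + n + 2.0)

# 20 trials with zero failures and alpha = 1 (illustrative value):
print(posterior_mean_reliability(20, 1.0))   # 22/23 ≈ 0.9565
```

More zero-failure trials or a larger α both push the estimate toward 1, as expected.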
Smith, Dianna M; Pearce, Jamie R; Harland, Kirk
2011-03-01
Models created to estimate neighbourhood level health outcomes and behaviours can be difficult to validate as prevalence is often unknown at the local level. This paper tests the reliability of a spatial microsimulation model, using a deterministic reweighting method, to predict smoking prevalence in small areas across New Zealand. The difference in the prevalence of smoking between those estimated by the model and those calculated from census data is less than 20% in 1745 out of 1760 areas. The accuracy of these results provides users with greater confidence to utilize similar approaches in countries where local-level smoking prevalence is unknown.
Alghali, R.; Kamaruddin, A. F.; Mokhtar, N.
2016-12-01
Introduction: The application of forensic odontology using teeth and bones has become the most commonly used method to determine the age of unknown individuals. Objective: The aim of this study was to determine the reliability of the Malay formulae of the Demirjian and Cameriere methods in determining the dental age that most closely matches the chronological age of Malay children in the Kepala Batas region. Methodology: This is a retrospective cross-sectional study. 126 good-quality dental panoramic radiographs (DPT) of healthy Malay children aged 8-16 years (49 boys and 77 girls) were selected and measured. All radiographs were taken at the Dental Specialist Clinic, Advanced Medical and Dental Institute, Universiti Sains Malaysia. The measurements were carried out by a calibrated examiner using the new Malay formulae of both the Demirjian and Cameriere methods. Results: The intraclass correlation coefficient (ICC) between the chronological age and the Demirjian and Cameriere estimates was calculated. The Demirjian method showed a better ICC (91.4%) than the Cameriere method (89.2%); both indicate a high association, with good reliability. Comparing Demirjian and Cameriere, however, Demirjian has the better reliability. Conclusion: The results suggest that the modified Demirjian method is more reliable than the modified Cameriere method for the population in the Kepala Batas region.
Forde, David R.; Baron, Stephen W.; Scher, Christine D.; Stein, Murray B.
2012-01-01
This study examines the psychometric properties of the Childhood Trauma Questionnaire short form (CTQ-SF) with street youth who have run away or been expelled from their homes (N = 397). Internal reliability coefficients for the five clinical scales ranged from 0.65 to 0.95. Confirmatory Factor Analysis (CFA) was used to test the five-factor…
Kostandyan, Erik; Ma, Ke
2012-01-01
This paper investigates the lifetime of high power IGBTs (insulated gate bipolar transistors) used in large wind turbine applications. Since the IGBTs are critical components in a wind turbine power converter, it is of great importance to assess their reliability in the design phase of the turbin...
Zhongwei Deng
2016-06-01
In the field of state of charge (SOC) estimation, the Kalman filter has been widely used for many years, although its performance strongly depends on the accuracy of the battery model as well as the noise covariance. The Kalman gain determines the confidence coefficient of the battery model by adjusting the weight of the open circuit voltage (OCV) correction, and has a strong correlation with the measurement noise covariance (R). In this paper, an online identification method is applied to acquire the real model parameters under different operating conditions. A criterion based on the OCV error is proposed to evaluate the reliability of the online parameters. In addition, the equivalent circuit model produces an intrinsic model error that depends on the load current, and it can be observed that a high battery current or a large current change induces a large model error. Based on this prior knowledge, a fuzzy model is established to compensate the model error by updating R. Combining the positive strategy (online identification) and the negative strategy (fuzzy model), a more reliable and robust SOC estimation algorithm is proposed. The experimental results verify the proposed reliability criterion and SOC estimation method under various conditions for LiFePO4 batteries.
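The role of R described above can be illustrated with a one-state Kalman filter that combines a coulomb-counting prediction with an OCV-based correction. The linear OCV(SOC) curve, the capacity, and all covariance values below are illustrative assumptions, not the paper's identified battery model:

```python
# Minimal sketch of a Kalman-filter SOC estimator: coulomb-counting predict
# step plus an OCV measurement update whose weight is set by the measurement
# noise covariance R. A simple linear OCV-SOC curve stands in for a real one.
def ocv(soc: float) -> float:
    return 3.2 + 0.8 * soc              # assumed linear OCV(SOC) relation [V]

def d_ocv(soc: float) -> float:
    return 0.8                          # slope of the assumed OCV curve

def kf_soc_step(soc, P, current_a, dt_s, v_meas,
                capacity_ah=2.0, Q=1e-7, R=1e-3):
    # Predict: coulomb counting (discharge current positive)
    soc_pred = soc - current_a * dt_s / (capacity_ah * 3600.0)
    P_pred = P + Q
    # Update: correct toward the OCV implied by the measured voltage
    H = d_ocv(soc_pred)
    K = P_pred * H / (H * P_pred * H + R)   # gain: larger R -> smaller K
    soc_new = soc_pred + K * (v_meas - ocv(soc_pred))
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new
```

Raising R shrinks the Kalman gain so the estimate leans on coulomb counting; lowering R makes the OCV correction dominate. Updating R is exactly the lever the paper's fuzzy model adjusts.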
Smati, A.; Younsi, K.; Zeraibi, N.; Zemmour, N. [Universite de Boumerdes, Faculte des Hydrocarbures, Dept. Transport et Equipement, Boumerdes (Algeria)
2003-07-01
LNG plants are characterized by their relatively small number in the world, the diversity of processes involved, and very high investment and operating costs. The fuel consumption of this type of facility (about 15%) may double in some cases when the frequency of unplanned and voluntary shutdowns is high. The improvement of the reliability of the LNG chain as a whole will therefore lead to a substantial decrease in energy costs. For repairable systems, availability is most often used as the reliability indicator. From a reliability point of view, the LNG chain must be treated as a single complex system. However, modeling of complex systems, for reliability or otherwise, is always difficult owing to the large dimensions of the space of phases. In this paper, a systemic approach is used to reduce the space of phases. A representation of subsystems by reliability diagrams permits easier calculation of the probabilities associated with each phase, and a bottom-up technique allows the reconstitution of the global reliability model of the chain. In an environment characterized by weak statistical data, a Bayesian estimation approach is used to define the failure and repair rates of the different equipment composing the LNG chain. Some results concerning the Algerian LNG chains Hassi R'mel-Skikda are furnished. (authors)
Glaze, S.; Schneiders, N.; Bushong, S.C.
1982-10-01
A computer program for calculating patient entrance exposure and fetal dose for 11 common radiographic examinations was developed. The output intensity measured at 70 kVp and a 30-inch (76-cm) source-to-skin distance was entered into the program. The change in output intensity with changing kVp was examined for 17 single-phase and 12 three-phase x-ray units. The relationships obtained from a least squares regression analysis of the data, along with the technique factors for each examination, were used to calculate patient exposure. Fetal dose was estimated using published fetal dose in mrad (10^-5 Gy) per 1,000 mR (258 μC/kg) entrance exposure values. The computations are fully automated and individualized to each radiographic unit. The information provides a ready reference in large institutions and is particularly useful at smaller facilities that do not have available physicists who can make the calculations immediately.
Hwang, H.-L.; Rollow, J.
2000-05-01
The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.
Estimation of land surface evapotranspiration with A satellite remote sensing procedure
Irmak, A.; Ratcliffe, I.; Ranade, P.; Hubbard, K.G.; Singh, R.K.; Kamble, B.; Kjaersgaard, J.
2011-01-01
There are various methods available for estimating the magnitude and trends of evapotranspiration. The Bowen ratio energy balance system and eddy correlation techniques offer powerful alternatives for measuring land surface evapotranspiration. In spite of the elegance, high accuracy, and theoretical attractions of these techniques, their practical use over large areas can be limited by the number of sites needed and the related expense. Application of evapotranspiration mapping from satellite measurements can overcome these limitations. The objective of this study was to utilize the METRIC™ (Mapping Evapotranspiration at High Resolution using Internalized Calibration) model in Great Plains environmental settings to understand water use in managed ecosystems on a regional scale. We investigated the spatiotemporal distribution of the fraction of reference evapotranspiration (ETrF) using eight Landsat 5 images during the 2005 and 2006 growing seasons for path 29, row 32. The ETrF maps generated by METRIC™ allowed us to follow the magnitude and trend in ETrF for major land-use classes during the growing season. The ETrF was lower early in the growing season for agricultural crops and gradually increased as the normalized difference vegetation index of crops increased, thus presenting more surface area over which water could transpire toward the midseason. Comparison of predictions with Bowen ratio energy balance system measurements at Clay Center, NE, showed that METRIC™ performed well at the field scale for predicting evapotranspiration from a cornfield. If calibrated properly, the model could be a viable tool to estimate water use in managed ecosystems in subhumid climates at a large scale.
Estabrook, Ryne; Neale, Michael
2013-01-01
Factor score estimation is a controversial topic in psychometrics, and the estimation of factor scores from exploratory factor models has historically received a great deal of attention. However, both confirmatory factor models and the existence of missing data have generally been ignored in this debate. This article presents a simulation study…
Rae, Gordon
2008-11-01
Several authors have suggested that prior to conducting a confirmatory factor analysis it may be useful to group items into a smaller number of item 'parcels' or 'testlets'. The present paper mathematically shows that coefficient alpha based on these parcel scores will only exceed alpha based on the entire set of items if W, the ratio of the average covariance of items between parcels to the average covariance of items within parcels, is greater than unity. If W is less than unity, however, and errors of measurement are uncorrelated, then stratified alpha will be a better lower bound to the reliability of a measure than the other two coefficients. Stratified alpha is also equal to the true reliability of a test when items within parcels are essentially tau-equivalent, if one assumes that errors of measurement are not correlated.
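The condition on W can be checked numerically. The sketch below builds a four-item covariance matrix with two parcels and compares item-based and parcel-based alpha on both sides of W = 1; all covariance values are invented for illustration:

```python
# Illustrative check: parcel-based alpha exceeds item-based alpha only when
# W = (avg between-parcel item covariance) / (avg within-parcel item
# covariance) is greater than 1.
def coefficient_alpha(cov):
    k = len(cov)
    total = sum(sum(row) for row in cov)
    diag = sum(cov[i][i] for i in range(k))
    return (k / (k - 1.0)) * (1.0 - diag / total)

def four_item_cov(c_within, c_between):
    """4 unit-variance items; parcels are {0, 1} and {2, 3}."""
    same = lambda i, j: (i < 2) == (j < 2)
    return [[1.0 if i == j else (c_within if same(i, j) else c_between)
             for j in range(4)] for i in range(4)]

def parcel_cov(c_within, c_between):
    """Covariance matrix of the two parcel sum-scores."""
    v = 2.0 + 2.0 * c_within        # var(item_a + item_b)
    c = 4.0 * c_between             # cov between the two parcel sums
    return [[v, c], [c, v]]

for c_w, c_b in [(0.2, 0.4), (0.4, 0.2)]:   # W = 2, then W = 0.5
    a_items = coefficient_alpha(four_item_cov(c_w, c_b))
    a_parcels = coefficient_alpha(parcel_cov(c_w, c_b))
    print(c_b / c_w, round(a_items, 3), round(a_parcels, 3))
```

With W = 2 the parcel alpha comes out higher than the item alpha, and with W = 0.5 the ordering reverses, matching the paper's condition.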
Skafte, Anders; Aenlle, Manuel L.; Brincker, Rune
2016-02-01
Measurement systems are being installed in more and more civil structures with the purpose of monitoring the general dynamic behavior of the structure. The instrumentation is typically done with accelerometers, where experimental frequencies and mode shapes can be identified using modal analysis and used in health monitoring algorithms. But the use of accelerometers is not suitable for all structures. Structures like wind turbine blades and wings on airplanes can be exposed to lightning, which can cause the measurement systems to fail. Structures like these are often equipped with fiber sensors measuring the in-plane deformation. This paper proposes a method in which the displacement mode shapes and responses can be predicted using only strain measurements. The method relies on the newly discovered principle of local correspondence, which states that each experimental mode can be expressed as a unique subset of finite element modes. In this paper the technique is further developed to predict the mode shapes in different states of the structure. Once an estimate of the modes is found, responses can be predicted using the superposition of the modal coordinates weighted by the mode shapes. The method is validated with experimental tests on a scaled model of a two-span bridge installed with strain gauges. Random load was applied to simulate a civil structure under operating condition, and strain mode shapes were identified using operational modal analysis.
Nemo, Alessandro; Silvestri, Stefano
2014-11-01
A pleural mesothelioma arose in an employee of a wine farm whose work history shows an unusual occupational exposure to asbestos. The information, gathered directly from the case and from a work colleague, clarifies some aspects of the use of asbestos in the process of winemaking that have not previously been reported in such detail. The man had worked as a winemaker from 1960 to 1988 on a farm which in those years produced around 2500 hectoliters of wine per year, mostly white. The wine was filtered to remove impurities; the filter was created by dispersing asbestos fibers and then diatomite in the wine while it circulated several times, clogging a prefilter made of a dense stainless steel net. Chrysotile was the sole asbestos mineralogical variety used in these filters, and exposure could occur during the mixing of dry fibers into the wine and during filter replacement. Daily and annual time-weighted average levels of exposure and the cumulative dose were estimated in the absence of airborne asbestos fiber monitoring in that workplace. Since 1993, the Italian National Mesothelioma Register, an epidemiological surveillance system, has recorded eight cases with at least one work period spent as a winemaker. Four of them never used asbestos filters and had exposures during other work periods; the other four used asbestos filters but also had other exposures in other industrial sectors. On the information hitherto available, this is the first mesothelioma case with exclusive exposure in the job of winemaking.
Constraints on LISA Pathfinder's self-gravity: design requirements, estimates and testing procedures
Ferroni, Valerio
2016-01-01
The LISA Pathfinder satellite was launched on 3 December 2015 toward the Sun-Earth first Lagrangian point (L1), where the LISA Technology Package (LTP), which is the main science payload, will be tested. With its cutting-edge technology, the LTP will provide the ability to achieve unprecedented geodesic-motion residual acceleration measurements down to the order of $3 \times 10^{-14}\,\mathrm{m/s^2/{Hz^{1/2}}}$ within the $1-30\,\mathrm{mHz}$ frequency band. The spacecraft itself is responsible for the local gravitational field that interacts with the two test masses. Potentially, such a force interaction might prevent achieving the targeted free-fall level, originating a significant source of noise. We balanced this gravitational force with sub-$\mathrm{nm/s^2}$ accuracy, guided by a protocol based on measurements of the position and the mass of all parts that constitute the satellite, via finite element calculation tool estimates. In the following, we will introduce requirements,...
Constraints on LISA Pathfinder’s self-gravity: design requirements, estimates and testing procedures
Armano, M.; Audley, H.; Auger, G.; Baird, J.; Binetruy, P.; Born, M.; Bortoluzzi, D.; Brandt, N.; Bursi, A.; Caleno, M.; Cavalleri, A.; Cesarini, A.; Cruise, M.; Danzmann, K.; de Deus Silva, M.; Desiderio, D.; Piersanti, E.; Diepholz, I.; Dolesi, R.; Dunbar, N.; Ferraioli, L.; Ferroni, V.; Fitzsimons, E.; Flatscher, R.; Freschi, M.; Gallegos, J.; García Marirrodriga, C.; Gerndt, R.; Gesa, L.; Gibert, F.; Giardini, D.; Giusteri, R.; Grimani, C.; Grzymisch, J.; Harrison, I.; Heinzel, G.; Hewitson, M.; Hollington, D.; Hueller, M.; Huesler, J.; Inchauspé, H.; Jennrich, O.; Jetzer, P.; Johlander, B.; Karnesis, N.; Kaune, B.; Korsakova, N.; Killow, C.; Lloro, I.; Liu, L.; López-Zaragoza, J. P.; Maarschalkerweerd, R.; Madden, S.; Mance, D.; Martín, V.; Martin-Polo, L.; Martino, J.; Martin-Porqueras, F.; Mateos, I.; McNamara, P. W.; Mendes, J.; Mendes, L.; Moroni, A.; Nofrarias, M.; Paczkowski, S.; Perreur-Lloyd, M.; Petiteau, A.; Pivato, P.; Plagnol, E.; Prat, P.; Ragnit, U.; Ramos-Castro, J.; Reiche, J.; Romera Perez, J. A.; Robertson, D.; Rozemeijer, H.; Rivas, F.; Russano, G.; Sarra, P.; Schleicher, A.; Slutsky, J.; Sopuerta, C. F.; Sumner, T.; Texier, D.; Thorpe, J. I.; Tomlinson, R.; Trenkel, C.; Vetrugno, D.; Vitale, S.; Wanner, G.; Ward, H.; Warren, C.; Wass, P. J.; Wealthy, D.; Weber, W. J.; Wittchen, A.; Zanoni, C.; Ziegler, T.; Zweifel, P.
2016-12-01
The LISA Pathfinder satellite was launched on 3 December 2015 toward the Sun-Earth first Lagrangian point (L1), where the LISA Technology Package (LTP), which is the main science payload, will be tested. The LTP achieves measurements of the differential acceleration of free-falling test masses (TMs) with sensitivity below 3 × 10^-14 m s^-2 Hz^-1/2 within the 1-30 mHz frequency band in one dimension. The spacecraft itself is responsible for the dominant differential gravitational field acting on the two TMs. Such a force interaction could contribute a significant amount of noise and thus threaten the achievement of the targeted free-fall level. We prevented this by balancing the gravitational forces to the sub-nm s^-2 level, guided by a protocol based on measurements of the position and the mass of all parts that constitute the satellite, via finite element calculation tool estimates. In this paper, we will introduce the gravitational balance requirements and design, and then discuss our predictions for the balance that will be achieved in flight.
SPSS and SAS procedures for estimating indirect effects in simple mediation models.
Preacher, Kristopher J; Hayes, Andrew F
2004-11-01
Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
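The indirect effect and its bootstrap confidence interval can be sketched from scratch as below. This illustrates the general approach the paper advocates (the product of paths a and b with a percentile bootstrap); it is not the authors' SPSS/SAS macro code, and the simulated data and path coefficients are invented for the example:

```python
import random

# Sketch: percentile-bootstrap test of the indirect effect a*b in a simple
# mediation model X -> M -> Y.
def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def partial_slope(x1, x2, y):
    """Slope of x1 in the OLS regression of y on x1 and x2 (with intercept)."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 * s12)

def indirect_effect(x, m, y):
    a = slope(x, m)                  # path X -> M
    b = partial_slope(m, x, y)       # path M -> Y, controlling for X
    return a * b

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]     # true a = 0.5
y = [0.5 * mi + random.gauss(0, 1) for mi in m]     # true b = 0.5

boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect_effect([x[i] for i in idx],
                                [m[i] for i in idx],
                                [y[i] for i in idx]))
boot.sort()
lo, hi = boot[24], boot[974]         # 95% percentile interval
print(indirect_effect(x, m, y), (lo, hi))
```

A confidence interval that excludes zero is the evidence for mediation; the normal-theory (Sobel) alternative replaces the bootstrap with an analytic standard error for a*b.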
Reliability of Circumplex Axes
Micha Strack
2013-06-01
We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs in 17 German-speaking samples (29 subsamples, grouped by self-report, other-report, and metaperception assessments): the Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG). The general factor accounted for 1% to 48% of the item variance, the axes component for 2% to 30%, and scale specificity for 1% to 28%. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificity but otherwise worked effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.
Weibull Parameters Estimation Based on Physics of Failure Model
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm, and Miner's rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Squares estimation techniques are used to estimate fatigue life distribution parameters....
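Maximum-likelihood fitting of the Weibull life distribution mentioned above can be sketched as follows (a generic MLE for complete, uncensored data; the synthetic data and variable names are illustrative, not the authors' code):

```python
import math, random

def weibull_mle(data, lo=0.01, hi=20.0, iters=60):
    """ML estimates of Weibull shape k and scale lam for complete failure
    data: bisection on the profile-likelihood score for k, then the
    closed-form scale estimate."""
    logs = [math.log(t) for t in data]
    mean_log = sum(logs) / len(logs)
    def score(k):
        tk = [t ** k for t in data]
        s1 = sum(tk)
        s2 = sum(t * g for t, g in zip(tk, logs))
        return 1.0 / k + mean_log - s2 / s1   # zero at the MLE of k
    for _ in range(iters):                     # score(k) is decreasing in k
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = (sum(t ** k for t in data) / len(data)) ** (1.0 / k)
    return k, lam

# synthetic fatigue lives: shape 2, scale 1000 cycles
rng = random.Random(0)
lives = [rng.weibullvariate(1000.0, 2.0) for _ in range(1000)]
k_hat, lam_hat = weibull_mle(lives)
print(k_hat, lam_hat)  # estimates should be close to 2 and 1000
```

With censored data, as in the paper, the likelihood gains survivor terms for unfailed units; this sketch covers only the complete-data case.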
Effective dose estimation for oncological and neurological PET/CT procedures.
Martí-Climent, Josep M; Prieto, Elena; Morán, Verónica; Sancho, Lidia; Rodríguez-Fraile, Macarena; Arbizu, Javier; García-Velloso, María J; Richter, José A
2017-12-01
The aim of this study was to retrospectively evaluate the patient effective dose (ED) for different PET/CT procedures performed with a variety of PET radiopharmaceutical compounds. PET/CT studies of 210 patients were reviewed, including Torso (n = 123), Whole body (WB) (n = 36), Head and Neck Tumor (HNT) (n = 10), and Brain (n = 41) protocols with (18)FDG (n = 170), (11)C-CHOL (n = 10), (18)FDOPA (n = 10), (11)C-MET (n = 10), and (18)F-florbetapir (n = 10). ED was calculated using conversion factors applied to the radiotracer activity and to the CT dose-length product. Total ED (mean ± SD) for Torso-(11)C-CHOL, Torso-(18)FDG, WB-(18)FDG, and HNT-(18)FDG protocols was 13.5 ± 2.2, 16.5 ± 4.5, 20.0 ± 5.6, and 15.4 ± 2.8 mSv, respectively, where CT represented 77, 62, 69, and 63% of the protocol ED, respectively. For (18)FDG, (18)FDOPA, (11)C-MET, and (18)F-florbetapir brain PET/CT studies, ED values (mean ± SD) were 6.4 ± 0.6, 4.6 ± 0.4, 5.2 ± 0.5, and 9.1 ± 0.4 mSv, respectively, and the corresponding CT contributions were 11, 14, 23, and 26%, respectively. In (18)FDG PET/CT, variations in scan length and arm position produced significant differences in CT ED. Total ED for PET/CT protocols with different radiopharmaceuticals ranged between 4.6 and 20.0 mSv. The major contributor to total ED for body protocols is CT, whereas for brain studies it is the PET radiopharmaceutical.
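The dose bookkeeping described, conversion factors applied to injected activity and to the CT dose-length product, amounts to simple arithmetic; the factor values below are hypothetical placeholders, not those used in the study:

```python
def total_effective_dose(activity_MBq, pet_factor_mSv_per_MBq,
                         dlp_mGy_cm, ct_factor_mSv_per_mGy_cm):
    """Total effective dose = PET part (activity x dose coefficient)
    plus CT part (dose-length product x region-specific k factor).
    All factor values passed in are assumptions for illustration."""
    pet = activity_MBq * pet_factor_mSv_per_MBq
    ct = dlp_mGy_cm * ct_factor_mSv_per_mGy_cm
    return pet + ct, pet, ct

# hypothetical FDG torso study: 370 MBq injected, DLP 600 mGy*cm
total, pet, ct = total_effective_dose(370, 0.019, 600, 0.015)
print(round(total, 2))  # 7.03 (PET) + 9.0 (CT) = 16.03 mSv
```

The CT share of the total (here 9.0/16.03, about 56%) is what the abstract reports as the dominant contribution for body protocols.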
Estimating cross-price elasticity of e-cigarettes using a simulated demand procedure.
Grace, Randolph C; Kivell, Bronwyn M; Laugesen, Murray
2015-05-01
Our goal was to measure the cross-price elasticity of electronic cigarettes (e-cigarettes) and simulated demand for tobacco cigarettes both in the presence and absence of e-cigarette availability. A sample of New Zealand smokers (N = 210) completed a Cigarette Purchase Task to indicate their demand for tobacco at a range of prices. They sampled an e-cigarette and rated it and their own-brand tobacco for favorability, and indicated how many e-cigarettes and regular cigarettes they would purchase at 0.5×, 1×, and 2× the current market price for regular cigarettes, assuming that the price of e-cigarettes remained constant. Cross-price elasticity for e-cigarettes was estimated as 0.16, and was significantly positive, indicating that e-cigarettes were partially substitutable for regular cigarettes. Simulated demand for regular cigarettes at current market prices decreased by 42.8% when e-cigarettes were available, and e-cigarettes were rated 81% as favorably as own-brand tobacco. However, when cigarettes cost 2× the current market price, significantly more smokers said they would quit (50.2%) if e-cigarettes were not available than if they were available (30.0%). Results show that e-cigarettes are potentially substitutable for regular cigarettes and that their availability will reduce tobacco consumption. However, e-cigarettes may discourage smokers from quitting entirely as the cigarette price increases, so policy makers should consider maintaining a constant relative price differential between e-cigarettes and tobacco cigarettes.
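A cross-price elasticity of the kind reported can be estimated as the slope of log quantity of the alternative good against log price of the target good; the sketch below uses invented purchase numbers, not the study's data:

```python
import math

def cross_price_elasticity(qty_alt, price_target):
    """Least-squares slope of log(quantity of alternative good) on
    log(price of target good). Positive values indicate substitution.
    Illustrative sketch with hypothetical numbers."""
    lp = [math.log(p) for p in price_target]
    lq = [math.log(q) for q in qty_alt]
    n = len(lp)
    mp, mq = sum(lp) / n, sum(lq) / n
    num = sum((p - mp) * (q - mq) for p, q in zip(lp, lq))
    den = sum((p - mp) ** 2 for p in lp)
    return num / den

# e-cigarette purchases rise slightly as the cigarette price doubles
e = cross_price_elasticity([10.0, 11.0, 12.2], [0.5, 1.0, 2.0])
print(round(e, 3))
```

A small positive slope (here about 0.14, on the order of the study's 0.16) means the goods are partial substitutes.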
Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias
2013-02-01
Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km(2). Triangulation conducted from 12 listening posts during the same period revealed a similar density estimate of 5.0 clusters/km(2). Coefficients of variation of cluster density estimates were slightly higher for triangulation (0.24) than for line transects (0.17), resulting in lower precision for detecting changes in cluster densities with triangulation, although the triangulation method also may be appropriate.
Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard
2014-01-01
Wave energy power plants are expected to become one of the major future contributions to sustainable electricity production. Optimal design of wave energy power plants is associated with modeling of physical, statistical, measurement, and model uncertainties. This paper presents stochastic models....... The stochastic model for extreme value estimation covers annual extreme value distributions and the statistical uncertainty due to the limited amount of available data. Furthermore, updating based on newly available data is explained using a Bayesian approach. The statistical uncertainties are estimated based...... on the Maximum Likelihood method, and the extreme value estimation uses the peaks-over-threshold (POT) method. Two generic examples of reliability assessments for failure due to fatigue and extreme...
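The peaks-over-threshold (POT) step referred to above reduces to extracting threshold excesses and fitting a tail model; the sketch below uses an exponential excess model (the generalized Pareto with zero shape) and invented wave heights:

```python
def pot_excesses(series, threshold):
    """Peaks-over-threshold: keep observations above a fixed threshold
    and return their excesses, the raw input for an extreme value fit."""
    return [x - threshold for x in series if x > threshold]

def exponential_mle_rate(excesses):
    """ML rate of an exponential excess model (GPD with shape 0),
    a simplifying assumption for this sketch."""
    return len(excesses) / sum(excesses)

waves = [1.2, 3.5, 0.8, 4.1, 2.9, 5.0, 3.8, 1.1]  # hypothetical heights (m)
ex = pot_excesses(waves, 3.0)
print([round(v, 3) for v in ex])  # [0.5, 1.1, 2.0, 0.8]
rate = exponential_mle_rate(ex)
print(rate)
```

In a full analysis the excesses would be fit with a generalized Pareto distribution (nonzero shape) and the threshold choice itself checked for stability.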
Andreas Berner; Tariq Saleem Alharbi; Eric Carlström; Amir Khorram-Manesh
2015-01-01
Objective: To develop a validated and generalized collaborative tool to be utilized by high-reliability organizations in order to conduct common resource assessment before major events and mass gatherings. Methods: The Swedish resource and risk estimation guide was used as the foundation for the development of the generalized collaborative tool by three different expert groups, and then analyzed. Inter-rater reliability was analyzed through simulated cases using weighted and unweighted κ statistics. Results: The results revealed a mean unweighted κ value of 0.44 across the three cases and a mean accuracy of 61% for the tool. Conclusions: This study showed better collaboration ability and more accurate resource assessment, with acceptable reliability and validity, to be used as a foundation for resource assessment before major events/mass gatherings in a simulated environment. However, the results also indicate the challenges of creating measurable values from simulated cases. A study of real events could provide higher reliability but requires an already developed tool.
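The unweighted κ statistic reported here is Cohen's kappa for two raters; a minimal computation with invented ratings:

```python
def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters over categorical labels:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the raters' marginal frequencies."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# hypothetical resource-level ratings from two assessors
r1 = ["low", "low", "high", "mid", "high", "low"]
r2 = ["low", "mid", "high", "mid", "high", "low"]
print(round(cohens_kappa(r1, r2), 3))  # 0.75
```

Values around 0.44, as in the study, fall in the "moderate agreement" band of the common Landis-Koch interpretation.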
Aurich, Nathassia K; Alves Filho, José O; Marques da Silva, Ana M; Franco, Alexandre R
2015-01-01
With resting-state functional MRI (rs-fMRI) there are a variety of post-processing methods that can be used to quantify the human brain connectome. However, there is also a choice of which preprocessing steps will be used prior to calculating the functional connectivity of the brain. In this manuscript, we have tested seven different preprocessing schemes and assessed the reliability between and reproducibility within the various strategies by means of graph theoretical measures. Different preprocessing schemes were tested on a publicly available dataset, which includes rs-fMRI data of healthy controls. The brain was parcellated into 190 nodes and four graph theoretical (GT) measures were calculated: global efficiency (GEFF), characteristic path length (CPL), average clustering coefficient (ACC), and average local efficiency (ALE). Our findings indicate that results can significantly differ based on which preprocessing steps are selected. We also found dependence between motion and GT measurements in most preprocessing strategies. We conclude that using censoring of outliers within the functional time series as a preprocessing step increases the reliability of GT measurements and reduces their dependency on head motion.
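Two of the graph-theoretical measures named above (CPL and ACC) can be computed from an adjacency list with nothing but BFS and neighbor counting; the toy graph is illustrative, not a real connectome:

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS hop distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """CPL: mean shortest-path length over all ordered node pairs
    (assumes a connected graph)."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        d = shortest_paths(adj, s)
        for t in nodes:
            if t != s:
                total += d[t]
                pairs += 1
    return total / pairs

def avg_clustering(adj):
    """ACC: average local clustering coefficient; nodes with fewer
    than two neighbors contribute zero."""
    acc = 0.0
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i, a in enumerate(nbrs)
                    for b in nbrs[i + 1:] if b in adj[a])
        acc += 2.0 * links / (k * (k - 1))
    return acc / len(adj)

# toy 4-node "connectome": a triangle plus one pendant node
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(characteristic_path_length(adj))
print(avg_clustering(adj))
```

GEFF and ALE follow the same pattern, averaging inverse distances globally or within each node's neighborhood subgraph.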
Legros Jean C.
2016-01-01
We developed an experimental setup equipped with a tube furnace, for continuous heating of heterogeneous drops at a constant temperature, and a high-speed camera, to study the characteristics of phase transitions at the interfaces of these drops. An experimental procedure was also proposed to estimate the time characteristics of the processes that occur when heterogeneous drops are heated in a high-temperature environment. As an example, at a heating temperature of 1373 K, the lifetime of a 15 μl water drop with a 1 mm solid inclusion made of natural graphite equals almost 1 s. The experimental data also revealed the minimum temperature at which intensive vaporization of 5 μl, 10 μl, and 15 μl drops with inclusions of size 2×2×2 mm proceeds with explosive breakup. This temperature equals 803 ± 10 K, depending on the initial water volume in the heterogeneous drops.
Fotilas, Panayiotis; Batzias, Athanasios F.
2009-08-01
A methodological framework, designed/developed in the form of an algorithmic procedure (including 20 activity stages and 10 decision nodes), has been applied to the multicriteria ranking of models. The criteria used are: fitting to experimental data, agreement with theoretical aspects, model simplicity, experimental falsifiability, progressiveness, and relation to other ideal structures (ISs), as proved by a common path/rationale of deduction. An implementation is presented referring to the selection of the pore ideal structure of anodized aluminium among the alternatives: cylindrical (A1), truncated-cone-like (A2), trumpet-like (A3), vesica-like (A4), multiple-base (A5), and tilted-cylinder-like (A6). The alternative A2 (implying a corresponding specific surface estimation of the anodic film) was ranked first, and the solution was proved to be robust.
Reliability Estimation for Rolling Bearings Based on Virtual Information
楼洪梁; 陈磊; 李兴林; 但召江; 陈炳顺
2015-01-01
In order to improve the credibility and stability of reliability estimates at the censored time points of bearing tests, a reliability calculation method is proposed for rolling bearing censored tests in which zero-failure data appear: the virtual failure information of the zero-failure sample at the previous censored time point is introduced into the reliability estimation at each censored time point. Example analysis shows that, under different hyperparameter values, the estimates of characteristic life and shape parameter obtained with this method fluctuate the least, so the method has better stability than the alternatives.
Alzimamil, K.; Babikir, E.; Alkhorayef, M. [King Saud University, College of Applied Medical Sciences, Radiological Sciences Department, P. O. Box 10219, Riyadh 11433, (Saudi Arabia); Sulieman, A. [Salman bin Abdulaziz University, College of Applied Medical Sciences, Radiology and Medical Imaging Department, P. O. Box 422, Alkharj (Saudi Arabia); Alsafi, K. [King Abdulaziz University, Faculty of Medicine, Radiology Department, Jeddah 22254 (Saudi Arabia); Omer, H., E-mail: kalzimami@ksu.edu.sa [Dammam University, Faculty of Medicine, Dammam Khobar Coastal Rd, Khobar 31982 (Saudi Arabia)
2014-08-15
Hysterosalpingography (HSG) is the most frequently used diagnostic tool to evaluate the endometrial cavity and fallopian tubes using conventional x-ray or fluoroscopy. Determination of patient radiation dose values from x-ray examinations provides useful guidance on where best to concentrate efforts on patient dose reduction in order to optimize the protection of patients. The aims of this study were to measure the patients' entrance surface air kerma (ESAK) and effective doses, and to compare practices between different hospitals in Sudan. ESAK was measured for patients using calibrated thermoluminescent dosimeters (TLDs, GR-200A). Effective doses were estimated using National Radiological Protection Board (NRPB) software. This study was conducted in five radiological departments: two teaching hospitals (A and D), two private hospitals (B and C), and one university hospital (E). The mean ESD was 20.1, 28.9, 13.6, 58.65, 35.7, 22.4, and 19.6 mGy for hospitals A, B, C, D, and E, respectively. The mean effective dose was 2.4, 3.5, 1.6, 7.1, and 4.3 mSv in the same order. The study showed wide variations in the ESDs, with three of the hospitals having values above the internationally reported values. The number of x-ray images, fluoroscopy time, operator skills, x-ray machine type, and clinical complexity of the procedures were shown to be major contributors to the variations reported. The results demonstrated the need for standardization of technique throughout the hospitals and suggest that there is a need to optimize the procedures. Local DRLs were proposed for the entire procedures.
Simplified Procedure for Estimating Epitaxy of La2Zr2O7-Buffered NiW RABITS Using XRD
Rikel, Mark O. [Nexans Superconductors; Isfort, Dirk [Nexans Superconductors; Klein, Marcel [Nexans Superconductors; Ehrenberg, Jurgen [Nexans Superconductors; Bock, Joachim [Nexans Superconductors; Specht, Eliot D [ORNL; Sun-Wagener, Ming [Fraunhofer-Institut fur Silicatforschung, Wurzburg; Weber, Oxana [Fraunhofer-Institut fur Silicatforschung, Wurzburg; Sporn, Dieter [Fraunhofer-Institut fur Silicatforschung, Wurzburg; Engel, Sebastian [Evico; de Haas, Oliver [Evico; Semerad, Robert [Theva Dunnschichttechnik, Germany; Schubert, Margitta [IFW Dresden; Holzapfel, Bernhard [IFW Dresden
2009-01-01
A procedure is developed for assessing the epitaxy of La(2-x)Zr(2+x)O(7) (LZO) layers on NiW RABITS. Comparing XRD patterns (theta/2-theta scans and 2D rocking curves) of LZO films of known thickness (from ellipsometry or reflectometry measurements) with those of standard samples (a 100% epitaxial LZO film and an isotropic LZO pellet of known density), we estimate the epitaxial (EF) and polycrystalline (PF) fractions of LZO within the layer. The procedure was tested using MOD-LZO(100 nm)/NiW tape samples with EF varying from 3 to 90% (reproducibly prepared by varying the humidity of the Ar-5%H2 gas during heat treatment). Qualitative agreement with RHEED and quantitative (within 10%) agreement with EBSD results was shown. Correlation between EF and Jc in a 600 nm thick YBCO layer deposited on MOD-LZO/NiW using thermal coevaporation enables us to impose an EF = 80% margin on the quality of the LZO layer for this particular conductor architecture.
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
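The segment-averaging behind such spectral estimates (more, shorter windows trade frequency resolution for lower random error) can be illustrated with a naive Welch-style periodogram; the signal below is a pure tone, not photoreceptor data, and window tapering is omitted for brevity:

```python
import cmath, math

def periodogram(x):
    """Naive DFT periodogram of one segment (O(n^2), fine for a demo)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

def welch_psd(x, seg_len):
    """Welch-style average of per-segment periodograms: shorter segments
    give more averages (less random error) but coarser resolution,
    the bias/variance trade-off discussed in the abstract."""
    segs = [x[i:i + seg_len]
            for i in range(0, len(x) - seg_len + 1, seg_len)]
    specs = [periodogram(s) for s in segs]
    return [sum(col) / len(specs) for col in zip(*specs)]

# pure tone at 1/8 cycles per sample: power concentrates in bin 2
# of a 16-point segment (since 2/16 = 1/8)
x = [math.sin(2 * math.pi * t / 8) for t in range(64)]
psd = welch_psd(x, 16)
print(psd.index(max(psd)))  # peak bin
```

The proposed algorithm in the abstract goes further, searching over window sizes for the point where the bias and random errors cancel.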
Liu, Yan; Wu, Amery D.; Zumbo, Bruno D.
2010-01-01
In a recent Monte Carlo simulation study, Liu and Zumbo showed that outliers can severely inflate the estimates of Cronbach's coefficient alpha for continuous item response data--visual analogue response format. Little, however, is known about the effect of outliers for ordinal item response data--also commonly referred to as Likert, Likert-type,…
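The statistic in question, Cronbach's coefficient alpha, and the outlier inflation effect can be reproduced in a few lines (the item scores are invented, not the simulation's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k, n = len(items), len(items[0])
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(c) for c in items) / var(totals))

# weakly correlated ratings (hypothetical), then one extreme outlier
# respondent who scores 100 on every item
base = [[1, 2, 3, 4, 5, 3], [3, 1, 4, 2, 5, 2], [2, 4, 1, 5, 3, 4]]
with_outlier = [col + [100] for col in base]
print(round(cronbach_alpha(base), 2))
print(round(cronbach_alpha(with_outlier), 2))  # inflated toward 1
```

A single extreme respondent induces spurious correlation among all items, which is exactly the inflation mechanism Liu and Zumbo describe.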
J. Piątkowski
2012-12-01
Purpose: The main purpose of the study was to determine a methodology for estimating operational reliability based on the statistical results of abrasive wear testing. Design/methodology/approach: For the research, a traditional tribological system, i.e. a friction pair of AlSi17CuNiMg silumin in contact with spheroidal graphite cast iron of grade EN-GJN-200, was chosen. Conditions of dry friction were assumed. This system was chosen based on the mechanical cooperation between the cylinder (silumin) and piston rings (spheroidal graphite cast iron) in conventional internal combustion piston engines with spark ignition. Findings: Using material parameters of the cylinder and piston rings, the nominal losses qualifying the cylinder for repair and the maximum weight losses that can be tolerated were determined. Based on the theoretical number of engine revolutions to repair and the stress acting on the cylinder bearing surface, the maximum distance that a motor vehicle can travel before seizure of the cylinder occurs was calculated. These results were the basis for a statistical analysis carried out with the Weibull modulus, the end result of which was the estimation of material reliability (the survival probability of the tribological system) and the determination of a pre-operation warranty period of the tribological system. Research limitations/implications: The analysis of the Weibull distribution modulus used to estimate the reliability of the tribological cylinder-ring system enabled the determination of an approximate theoretical time of failure-free running of the combustion engine. Originality/value: The results are valuable statistical data, and the methodology proposed in this paper can be used to determine a theoretical lifetime of the combustion engine.
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Insufficient sample sizes at smaller area levels cause direct estimation of poverty indicators to produce high standard errors, so analyses based on them are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method under maximum likelihood (ML) does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
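The EBLUP idea of blending a noisy direct estimate with a model-based synthetic one can be illustrated with the Fay-Herriot composite form (variances treated as known here; all numbers are hypothetical):

```python
def eblup_area_estimate(direct_est, direct_var, synthetic_est, model_var):
    """Composite (EBLUP-style) small area estimate: the shrinkage weight
    gamma = model_var / (model_var + direct_var) blends the noisy direct
    survey estimate with a model-based synthetic estimate (Fay-Herriot
    form). In practice the variances are themselves estimated, e.g. by
    REML, which this sketch does not show."""
    gamma = model_var / (model_var + direct_var)
    return gamma * direct_est + (1 - gamma) * synthetic_est

# a small area whose direct estimate is very noisy leans on the model:
# gamma = 100 / (100 + 400) = 0.2
print(round(eblup_area_estimate(120.0, 400.0, 100.0, 100.0), 2))  # 104.0
```

The larger the sampling variance of the direct estimate relative to the between-area model variance, the harder the estimate is shrunk toward the synthetic value, which is what reduces the MSE in small areas.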
Watterson, J.R.
1985-01-01
The presence of bacterial spores of the Bacillus cereus group in soils and stream sediments appears to be a sensitive indicator of several types of concealed mineral deposits, including vein-type gold deposits. The B. cereus assay is rapid, inexpensive, and inherently reproducible. The test, currently under investigation for its potential in mineral exploration, is recommended for use on a research basis. Among the aerobic spore-forming bacilli, only B. cereus and closely related strains produce an opaque zone in egg-yolk emulsion agar. This characteristic, also known as the Nagler or lecitho-vitellin reaction, has long been used to rapidly identify and estimate presumptive B. cereus. The test is here adapted to permit rapid estimation of B. cereus spores in soil and stream-sediment samples. The relative standard deviation was 10.3% on counts obtained from two 40-replicate pour-plate determinations. As many as 40 samples per day can be processed. Enough procedural detail is included to permit investigation of the test in conventional geochemical laboratories using standard microbiological safety precautions.
El-Minshawy Osama
2010-01-01
Glomerular filtration rate (GFR) is considered the best overall index of renal function currently in use. Measurement of the 24-hour urine/plasma creatinine ratio (UV/P) is usually used for estimation of GFR; however, little is known about its accuracy in different stages of chronic kidney disease (CKD). The aim is to evaluate the performance of UV/P in the classification of CKD by comparing it with isotopic GFR (iGFR). 136 patients with CKD were enrolled in this study; 80 (59%) were males and 48 (35%) were diabetics. Mean age was 46 ± 13 years. Creatinine clearance (CrCl) estimated by UV/P and Cockcroft-Gault (CG) was done for all patients, with iGFR as the reference value. The accuracy of UV/P was 10%, 31%, and 49% within ±10%, ±30%, and ±50% error, respectively (r² = 0.44). CG gave a better performance, even when the analysis was restricted to diabetics only: the accuracy of CG was 19%, 47%, and 72% within ±10%, ±30%, and ±50% errors, respectively (r² = 0.63). Both equations gave poor classification of CKD. In conclusion, UV/P has poor accuracy in the estimation of GFR, and the accuracy worsened as kidney disease became more severe. We conclude that 24-hour CrCl is not a good substitute for measurement of GFR in patients with CKD.
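The two clearance estimates compared above are closed-form; a sketch with hypothetical patient values (not study data):

```python
def cockcroft_gault(age, weight_kg, serum_cr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance (mL/min):
    CrCl = (140 - age) * weight / (72 * serum creatinine), x 0.85 if female."""
    crcl = (140 - age) * weight_kg / (72.0 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

def crcl_from_24h_urine(urine_cr_mg_dl, urine_vol_ml, plasma_cr_mg_dl,
                        minutes=1440):
    """24-hour urine creatinine clearance (the UV/P ratio), mL/min:
    urine creatinine x urine flow rate / plasma creatinine."""
    return urine_cr_mg_dl * (urine_vol_ml / minutes) / plasma_cr_mg_dl

# hypothetical 46-year-old male, 70 kg, serum creatinine 1.2 mg/dL
print(round(cockcroft_gault(46, 70, 1.2), 1))        # 76.2
# hypothetical 24-h collection: 1440 mL urine, urine Cr 100 mg/dL
print(round(crcl_from_24h_urine(100, 1440, 1.2), 1))  # 83.3
```

Both are only approximations of GFR, which is the abstract's point: neither tracked iGFR closely, especially in advanced CKD.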
Petersen, G I; Stein, H H
2006-08-01
An experiment was conducted to evaluate a novel procedure for estimating endogenous losses of P and for measuring the apparent total tract digestibility (ATTD) and true total tract digestibility (TTTD) of P in 5 inorganic P sources fed to growing pigs. The P sources were dicalcium phosphate (DCP), monocalcium phosphate (MCP) with 50% purity (MCP50), MCP with 70% purity (MCP70), MCP with 100% purity (MCP100), and monosodium phosphate (MSP). A gelatin-based, P-free basal diet was formulated and used to estimate endogenous losses of P. Five P-containing diets were formulated by adding 0.20% total P from each of the inorganic P sources to the basal diet. A seventh diet was formulated by adding 0.16% P from MCP70 to the basal diet. All diets were fed to 7 growing pigs in a 7 x 7 Latin square design, and urine and feces were collected during 5 d of each period. The endogenous loss of P was estimated as 139 +/- 18 mg/kg of DMI. The ATTD of P in MSP was greater than in DCP, MCP50, and MCP70 (91.9 vs. 81.5, 82.6, and 81.7%, respectively). In MSP, the TTTD of P was 98.2%. This value was greater than in DCP, MCP50, and MCP70 (88.4, 89.5, and 88.6%, respectively). The ATTD and the TTTD for MCP70 were similar in diets formulated to contain 0.16 and 0.20% total P. Results from the current experiment demonstrate that a P-free diet may be used to measure endogenous losses of P in pigs. By adding inorganic P sources to this diet, the ATTD of P can be directly measured and the TTTD of P may be calculated for each source of P.
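The ATTD and TTTD definitions implied above differ only in correcting fecal P for the basal endogenous loss; a sketch with hypothetical intake and output values (the 139 mg/kg DMI figure is the abstract's estimate):

```python
def attd(intake_g, fecal_output_g):
    """Apparent total tract digestibility of P (%):
    (intake - fecal output) / intake."""
    return 100.0 * (intake_g - fecal_output_g) / intake_g

def tttd(intake_g, fecal_output_g, endogenous_mg_per_kg_dmi, dmi_kg):
    """True total tract digestibility: fecal P is first corrected for
    the basal endogenous loss (estimated at 139 mg/kg DMI in the study)."""
    endogenous_g = endogenous_mg_per_kg_dmi * dmi_kg / 1000.0
    diet_fecal_g = fecal_output_g - endogenous_g
    return 100.0 * (intake_g - diet_fecal_g) / intake_g

# hypothetical pig: 5 g P intake, 0.9 g fecal P, 2 kg DMI per day
print(round(attd(5.0, 0.9), 1))                  # 82.0
print(round(tttd(5.0, 0.9, 139.0, 2.0), 1))      # 87.6
```

TTTD always exceeds ATTD because part of the fecal P is of endogenous rather than dietary origin, matching the pattern of the reported values.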
Perez Sanchez-Canete, Enrique; Scott, Russell L.; Barron-Gafford, Greg; van Haren, Joost
2016-04-01
Soil CO2 fluxes represent a major source of CO2 emissions, where small changes in their estimation provoke large changes in the quantification of the global carbon cycle. Recently, the gradient method, which employs soil CO2 probes at multiple depths, has been offered as a way to inexpensively and continuously measure soil CO2 flux. However, the use of the gradient method can yield inappropriate flux estimates due to uncertainties associated mainly with an inappropriate determination of the soil diffusion coefficient. Therefore, in-situ methods to determine the diffusion coefficient are necessary to obtain accurate CO2 fluxes. Here, the data obtained during one year with two automatic soil CO2 chambers, along with CO2 molar fraction data from 4 probes at 10 cm depth, were used to determine a model of the soil diffusion coefficient (Ds), which was later applied to obtain soil CO2 fluxes by the gradient method. Another Ds model was obtained by injection and sampling of SF6 during several campaigns with different soil water content levels. Both Ds models obtained in situ were compared with another 13 published Ds models. We addressed three questions: 1) Can we use a previously published model, or do we need to determine Ds in situ? 2) How accurate are the CO2 flux estimates obtained by the gradient method for different Ds models, compared with chamber-measured CO2 fluxes? 3) Can we take a limited number of chamber measurements to obtain a good Ds model, or do we need longer calibration periods? Comparing the cumulative soil respiration for the different diffusion models, we found that the model with empirical calibration to the soil chambers had the best agreement with the chamber fluxes (the SF6 model underestimated chamber fluxes by 23%, and the published models ranged from an underestimate of 78% to an overestimate of 14%). Most importantly, we found that a few days of measurements with a soil respiration chamber (with widely varying soil water content) are enough to build
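The gradient method itself is Fick's first law applied to CO2 measured at two depths; the sketch below uses hypothetical concentrations and diffusivity, and omits the ppm-to-molar unit conversion a real implementation needs:

```python
def gradient_flux(c_shallow, c_deep, dz, ds):
    """Soil CO2 efflux by the gradient method (Fick's first law):
    F = Ds * (C_deep - C_shallow) / dz. Positive F means upward efflux
    when CO2 increases with depth. Concentrations in mol m^-3, dz in m,
    Ds in m^2 s^-1; all values below are hypothetical."""
    return ds * (c_deep - c_shallow) / dz

# CO2 at 5 cm and 15 cm depth, 10 cm apart, with an assumed Ds
flux = gradient_flux(0.06, 0.16, 0.10, 2.0e-6)  # mol m^-2 s^-1
print(flux)
```

The abstract's point is that the result is only as good as Ds: swapping in a poorly calibrated diffusion model rescales every flux estimate, which is why the chamber-calibrated Ds performed best.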
Michael O. Harris-Love
2016-02-01
Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97-.99, p < .001). Mean differences between the echogenicity estimates obtained with the RMT and FHT methods were .87 grayscale levels (95% CI [.54-1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection
N. Poornima
2013-01-01
This work projects photoluminescence (PL) as an alternative technique for estimating the order of resistivity of zinc oxide (ZnO) thin films. ZnO thin films, deposited using chemical spray pyrolysis (CSP) with varying deposition parameters such as solvent, spray rate, and pH of the precursor, have been used for this study. Variation in the deposition conditions has a strong impact on the luminescence properties as well as the resistivity. Two emissions could be recorded for all samples: the near band edge emission (NBE) at 380 nm and the deep level emission (DLE) at ~500 nm, which are competing in nature. It is observed that the ratio of the DLE to NBE intensities (IDLE/INBE) can be reduced by controlling oxygen incorporation in the sample. Resistivity measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variation of IDLE/INBE and resistivity for samples prepared under different deposition conditions is similar in nature, with IDLE/INBE always lower than the resistivity by an order of magnitude for all samples. Thus, from PL measurements alone, the order of resistivity of the samples can be estimated.
Harris-Love, Michael O; Seamon, Bryant A; Teixeira, Carla; Ismail, Catheeja
2016-01-01
Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). The mean difference between the echogenicity estimates obtained with the RMT and FHT methods was .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. Uniform coefficients of determination (R² = .96–.99) were observed across ROI selection methods and image analysis platforms. Conclusions. Both Photoshop and ImageJ are suitable for the post-acquisition image analysis of tissue echogenicity in older adults.
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-06-01
We consider the Galerkin boundary element method (BEM) for weakly singular integral equations of the first kind in 2D. We analyze a residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin and extracts the heart rate (HR) from the imaged skin areas. Being a non-contact, sensor-free technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the MATLAB® Computer Vision toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions of interest, and the number of video frames used for data processing.
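The core of a basic VPPG pipeline — tracking the mean green-channel intensity of a skin ROI frame by frame and picking the dominant frequency in the physiological band — can be sketched as follows. This is an illustrative minimal version, not either of the algorithms evaluated in the study; the synthetic sinusoid stands in for real video data:

```python
import numpy as np

def estimate_heart_rate(green_means, fps, lo_bpm=40, hi_bpm=180):
    """Estimate HR (bpm) from the per-frame mean green-channel intensity of a
    skin ROI, via the dominant frequency of the detrended signal."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                                  # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)      # frequency grid in Hz
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic check: a 72-bpm pulse sampled at 30 fps for 10 s.
fps, bpm = 30.0, 72.0
t = np.arange(0, 10, 1.0 / fps)
signal = 128 + 0.5 * np.sin(2 * np.pi * (bpm / 60.0) * t)
print(round(estimate_heart_rate(signal, fps)))  # → 72
```

Real pipelines add ROI detection/tracking (e.g. KLT) and are sensitive to motion and lighting, exactly as the abstract notes.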
Rebeca Mora
2007-06-01
Full Text Available A reliable bioassay procedure was developed to test ingested Bacillus thuringiensis (Bt) toxins on the rice delphacid Tagosodes orizicolus. Initially, several colonies were established under greenhouse conditions, using rice plants to nurture the insect. For the bioassay, an in vitro feeding system was developed for third to fourth instar nymphs. Insects were fed through Parafilm membranes on sugar (10% sucrose) and honey bee (1:48 vol/vol) solutions, with an observed natural mortality of 10-15% and 0-5%, respectively. Results were reproducible under controlled conditions during the assay (18±0.1 °C at night and 28±0.1 °C during the day, 80% RH and a 12:12 day:light photoperiod). In addition, natural mortality was quantified in insect colonies collected from three different geographic areas of Costa Rica, with no significant differences between colonies under controlled conditions. Finally, bioassays were performed to evaluate the toxicity of a Bt collection on T. orizicolus. A preliminary sample of twenty-seven Bt strains was evaluated in coarse bioassays using three loops of sporulated colonies in 9 ml of liquid diet; the strains that exhibited higher percentages of T. orizicolus mortality were further analyzed in bioassays using lyophilized spores and crystals (1 mg/ml). As a result, strains 26-O-to, 40-X-m, 43S-d and 23-O-to, isolated from homopteran insects, showed mortalities of 74, 96, 44 and 82% respectively, while HD-137, HD-1 and Bti showed 19, 83 and 95% mortalities. Controls showed mortalities between 0 and 10% in all bioassays. This is the first report of a reliable bioassay procedure to evaluate per os toxicity for a homopteran species using Bacillus thuringiensis strains. Rev. Biol. Trop. 55 (2): 373-383. Epub 2007 June 29.
Bayesian system reliability assessment under fuzzy environments
Wu, H.-C
2004-03-01
The Bayesian system reliability assessment under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed as fuzzy random variables with fuzzy prior distributions. The (conventional) Bayes estimation method will be used to create the fuzzy Bayes point estimator of system reliability by invoking the well-known theorem called 'Resolution Identity' in fuzzy sets theory. On the other hand, we also provide the computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. In order to achieve this purpose, we transform the original problem into a nonlinear programming problem. This nonlinear programming problem is then divided into four subproblems for the purpose of simplifying computation. Finally, the subproblems can be solved by using any commercial optimizers, e.g. GAMS or LINGO.
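Setting the fuzziness aside, the conventional Bayes point estimator that the fuzzy procedure generalizes can be illustrated for a binomial demand model with a Beta prior. A minimal sketch; the prior parameters and trial counts are hypothetical:

```python
def bayes_reliability(successes, trials, a=1.0, b=1.0):
    """Conventional Bayes point estimate of component reliability: the
    posterior mean of a Beta(a, b) prior updated with Bernoulli trial data
    (successes out of trials demands)."""
    return (a + successes) / (a + b + trials)

# 48 successful demands out of 50 trials, uniform Beta(1, 1) prior:
print(bayes_reliability(48, 50))  # → 49/52 ≈ 0.942
```

The paper's contribution is to let the prior parameters (and hence this point estimate) be fuzzy numbers and to compute the membership degree of any given crisp estimate via nonlinear programming.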
Reliability prediction from burn-in data fit to reliability models
Bernstein, Joseph
2014-01-01
This work will educate chip and system designers on a method for accurately predicting circuit and system reliability in order to estimate the failures that will occur in the field as a function of operating conditions at the chip level. The book combines the knowledge taught in many reliability publications and illustrates how to use the information published by semiconductor manufacturers, in combination with the HTOL end-of-life testing that chip suppliers already perform as part of their standard qualification procedure, to make accurate reliability predictions.
DeVries, R. J.; Hann, D. A.; Schramm, H.L.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated that water temperature (T) and water depth (D) were significant predictors of capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and on deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any effort to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
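The reported logistic model can be evaluated directly. A short sketch; the units of depth D are an assumption (metres), since the abstract does not state them:

```python
import math

def capture_probability(temp_c, depth_m):
    """Capture probability from the reported logistic model
    Y = -1.75 - 0.06*T + 0.10*D, mapped through the logistic function."""
    y = -1.75 - 0.06 * temp_c + 0.10 * depth_m
    return 1.0 / (1.0 + math.exp(-y))

# Colder, deeper water raises the predicted capture probability:
print(capture_probability(10, 20) > capture_probability(25, 5))  # → True
```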
A fast-reliable methodology to estimate the concentration of rutile or anatase phases of TiO2
Zanatta, A. R.
2017-07-01
Titanium dioxide (TiO2) is a low-cost, chemically inert material that has become the basis of many modern applications ranging from, for example, cosmetics to photovoltaics. TiO2 exists in three different crystal phases (Rutile, Anatase and, less commonly, Brookite) and, in most cases, the presence or relative amount of these phases is essential in deciding the final application of TiO2 and its related efficiency. Traditionally, X-ray diffraction has been chosen to study TiO2, providing both phase identification and the Rutile-to-Anatase ratio. Similar information can be obtained from Raman scattering spectroscopy which, additionally, is versatile and involves rather simple instrumentation. Motivated by these aspects, this work considered various TiO2 Rutile+Anatase powder mixtures and their corresponding Raman spectra. Essentially, the method described here is based on the fact that the Rutile and Anatase crystal phases have distinctive phonon features, and therefore the composition of the TiO2 mixtures can be readily assessed from their Raman spectra. The experimental results clearly demonstrate the suitability of Raman spectroscopy for estimating the concentration of Rutile or Anatase in TiO2 and are expected to influence the study of TiO2-related thin films, interfaces, systems with reduced dimensions, and devices such as photocatalytic and solar cells.
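A composition estimate of this kind reduces to an intensity ratio of one characteristic phonon band per phase. A minimal sketch; the calibration constant k is an illustrative placeholder that, as in the paper, would have to be fitted to reference mixtures of known composition:

```python
def anatase_fraction(i_anatase, i_rutile, k=1.0):
    """Estimate the Anatase fraction of a Rutile+Anatase mixture from the
    integrated intensities of one characteristic Raman band of each phase.
    k is an empirical, instrument- and band-dependent calibration constant;
    k = 1 here is only a placeholder."""
    return i_anatase / (i_anatase + k * i_rutile)

print(anatase_fraction(300.0, 100.0))  # → 0.75
```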
Algorithm of communication network reliability combining links, nodes and capacity
无
2005-01-01
The concept of a capacity-weighted normalized reliability index is introduced, combining the communication capacity, the reliability probability of the exchange nodes and the reliability probability of the transmission links, in order to estimate the reliability performance of a communication network comprehensively and objectively. To realize a fully algebraic calculation, a key problem must be solved: finding an algorithm that calculates all the routes between the nodes of a network. A logic-algebraic algorithm for network routes is studied, and based on this algorithm a fully algebraic algorithm for the capacity-weighted normalized reliability index is developed. The algorithm is easy to program and completes the calculation of the reliability index, which is the foundation of a comprehensive and objective estimation of communication networks. The calculation procedure of the algorithm is illustrated through typical examples, and the results verify the algorithm.
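The key subproblem — enumerating all routes between two nodes — can be sketched with a simple depth-first search. This is an illustrative stand-in for the logic-algebraic route algorithm described above, not that algorithm itself:

```python
def all_simple_paths(adj, src, dst, path=None):
    """Enumerate all simple (loop-free) routes between two nodes of an
    undirected network given as an adjacency dict {node: [neighbours]}."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    routes = []
    for nxt in adj[src]:
        if nxt not in path:                 # forbid revisiting a node
            routes.extend(all_simple_paths(adj, nxt, dst, path))
    return routes

# A 4-node ring has exactly two routes between opposite corners:
ring = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
print(sorted(all_simple_paths(ring, 1, 3)))  # → [[1, 2, 3], [1, 4, 3]]
```

Each enumerated route can then be weighted by its link/node reliabilities and capacities to build the normalized index.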
Ellwood, Brooks B.
1982-07-01
Flow directions are estimated from the measurement of the magnetic fabric of 106 samples, collected at 18 sites in four welded tuff units in the central San Juan Mountains of southern Colorado. The estimates assume that the tuffs generally flowed directly away from the extrusive vents and that the lineations of magnetic grains within the tuffs represent the flow direction at individual sites. Errors in the estimation may arise from topographic variation, rheomorphism (post-emplacement mass flow) within the tuff, and other factors. Magnetic lineation is defined as the site mean anisotropy of magnetic susceptibility maximum azimuth. A test on the flow directions for individual units is based on the projection of lineation azimuths and their intersection within or near the known source caldera for the tuff. This test is positive for the four units examined. Paleomagnetic results for these tuffs are probably reliable indicators of the geomagnetic field direction in southwest Colorado, during the time (28.2-26.5 Ma) of emplacement.
Improving machinery reliability
Bloch, Heinz P
1998-01-01
This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.
Bhargava, Kapilesh, E-mail: kapilesh_66@yahoo.co.u [Architecture and Civil Engineering Division, Bhabha Atomic Research Center, Trombay, Mumbai 400 085 (India); Mori, Yasuhiro [Graduate School of Environmental Studies, Nagoya University, Nagoya 464-8603 (Japan); Ghosh, A.K. [Reactor Safety Division, Bhabha Atomic Research Center, Trombay, Mumbai 400 085 (India)
2011-05-15
Research highlights: Predictive models for corrosion-induced damages in RC structures. Formulations for time-dependent flexural and shear strengths of corroded RC beams. Methodology for mean and c.o.v. for time-dependent strengths of corroded RC beams. Simple estimation of mean and c.o.v. for flexural strength with loss of bond. - Abstract: The structural deterioration of reinforced concrete (RC) structures due to reinforcement corrosion is a major worldwide problem. Damages to RC structures due to reinforcement corrosion manifest in the form of expansion, cracking and eventual spalling of the cover concrete; thereby resulting in serviceability and durability degradation of such structures. In addition to loss of cover, RC structure may suffer structural damages due to loss of reinforcement cross-sectional area, and loss of bond between corroded reinforcement and surrounding cracked concrete, sometimes to the extent that the structural failure becomes inevitable. This paper forms the first part of a study which addresses time-dependent reliability analyses of RC beams affected by reinforcement corrosion. In this paper initially the predictive models are presented for the quantitative assessment of time-dependent damages in RC beams, recognized as loss of mass and cross-sectional area of reinforcing bar, loss of concrete section owing to the peeling of cover concrete, and loss of bond between corroded reinforcement and surrounding cracked concrete. Then these models have been used to present analytical formulations for evaluating time-dependent flexural and shear strengths of corroded RC beams, based on the standard composite mechanics expressions for RC sections. Further by considering variability in the identified basic variables that could affect the time-dependent strengths of corrosion-affected RC beams, the estimation of statistical descriptions for the time-dependent strengths is presented for a typical simply supported RC beam. The statistical descriptions
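As a simple illustration of how loss of reinforcement cross-sectional area enters such time-dependent strength models, consider a uniform-corrosion sketch. This is not the paper's predictive models; the bar diameter and corrosion rate below are hypothetical:

```python
import math

def rebar_area_mm2(d0_mm, corrosion_rate_mm_per_yr, years):
    """Residual cross-sectional area of a uniformly corroding bar:
    A(t) = pi/4 * (d0 - 2*r*t)^2, clipped at zero once the bar is consumed
    (illustrative uniform-corrosion model only)."""
    d = max(d0_mm - 2.0 * corrosion_rate_mm_per_yr * years, 0.0)
    return math.pi / 4.0 * d * d

# A 16 mm bar corroding at 0.1 mm/yr per face is reduced to 14 mm in 10 yr:
print(round(rebar_area_mm2(16, 0.1, 10), 1))  # → 153.9
```

In the paper, this kind of section loss is combined with cover spalling and bond loss to obtain time-dependent flexural and shear strengths.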
Software Reliability: Estimation and Prediction
1992-12-31
Cumulative failure counts per routine over successive reporting periods: COMPUTE 0, 2, 6, 9, 15, 23; DATAVAL 0, 3, 6, 11, 16, 23; INIT 0, 1, 3, 5, 9, 15; IU4IRE 0, 1, 3, 4, 6, 8; INTERI 0, 4, 1, 2, 2, 3; LOGIC 0, 4, 10, 18, 30, 50; TOTAL 0, 15, 29, 50, 78, 122. Failure types such as "compute", "dataval", "init", "intere", "interi" and "logic" are distinguished in the ERBS project acceptance phase, for example. The number of failures of each type
Reliable Function Approximation and Estimation
2016-08-16
A geometric mean inequality for products of three matrices. A. Israel, F. Krahmer, and R. Ward. Linear Algebra and its Applications 488, 2016, 1-12. … Standard compressed sensing theory is valid only for a restrictive set of dictionaries, limiting the scope of applications. In this award, the PI developed … low-order interactions. The weighted sparsity model allows for more freedom than linear regression but provides sufficient structure to extend
Jasbir Arora
2016-06-01
Full Text Available The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0–3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. The absolute mean error of age was calculated for both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect when estimating age by Gustafson's method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.
Haupt, Lois J; Kazmi, Faraz; Ogilvie, Brian W; Buckley, David B; Smith, Brian D; Leatherman, Sarah; Paris, Brandy; Parkinson, Oliver; Parkinson, Andrew
2015-11-01
In the present study, we conducted a retrospective analysis of 343 in vitro experiments to ascertain whether observed (experimentally determined) values of Ki for reversible cytochrome P450 (P450) inhibition could be reliably predicted by dividing the corresponding IC₅₀ values by two, based on the relationship (for competitive inhibition) in which Ki = IC₅₀/2 when [S] (substrate concentration) = Km (Michaelis-Menten constant). Values of Ki and IC₅₀ were determined under the following conditions: 1) the concentration of P450 marker substrate, [S], was equal to Km (for IC₅₀ determinations) and spanned Km (for Ki determinations); 2) the substrate incubation time was short (5 minutes) to minimize metabolism-dependent inhibition and inhibitor depletion; and 3) the concentration of human liver microsomes was low (0.1 mg/ml or less) to maximize the unbound fraction of inhibitor. Under these conditions, predicted Ki values, based on IC₅₀/2, correlated strongly with experimentally observed Ki determinations [r = 0.940; average fold error (AFE) = 1.10]. Of the 343 predicted Ki values, 316 (92%) were within a factor of 2 of the experimentally determined Ki values, and only one value fell outside a 3-fold range. In the case of noncompetitive inhibitors, Ki values predicted from IC₅₀/2 values were overestimated by a factor of nearly 2 (AFE = 1.85; n = 13), which is to be expected because, for noncompetitive inhibition, Ki = IC₅₀ (not IC₅₀/2). The results suggest that, under appropriate experimental conditions with the substrate concentration equal to Km, values of Ki for direct, reversible inhibition can be reliably estimated from values of IC₅₀/2.
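The prediction rule tested here is a direct application of the Cheng-Prusoff relationship at [S] = Km; a minimal sketch:

```python
def predict_ki(ic50, mechanism="competitive"):
    """Predict Ki from an IC50 measured at [S] = Km. Per the Cheng-Prusoff
    relationship IC50 = Ki*(1 + [S]/Km): Ki = IC50/2 for competitive
    inhibition, and Ki = IC50 for noncompetitive inhibition."""
    if mechanism == "competitive":
        return ic50 / 2.0
    if mechanism == "noncompetitive":
        return ic50
    raise ValueError("unsupported mechanism")

print(predict_ki(4.0))                    # → 2.0 (competitive)
print(predict_ki(4.0, "noncompetitive"))  # → 4.0
```

This is why the study found IC50/2 accurate for competitive inhibitors but off by a factor of about 2 for noncompetitive ones.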
Donk, Roland D; Fehlings, Michael G; Verhagen, Wim I M; Arnts, Hisse; Groenewoud, Hans; Verbeek, André L M; Bartels, Ronald H M A
2017-05-01
OBJECTIVE Although there is increasing recognition of the importance of cervical spinal sagittal balance, there is a lack of consensus as to the optimal method to accurately assess cervical sagittal alignment. Cervical alignment is important for surgical decision making. Sagittal balance of the cervical spine is generally assessed using one of two methods: measuring the angle between C-2 and C-7, or drawing a line between C-2 and C-7. Here, the best method to assess sagittal alignment of the cervical spine is investigated. METHODS Data from 138 patients enrolled in a randomized controlled trial (Procon) were analyzed. Two investigators independently measured the angle between C-2 and C-7 by using Harrison's posterior tangent method, and also estimated the shape of the sagittal curve by using a modified Toyama method. The mean angles of each quantitative assessment of the sagittal alignment were calculated and the results were compared. The interrater reliability for both methods was estimated using Cronbach's alpha. RESULTS For both methods the interrater reliability was high: for the posterior tangent method it was 0.907 and for the modified Toyama technique it was 0.984. For a lordotic cervical spine, defined by the modified Toyama method, the mean angle (defined by Harrison's posterior tangent method) was 23.4° ± 9.9° (range 0.4°-52.4°), for a kyphotic cervical spine it was -2.2° ± 9.2° (range -16.1° to 16.9°), and for a straight cervical spine it was 10.5° ± 8.2° (range -11° to 36°). CONCLUSIONS An absolute measurement of the angle between C-2 and C-7 does not unequivocally define the sagittal cervical alignment. As can be seen from the minimum and maximum values, even a positive angle between C-2 and C-7 could be present in a kyphotic spine. For this purpose, the modified Toyama method (drawing a line from the posterior inferior part of the vertebral body of C-2 to the posterior upper part of the vertebral body of C-7 without any
Marek, W; Marek, E; Friz, Y; Vogel, P; Mückenhoff, K; Kotschy-Lang, N
2010-03-01
AIMS OF THE INVESTIGATION: The repetition of the 6-minute walk test (6MWT) in older patients is frequently performed in order to document the maximal walking distance, although it is not recommended in any guidelines on exercise tests and although there is common consent to save clinical resources in terms of time and staff. Therefore, we have examined whether and to what extent the repetition of the walk test helps patients become more familiar with this kind of exercise test, so that the acquired physiological data reliably describe the physical fitness of the patients at the beginning and at the end of their clinical rehabilitation. 35 patients performed their walk tests before and after 3-4 weeks of clinical rehabilitation. Each test was repeated after one hour of recovery. The patients were instructed to walk for 6 minutes as fast as possible. They were equipped with a mobile pulse oximeter for recording oxygen saturation and heart rate. The distance, S, and the heart rate, fc, were measured and recorded every 30 seconds. The efficiency, E (E = S/6/fc), was calculated as the ratio of distance per minute to the mean heart rate during the test. In the first test the patients walked 416 +/- 63 m at a heart rate of 104.7 +/- 15.7 beats/min, and in the first repeated test 454 +/- 71 m at a heart rate of 106.3 +/- 17.4 beats/min. In the second test, after clinical therapy, they walked 438 +/- 58 m at a heart rate of 106.3 +/- 17.4 beats/min, and in the second repeated test 473 +/- 56 m at 108.6 +/- 13.2 beats/min. The difference in walking distance at entrance was 38.4 +/- 26.2 m (+9.3 +/- 6.2%), and at the end of clinical rehabilitation 35 +/- 26 m (+8.4 +/- 6.4%). Both differences were found to be independent of the distance of the first test, and they are not significantly different from each other. The efficiency was not significantly different in the initial and final test (0.673 +/- 0.129 and 0.689 +/- 0.085 m
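The efficiency index E = S/6/fc is a one-line computation; for example, using the group means of the first entrance test (note that the mean of individual ratios reported in the study need not equal the ratio of the group means):

```python
def walk_efficiency(distance_m, mean_heart_rate):
    """Efficiency index E = (distance per minute) / (mean heart rate) for a
    6-minute walk test, i.e. metres per minute per heart beat."""
    return (distance_m / 6.0) / mean_heart_rate

# First entrance test: 416 m at a mean heart rate of 104.7 beats/min.
print(round(walk_efficiency(416, 104.7), 3))  # → 0.662
```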
Borodkin, Gennady; Borodkin, Pavel; Khrennikov, Nikolay; Ryabinin, Yuriy; Adeev, Valeriy
2016-02-01
The Paper describes a new Russian Utility's regulatory document (RD EO) which has been recently developed and implemented since the beginning of 2013. This RD EO includes the procedure of RPV FNF monitoring and provides recommendations on how to predict fluence over the design lifetime taking into account results of FNF monitoring. The basic method of RPV neutron fluence monitoring is neutron transport calculations of FR in the vicinity of the RPV. Reliability of the calculation results should be validated by ex-vessel neutron-activation measurements, which were performed during different fuel cycles with different core loadings including new types of fuel.
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
Arcella, D; Leclercq, C
2005-01-01
The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings, is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check if it can provide conservative intake estimates as needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers was performed. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need for either using an alternative method to estimate exposure to flavourings in the procedure or for limiting intakes to the levels at which the safety was assessed.
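In outline, an MSDI-style per capita estimate divides annual industry poundage by the population and the days in a year, with a survey-correction factor. The sketch below is a simplified illustration only: the 0.6 correction factor and the exact formula are assumptions, not the regulatory definition:

```python
def per_capita_intake_ug_per_day(annual_production_kg, population,
                                 survey_correction=0.6):
    """Simplified MSDI-style per capita intake in micrograms/person/day.
    survey_correction is the assumed fraction of actual use captured by the
    industry survey (0.6 here is an illustrative assumption)."""
    micrograms = annual_production_kg * 1e9          # kg -> micrograms
    return micrograms / (population * survey_correction * 365.0)

# 100 kg/year of a flavouring used across a population of 100 million:
print(round(per_capita_intake_ug_per_day(100, 100_000_000), 3))  # → 4.566
```

The paper's criticism is that spreading poundage over the whole population in this way can understate intakes of brand-loyal high consumers by orders of magnitude.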
Neuenkirch, Andreas
2011-01-01
We study a least square-type estimator for an unknown parameter in the drift coefficient of a stochastic differential equation with additive fractional noise of Hurst parameter H>1/2. The estimator is based on discrete time observations of the stochastic differential equation, and using tools from ergodic theory and stochastic analysis we derive its strong consistency.
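For the Ornstein-Uhlenbeck case dX = -theta*X dt + dB^H, the least-square-type estimator has a closed form in the discrete observations. The sketch below checks it against a path driven by ordinary Brownian noise (H = 1/2), which is only a boundary case of the fractional setting (H > 1/2) studied here:

```python
import numpy as np

rng = np.random.default_rng(0)

def lse_drift(x, dt):
    """Least-squares-type estimator for theta in dX = -theta*X dt + dB^H:
    minimizing sum_i (X_{i+1} - X_i + theta*X_i*dt)^2 gives
    theta_hat = -sum_i X_i*(X_{i+1}-X_i) / (dt * sum_i X_i^2)."""
    dx = np.diff(x)
    return -np.dot(x[:-1], dx) / (dt * np.dot(x[:-1], x[:-1]))

# Euler-simulated path with true theta = 2 and standard Brownian noise:
theta, dt, n = 2.0, 0.001, 200_000
x = np.empty(n); x[0] = 1.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + noise[i]
print(lse_drift(x, dt))  # close to 2.0 for a long path
```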
Noguchi, Kyotaro; Tanikawa, Toko; Inagaki, Yoshiyuki; Ishizuka, Shigehiro
2017-06-01
Several recent studies have used the net sheet method to estimate fine root production rates in forest ecosystems, wherein net sheets are inserted into the soil and fine roots growing through them are observed. Although this method has advantages in terms of its easy handling and low cost, there are uncertainties in the estimates per unit soil volume or unit stand area, because the net sheet is a two-dimensional material. Therefore, this study aimed to establish calculation procedures for estimating fine root production rates from two-dimensional fine root data on net sheets. This study was conducted in a hinoki cypress (Chamaecyparis obtusa (Sieb. & Zucc.) Endl.) stand in western Japan. We estimated fine root production rates in length and volume from the number (RN) and cross-sectional area (RCSA) densities, respectively, for fine roots crossing the net sheets, which were then converted to dry mass values. For these calculations, we used empirical regression equations or theoretical equations between the RN or RCSA densities on the vertical walls of soil pits and fine root densities in length or volume, respectively, in the soil, wherein the theoretical equations assumed random orientation of the growing fine roots. Estimates of mean fine root production rates were obtained from the net sheets using these calculation procedures, with the empirical regression equations reflecting fine root orientation in the study site.
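Under the random-orientation assumption, the theoretical conversion from intersection counts on a sampling plane to root length per unit soil volume is the classical stereological relation L_V = 2·N_A. A minimal sketch, illustrative only and not the paper's site-specific regression equations:

```python
def root_length_density(intersections, plane_area_cm2):
    """Stereological estimate for isotropically (randomly) oriented roots:
    length per unit soil volume L_V (cm/cm^3) from the number of root
    intersections per unit plane area N_A (1/cm^2), via L_V = 2 * N_A."""
    return 2.0 * intersections / plane_area_cm2

# 50 root crossings counted on a 25 cm^2 net sheet:
print(root_length_density(50, 25.0))  # → 4.0 cm of root per cm^3 of soil
```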
Meyer, J. Patrick; Liu, Xiang; Mashburn, Andrew J.
2014-01-01
Researchers often use generalizability theory to estimate relative error variance and reliability in teaching observation measures. They also use it to plan future studies and design the best possible measurement procedures. However, designing the best possible measurement procedure comes at a cost, and researchers must stay within their budget…
1981-01-01
The specific objectives of the FY 1980-81 tasks are: (1) further refinements to the weighted aggregation procedure; (2) improved approaches for estimating within-stratum variance; (3) more intensive investigation of alternative sampling strategies such as full-frame sampling strategy, and (4) further developments in regard to a simulated approach for assessing the performance of the overall designed sampling and aggregation system.
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality when an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
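The splitting idea can be illustrated with a toy sketch. The function name and the simple contiguous split below are illustrative assumptions only; FastMG itself offers more refined (e.g. tree-based and random) splitting strategies:

```python
def split_alignment(seq_ids, max_size):
    """Split a list of sequence identifiers into non-overlapping
    sub-alignments of at most max_size sequences each, so that rate
    estimation can run on several small trees instead of one huge tree."""
    return [seq_ids[i:i + max_size] for i in range(0, len(seq_ids), max_size)]

# e.g. 10 sequences split into sub-alignments of at most 4:
parts = split_alignment(list(range(10)), 4)
```

Because tree-building cost grows much faster than linearly with the number of sequences, estimating rates on several small sub-alignments is far cheaper than on the full alignment.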
Hartzell, Allyson L; Shea, Herbert R
2010-01-01
This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
焉石; 陈永欣; 李尚滨; 纠延红; 杨俊
2016-01-01
This study analyzes the necessary procedures that should be followed in the reliability and validity testing of a scale in sports science research, using literature review and logical analysis. The results show that the standard procedure is, in order: item analysis, factor analysis, reliability testing, and tests of convergent and discriminant validity. Only after a scale passes each of these screenings can subsequent inferential statistical analysis be conducted.
Bendell, A
1986-01-01
Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.
Ritchie, R O; Lubock, P
1986-05-01
Projected fatigue life analyses are performed to estimate the endurance of a cardiac valve prosthesis under physiological environmental and mechanical conditions. The analyses are conducted using both the classical stress-strain/life approach and the fracture mechanics-based damage-tolerant approach, and provide estimates of expected life in terms of initial flaw sizes which may pre-exist in the metal before the valve enters service. The damage-tolerant analysis is further supplemented by consideration of the question of "short cracks," a developing area in metal fatigue research not commonly applied in standard engineering design practice.
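A damage-tolerant life estimate of this kind typically integrates a Paris-type crack-growth law, da/dN = C·(ΔK)^m, from an assumed initial flaw size to a critical size. The sketch below is generic; the constants, stress range, and geometry factor Y are illustrative assumptions, not values from the study:

```python
import math

def paris_life(a0, af, dsigma, C, m, Y=1.0, steps=10000):
    """Cycles to grow a crack from a0 to af (meters) under stress range
    dsigma (MPa), integrating the Paris law da/dN = C * (dK)**m with
    dK = Y * dsigma * sqrt(pi * a); simple left-endpoint quadrature."""
    n, a = 0.0, a0
    da = (af - a0) / steps
    for _ in range(steps):
        dK = Y * dsigma * math.sqrt(math.pi * a)
        n += da / (C * dK ** m)
        a += da
    return n

# Hypothetical: 1 mm initial flaw grown to 10 mm at 100 MPa stress range.
cycles = paris_life(0.001, 0.010, 100.0, 1e-11, 3.0)
```

As expected from the abstract's framing, the predicted life is very sensitive to the assumed initial flaw size: larger pre-existing flaws yield fewer cycles to failure.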
Beretta, Giangiacomo [Istituto di Chimica Farmaceutica e Tossicologica 'Pietro Pratesi', Faculty of Pharmacy, University of Milan, via Mangiagalli 25, 20133 Milan (Italy)], E-mail: giangiacomo.beretta@unimi.it; Caneva, Enrico [Ciga - Centro Interdipartimentale Grandi Apparecchiature, University of Milan, via Golgi 19, 20133 Milan (Italy); Regazzoni, Luca; Bakhtyari, Nazanin Golbamaki; Maffei Facino, Roberto [Istituto di Chimica Farmaceutica e Tossicologica 'Pietro Pratesi', Faculty of Pharmacy, University of Milan, via Mangiagalli 25, 20133 Milan (Italy)
2008-07-14
The aim of this work was to establish an analytical method for identifying the botanical origin of honey, as an alternative to conventional melissopalynological, organoleptic and instrumental methods (gas chromatography coupled to mass spectrometry (GC-MS) and high-performance liquid chromatography (HPLC)). The procedure is based on the ¹H nuclear magnetic resonance (NMR) profile coupled, when necessary, with electrospray ionisation-mass spectrometry (ESI-MS) and two-dimensional NMR analyses of solid-phase extraction (SPE)-purified honey samples, followed by chemometric analyses. Extracts of 44 commercial Italian honeys from 20 different botanical sources were analyzed. Honeydew, chestnut and linden honeys showed constant, specific, well-resolved resonances, suitable for use as markers of origin. Honeydew honey contained the typical resonances of an aliphatic component, very likely deriving from the plant phloem sap or excreted into it by sap-sucking aphids. Chestnut honey contained the typical signals of kynurenic acid and some structurally related metabolites. In linden honey the ¹H NMR profile gave strong signals attributable to the monoterpene derivative cyclohexa-1,3-diene-1-carboxylic acid (CDCA) and to its 1-O-β-gentiobiosyl ester (CDCA-GBE). These markers were not detectable in the other honeys, except for the less common nectar honey from rosa mosqueta. We compared and analyzed the data by multivariate techniques. Principal component analysis found different clusters of honeys based on the presence of these specific markers. The results, although obviously only preliminary, suggest that the ¹H NMR profile (with HPLC-MS analysis when necessary) can be used as a reference framework for identifying the botanical origin of honey.
Paradies, Guglielmo; Zullino, Francesca; Orofino, Antonio; Leggio, Samuele
2014-01-01
Extragonadal teratomas are rare tumors in neonates and infants and can sometimes show unusual, distinctive features, such as an uncommon location, a sometimes acute clinical presentation, and a "fetiform" histotype. We have extrapolated, from our entire experience with teratomas, 4 unusual cases, mostly operated on as emergencies; 2 of them were treated just after birth. The aim of this paper is to report the clinical and pathological findings, to evaluate the surgical approach, and to assess the long-term biological behaviour in these cases, in the light of survival and current insights reported in the literature. The Authors reviewed the most significant clinical, laboratory, radiologic, and pathologic findings (Tables I and II), surgical procedures, and early and long-term results in 4 children, 1 male and 3 females (M/F ratio: 1/3), suffering from extragonadal teratomas located in the temporo-zygomatic region of the head (Case 1, Fig. 1), the retroperitoneal space (Case 2, Fig. 2), the liver (Case 3, Figs. 3-5), and the kidney (Case 4, Figs. 6-7), respectively. Of the 4 patients, 2 were treated neonatally (1 teratoma of the head, 1 retroperitoneal teratoma). A prenatal diagnosis had already been made in 2 of the 4 patients, between the 2nd and 3rd trimester of pregnancy. All the infants were born by scheduled caesarean section in a tertiary care hospital and were immediately referred to the N.I.C.U. Because of a mostly acute clinical presentation, the 4 patients were then referred to the surgical unit at different ages: 7 days, 28 days, 7 months, and 4 years, respectively. The initial clinical presentation (Table II) was consistent with the site of the mass and/or its side effects. The 2 newborns (Cases 1 and 2), with a prenatally diagnosed mass located in the temporo-zygomatic region and in the abdominal cavity respectively, already displayed at birth a mass with a tendency to further growth. The symptoms and signs described to the primary care physician by the parents of the 2
The rating reliability calculator
Solomon David J
2004-04-01
Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program uploads them to the server to calculate the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
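The Spearman-Brown prophecy step mentioned above follows directly from the formula r_k = k·r₁ / (1 + (k − 1)·r₁); a minimal sketch:

```python
def spearman_brown(r_single, k):
    """Predicted reliability of the average of k ratings, given the
    reliability r_single of a single rating (Spearman-Brown prophecy)."""
    return k * r_single / (1.0 + (k - 1.0) * r_single)

# e.g. one judge with reliability 0.40; averaging over 4 judges:
r4 = spearman_brown(0.40, 4)
```

Averaging more judges always raises the predicted reliability, with diminishing returns as k grows.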
Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials
Kushary, Debashis; Kulkarni, Pandurang M.
1995-01-01
We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effects, repeated measurements, and the observation of many characteristics simultaneously. We develop the maximum likelihood estimator (MLE) and uniform minimum variance unbiased estimator (UMVUE) of the reliability, which in clinical trial studies could be interpreted as the chance of increased side effects under one procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situations we develop simple-to-use lower confidence bounds for the reliability. Further, we consider the cases when both stress and strength constitute time-dependent processes. We define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and also to compare the MLE and the UMVUE.
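As a minimal illustration of what a stress-strength reliability estimate looks like, here is a plug-in sketch under an assumed independent-normal model (a simplification, not the paper's MLE/UMVUE for dependent observations):

```python
import math
import statistics

def stress_strength_reliability(strength, stress):
    """Plug-in estimate of R = P(strength > stress) assuming independent,
    approximately normal samples:
    R = Phi((mean_Y - mean_X) / sqrt(var_Y + var_X))."""
    my, mx = statistics.fmean(strength), statistics.fmean(stress)
    vy, vx = statistics.pvariance(strength), statistics.pvariance(stress)
    z = (my - mx) / math.sqrt(vy + vx)
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

When the two samples have the same distribution the estimate is 0.5, i.e. neither procedure dominates.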
Comparison between Calibration Procedure and Econometric Estimation
周焯华; 张宗益; 欧阳
2001-01-01
Two methods of estimating parameters in computable general equilibrium (CGE) models are introduced and compared: the calibration procedure and econometric estimation, with the advantages and disadvantages of each discussed. The conclusions are: parameter estimation in a CGE model should combine the calibration procedure with econometric estimation; parameters that characterize agents' behaviour and strongly influence the results, such as the output elasticity with respect to labor input, the marginal expenditure share for households, and the price elasticity of export demand, are estimated econometrically; the other parameters of the CGE model can be obtained by the calibration procedure from the constructed social accounting matrix (SAM).
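Calibration, in the sense used here, means choosing a parameter so that the model exactly reproduces the base-year data in the SAM. A one-line sketch for a Cobb-Douglas production function (the function name and payment figures are illustrative assumptions):

```python
def calibrate_cobb_douglas_share(labor_payment, capital_payment):
    """Calibration step: pick the output elasticity of labor (alpha) so
    that a Cobb-Douglas production function exactly reproduces the
    base-year factor payments recorded in the social accounting matrix."""
    return labor_payment / (labor_payment + capital_payment)

# e.g. base-year SAM records 60 units paid to labor, 40 to capital:
alpha = calibrate_cobb_douglas_share(60.0, 40.0)
```

In contrast, an econometric estimate of the same elasticity would be fit to a time series of observations rather than to a single base year, which is why the abstract recommends reserving econometrics for the behaviorally critical parameters.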
K. Ishijima
2015-07-01
This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.
2015-12-01
This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
François Pimont
2015-06-01
Leaf biomass distribution is a key factor for modeling energy and carbon fluxes in forest canopies and for assessing fire behavior. We propose a new method to estimate 3D leaf bulk density distribution, based on a calibration of indices derived from T-LiDAR. We applied the method to four contrasted plots in a mature Quercus pubescens forest. Leaf bulk densities were measured inside 0.7 m-diameter spheres, referred to as Calibration Volumes. Indices were derived from LiDAR point clouds and calibrated over the Calibration Volume bulk densities. Several indices were proposed and tested to account for noise resulting from mixed pixels and other theoretical considerations. The best index and its calibration parameter were then used to estimate leaf bulk densities at the grid nodes of each plot. These LiDAR-derived bulk density distributions were used to estimate bulk density vertical profiles and loads, which above four meters compared well with those assessed by the classical inventory-based approach. Below four meters, the LiDAR-based approach overestimated bulk densities since no distinction was made between wood and leaf returns. The results of our method are promising since they demonstrate the possibility of assessing bulk density on small plots at a reasonable operational cost.
Fagbeja, Mofoluso A; Hill, Jennifer L; Chatterton, Tim J; Longhurst, James W S
2015-02-01
An assessment of the reliability of the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) satellite sensor measurements to interpolate tropospheric concentrations of carbon monoxide, considering the low-latitude climate of the Niger Delta region in Nigeria, was conducted. Monthly SCIAMACHY carbon monoxide (CO) column measurements from January 2003 to December 2005 were interpolated using the ordinary kriging technique. The spatio-temporal variations observed in the reliability were based on proximity to the Atlantic Ocean, seasonal variations in the intensities of rainfall and relative humidity, the presence of dust particles from the Sahara desert, industrialization in Southwest Nigeria, and biomass burning during the dry season in Northern Nigeria. Spatial reliabilities of 74 and 42 % are observed for the inland and coastal areas, respectively. Temporally, average reliabilities of 61 and 55 % occur during the dry and wet seasons, respectively. Reliability in the inland and coastal areas was 72 and 38 % during the wet season, and 75 and 46 % during the dry season, respectively. Based on the results, the WFM-DOAS SCIAMACHY CO data product used for this study is relevant to the assessment of CO concentrations in developing countries within the low latitudes that cannot afford monitoring infrastructure due to the high costs required. Although the SCIAMACHY sensor is no longer available, it provided cost-effective, reliable and accessible data that could support air quality assessment in developing countries.
Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H
2007-11-01
Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provided normative data for both American and Canadian children aged 6 to 16 years old. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; 1 algorithm only included demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ significantly correlated with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6-16 years old. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish these algorithms as a premorbid estimation procedure.
Behr, Th.M.; Gotthardt, M.; Behe, M. [Marburg Univ. (Germany). Dept. of Nuclear Medicine; Becker, W. [Goettingen Univ. (Germany). Dept. of Nuclear Medicine
2002-04-01
Simple and reliable methodologies for the radioiodination of proteins and peptides are described. The labeling systems are easy to assemble and capable of radioiodinating any protein or, with slight modifications, any peptide (molecular mass 1000-300,000) from kBq to GBq levels of activity for use in diagnosis and/or therapy. Furthermore, the procedures are feasible in any nuclear medicine department. Gigabecquerel amounts of activity can be handled safely. The most favored iodination methodology relies on the Iodogen system, a mild oxidizing agent without reducing agents. Thus, protein degradation is minimized. Labeling yields are between 60 and 90%, and immunoreactivities remain ≥85%. Other radioiodination methods (chloramine-T, Bolton-Hunter) are described and briefly discussed.
An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.
2015-01-01
Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.
2014-01-01
Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
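The shrinkage at the heart of EB value-added estimation is a precision-weighted average of a teacher's noisy raw estimate and the grand mean; the sketch below uses illustrative variable names and assumes known variance components:

```python
def eb_shrink(raw_effect, n_students, var_teacher, var_error, grand_mean=0.0):
    """Empirical-Bayes estimate of a teacher effect: shrink the noisy raw
    estimate toward the grand mean. lam is the reliability of the raw
    estimate (between-teacher variance over total variance of the mean)."""
    lam = var_teacher / (var_teacher + var_error / n_students)
    return lam * raw_effect + (1.0 - lam) * grand_mean
```

Teachers with many students (precise raw estimates) are shrunk little; teachers with few students are pulled strongly toward the mean, which is exactly why EB can change rankings relative to the raw estimates.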
McDonald, Lyman L.; Garner, Gerald W.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.
1999-01-01
The U.S. Marine Mammal Protection Act (MMPA) and International Agreement on the Conservation of Polar Bears mandate that boundaries and sizes of polar bear (Ursus maritimus) populations be known so they can be managed at optimum sustainable levels. However, data to estimate polar bear numbers for the Chukchi/Bering Sea and Beaufort Sea populations in Alaska are limited. We evaluated aerial line transect methodology for assessing the size of these Alaskan polar bear populations during pilot studies in spring 1987 and summer 1994. In April and May 1987 we flew 12,239 km of transect lines in the northern Bering, Chukchi, and western Beaufort seas. In June 1994 we flew 6,244 km of transect lines in a primary survey unit using a helicopter, and 5,701 km of transect lines in a secondary survey unit using a fixed-wing aircraft in the Beaufort Sea. We examined visibility bias in aerial transect surveys, double counts by independent observers, single-season mark-resight methods, the suitability of using polar bear sign to stratify the study area, and adaptive sampling methods. Fifteen polar bear groups were observed during the 1987 study. The probability of detecting bears decreased with increasing perpendicular distance from the transect line, and the probability of detecting polar bear groups likely increased with increasing group size. We estimated population density in high-density areas to be 446 km²/bear. In 1994, 15 polar bear groups were observed by independent front- and rear-seat observers on transect lines in the primary survey unit. Density estimates ranged from 284 km²/bear to 197 km²/bear depending on the model selected. Low polar bear numbers scattered over large areas of polar ice in 1987 indicated that spring is a poor time to conduct aerial surveys. Based on the 1994 survey we determined that ship-based helicopter or land-based fixed-wing aerial surveys conducted at the ice edge in late summer-early fall may produce robust density estimates for polar bear
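Line-transect density estimation of the kind evaluated here models detection probability as declining with perpendicular distance from the line. A minimal half-normal sketch (σ is a hypothetical detection-scale parameter, not a value fitted in the study):

```python
import math

def effective_half_width(sigma):
    """Effective strip half-width mu for a half-normal detection function
    g(x) = exp(-x**2 / (2 * sigma**2)): the integral of g over x >= 0."""
    return sigma * math.sqrt(math.pi / 2.0)

def transect_density(n_groups, total_line_km, sigma_km):
    """Conventional line-transect estimator: D = n / (2 * L * mu),
    in groups per km^2."""
    mu = effective_half_width(sigma_km)
    return n_groups / (2.0 * total_line_km * mu)

# Hypothetical: 15 groups seen along 12,239 km of line, sigma = 1 km.
d = transect_density(15, 12239.0, 1.0)
```

The estimator scales survey effort (2·L·μ) into an effectively searched area, so a wider detection function (larger σ) yields a lower density for the same count.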
Cook, Troy A.
2013-01-01
Estimated ultimate recoveries (EURs) are a key component in determining productivity of wells in continuous-type oil and gas reservoirs. EURs form the foundation of a well-performance-based assessment methodology initially developed by the U.S. Geological Survey (USGS; Schmoker, 1999). This methodology was formally reviewed by the American Association of Petroleum Geologists Committee on Resource Evaluation (Curtis and others, 2001). The EUR estimation methodology described in this paper was used in the 2013 USGS assessment of continuous oil resources in the Bakken and Three Forks Formations and incorporates uncertainties that would not normally be included in a basic decline-curve calculation. These uncertainties relate to (1) the mean time before failure of the entire well-production system (excluding economics), (2) the uncertainty of when (and if) a stable hyperbolic-decline profile is revealed in the production data, (3) the particular formation involved, (4) relations between initial production rates and a stable hyperbolic-decline profile, and (5) the final behavior of the decline extrapolation as production becomes more dependent on matrix storage.
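The basic decline-curve calculation this methodology builds on starts from the Arps hyperbolic rate equation, q(t) = qi / (1 + b·Di·t)^(1/b). A sketch with illustrative parameters (the USGS method layers the listed uncertainties on top of this):

```python
def arps_rate(qi, Di, b, t):
    """Arps hyperbolic decline: rate at time t given initial rate qi,
    initial decline Di, and hyperbolic exponent 0 < b <= 1."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def eur_to_limit(qi, Di, b, q_limit, dt=0.01):
    """Cumulative production until the rate falls to an economic limit,
    by crude time stepping (a stand-in for the analytic EUR integral)."""
    t, cum, q = 0.0, 0.0, qi
    while q > q_limit:
        cum += q * dt
        t += dt
        q = arps_rate(qi, Di, b, t)
    return cum

# Hypothetical well: qi = 100 units/yr, Di = 0.5/yr, b = 1, limit = 10.
eur = eur_to_limit(100.0, 0.5, 1.0, 10.0)
```

For b = 1 the cumulative integral has the closed form (qi/Di)·ln(qi/q), so the time-stepped value above should sit near 200·ln(10) ≈ 461.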
Evans David
2012-10-01
Background Directed acyclic graphs (DAGs) are an effective means of presenting expert-knowledge assumptions when selecting adjustment variables in epidemiology, whereas the change-in-estimate procedure is a common statistics-based approach. As DAGs imply specific empirical relationships which can be explored by the change-in-estimate procedure, it should be possible to combine the two approaches. This paper proposes such an approach, which aims to produce well-adjusted estimates for a given research question, based on plausible DAGs consistent with the data at hand, combining prior knowledge and standard regression methods. Methods Based on the relationships laid out in a DAG, researchers can predict how a collapsible estimator (e.g. risk ratio or risk difference) for an effect of interest should change when adjusted on different variable sets. Implied and observed patterns can then be compared to detect inconsistencies and so guide adjustment-variable selection. Results The proposed approach involves i. drawing up a set of plausible background-knowledge DAGs; ii. starting with one of these DAGs as a working DAG, identifying a minimal variable set, S, sufficient to control for bias on the effect of interest; iii. estimating a collapsible estimator adjusted on S, then adjusted on S plus each variable not in S in turn (“add-one pattern”) and then adjusted on the variables in S minus each of these variables in turn (“minus-one pattern”); iv. checking the observed add-one and minus-one patterns against the pattern implied by the working DAG and the other prior DAGs; v. reviewing the DAGs, if needed; and vi. presenting the initial and all final DAGs with estimates. Conclusion This approach to adjustment-variable selection combines background-knowledge and statistics-based approaches using methods already common in epidemiology and communicates assumptions and uncertainties in a standardized graphical format. It is probably best suited to
Multidisciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
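At the most elementary level, system reliability combines component reliabilities through series and parallel structures. The sketch below shows only these basic building blocks under an independence assumption; NESSUS's fast probability integration and progressive failure analysis are far more sophisticated:

```python
def series_reliability(component_reliabilities):
    """A series system works only if every component works
    (independent components assumed)."""
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

def parallel_reliability(component_reliabilities):
    """A parallel (redundant) system fails only if every component fails
    (independent components assumed)."""
    q = 1.0
    for ri in component_reliabilities:
        q *= 1.0 - ri
    return 1.0 - q
```

A multidisciplinary system such as the heat exchanger example mixes failure modes from several disciplines into one such system model, which is why a discipline-agnostic reliability engine is attractive.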
Hocine Mzad
2015-09-01
Several techniques have been developed over time for measuring the heat and temperatures generated in various manufacturing processes and tribological applications. Each technique has its own advantages and disadvantages. The appropriate technique for temperature measurement depends on the application under consideration as well as the available measurement tools. This paper presents a procedure for simple and accurate determination of the time-varying heat flux at the workpiece-tool interface of three different metals under known cutting conditions. A portable infrared thermometer is used for surface temperature measurements. A spline-smoothing interpolation of the surface temperature history enables determination of the local heat flux produced during stock removal. The measured temperature is represented by a third-order spline approximation. Nonetheless, the accuracy of polynomial interpolation depends on how close the interpolated points are; an increase in degree cannot be used to increase the accuracy. Although the data analysis is relatively complicated, the computing time is very small.
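The smoothing-and-differentiation step can be illustrated with a toy polynomial fit standing in for the paper's third-order spline. The synthetic temperature history and the effective heat capacity C_eff are assumptions for illustration only:

```python
import numpy as np

# Toy temperature history (s, degC); in practice these would be the
# portable infrared thermometer readings.
t = np.linspace(0.0, 10.0, 21)
T = 20.0 + 3.0 * t - 0.1 * t**2        # synthetic measurements

# Third-order polynomial fit standing in for the spline smoothing step.
coef = np.polyfit(t, T, 3)
dTdt = np.polyval(np.polyder(coef), t)  # smoothed heating rate dT/dt

# With an assumed effective heat capacity per unit area C_eff
# (J m^-2 K^-1), a crude surface heat-flux estimate is q = C_eff * dT/dt.
C_eff = 5.0e3                            # hypothetical value
q = C_eff * dTdt
```

Differentiating the smoothed fit rather than the raw readings is the point of the spline step: raw finite differences would amplify measurement noise.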
Sima, Jingke; Cao, Xinde; Zhao, Ling; Luo, Qishi
2015-11-01
In this study, Pb(NO3)2-, PbSO4-, or PbCO3-contaminated soils were treated with triple super phosphate (TSP) or phosphate rock (PR) and then subjected to the toxicity characteristic leaching procedure (TCLP) to assess Pb leachability. Soluble TSP resulted in the transformation of Pb into insoluble Pb phosphate precipitates in all contaminated soils, and the transformation increased with extended leaching times. Consequently, Pb concentrations in the TCLP leachates treated with TSP were reduced by 97.3-99.7% compared with the untreated soils, and Pb leaching decreased over the extraction time and did not reach equilibrium even after 96 h of extraction. Precipitation of Pb phosphate minerals in the less soluble PR-treated soil was limited, and Pb leaching was controlled by the dissolution of the Pb compounds, resulting in elevation of Pb in the TCLP leachate. Pb leaching continued to increase with time due to continuous dissolution of PbSO4 and PbCO3. The results indicated that Pb leaching is kinetically controlled by either Pb compound dissolution or phosphate mineral formation. The standard TCLP test using a designated 18 h incubation time can overestimate the leachability of Pb in soils contaminated with lead and amended with soluble TSP and underestimate the leachability of Pb in soils contaminated with Pb and amended with less soluble PR. Therefore, wide use of TCLP for assessing Pb leachability in all contaminated soils is insufficient, and development of a site-specific evaluation method is urgently needed.
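The kinetic argument above, that a fixed 18 h TCLP endpoint can miss the ultimate leachability when dissolution is slow (the PR case; the TSP case runs the other way as precipitation continues), can be illustrated with a simple first-order approach-to-equilibrium model. The rate constants and equilibrium concentration below are purely illustrative, not fit to the paper's data:

```python
import math

def leached(t_h, c_eq, k):
    """First-order approach to leaching equilibrium: C(t) = C_eq * (1 - e^(-k*t))."""
    return c_eq * (1.0 - math.exp(-k * t_h))

# Hypothetical rate constants (per hour): a fast-dissolving Pb compound
# vs. a slow-dissolving one, with the same 10 mg/L equilibrium.
fast, slow = 0.5, 0.02
c18_fast, c96_fast = leached(18, 10.0, fast), leached(96, 10.0, fast)
c18_slow, c96_slow = leached(18, 10.0, slow), leached(96, 10.0, slow)
# With slow kinetics, 18 h captures only a fraction of ultimate leaching.
print(round(c18_slow / c96_slow, 2))
```

For the fast case the 18 h value is already at equilibrium; for the slow case it recovers only about a third of the 96 h value, which is the mechanism behind the paper's call for site-specific evaluation.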
Lazzaroni, Massimo
2012-01-01
This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, illustrates the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be
Barroco, L S A; Freitas, C E C; Lima, Á C
2017-08-17
The effect of catch-and-release fishing on the survival of peacock bass (Cichla spp.) was evaluated by comparing two types of artificial bait (jig and shallow-diver plugs) and two types of post-catch confinement. Two experiments were conducted during the periods January-February and October-November 2012 in the Unini River, a right-bank tributary of the Negro River. In total, 191 peacock bass were captured. Both groups of fish were subjected to experimental confinement (collective and individual) for three days. Additionally, 11 fish were tagged with radio transmitters for telemetry monitoring. Mortality rate was estimated as the percentage of dead individuals for each type of bait and confinement. For peacock bass caught with jig baits, mortality was zero. The corresponding figure for shallow-diver bait was 1.66% for fish in collective confinement, 18.18% for fish monitored by telemetry, and 0% for individuals confined individually. Our results show low post-release mortality rates for peacock bass. Furthermore, neither the type of confinement nor the type of bait had a statistically significant influence on mortality rates. While future studies could include other factors in the analysis, our results show that catch-and-release fishing results in low mortality rates.
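With mortality counts this small, significance is typically checked with an exact test on the 2x2 table of deaths by bait type. A one-sided Fisher exact p-value can be computed from the hypergeometric tail; the counts below are hypothetical, chosen only to be in the spirit of the study (the paper does not report per-bait sample sizes here):

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    P(deaths in group 1 >= a) under the null of equal mortality."""
    n1, n2, m = a + b, c + d, a + c          # row totals and total deaths
    total = comb(n1 + n2, m)
    return sum(comb(n1, x) * comb(n2, m - x)
               for x in range(a, min(n1, m) + 1)) / total

# Hypothetical split of the 191 fish: 1 death / 59 survivors on
# shallow-diver plugs vs. 0 deaths / 131 survivors on jigs.
p = fisher_one_sided(1, 59, 0, 131)
print(p > 0.05)   # no significant bait effect at these counts
```

With a single death in the whole table, the exact p-value is simply the share of fish in the plug group (60/191), far above any conventional threshold, which matches the paper's non-significance finding qualitatively.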
师义民; 魏玲; 肖华勇
2001-01-01
When applying FPA (failure probability analysis) to estimating the reliability performances of a cold standby series system, it suffers from two shortcomings: (1) the parameters of the life distribution for all elements must be known constants; (2) FPA is not satisfactory for the small-sample case. We overcome these two shortcomings with EBE (empirical Bayes estimation) and MLE (maximum likelihood estimation). Before EBE can be used, we first obtain the Bayes estimation of the reliability performances for the cold standby series system. We then obtain the estimated reliability performances by EBE and MLE, giving point estimates of the system failure rate, reliability function, and mean life. Finally, we compare the MLE results with the EBE results by means of Monte Carlo simulation. The results show that the accuracy of the EBE is better than that of the MLE. The method proposed in this paper can be used to analyze the reliability of cold standby series systems in mechanical and electrical appliances.
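The Monte Carlo comparison described above can be sketched for the simplest case of an exponential failure rate. The Gamma(1, 1)-prior posterior mean below is a generic stand-in for the paper's empirical Bayes estimator (the paper's actual prior construction is not reproduced here); at small sample sizes its shrinkage typically beats the MLE in mean squared error:

```python
import random

random.seed(7)
true_rate, n, trials = 0.5, 5, 20000         # small sample, many replications

def mle(sample):
    """MLE of an exponential failure rate: n / sum(lifetimes)."""
    return len(sample) / sum(sample)

def bayes_like(sample, a=1.0, b=1.0):
    """Gamma(a, b)-prior posterior mean (a stand-in for the empirical
    Bayes estimator in the paper): (n + a) / (sum(lifetimes) + b)."""
    return (len(sample) + a) / (sum(sample) + b)

mse_mle = mse_b = 0.0
for _ in range(trials):
    s = [random.expovariate(true_rate) for _ in range(n)]
    mse_mle += (mle(s) - true_rate) ** 2
    mse_b += (bayes_like(s) - true_rate) ** 2
print(mse_b / trials < mse_mle / trials)     # shrinkage wins at small n
```

The MLE is both biased upward (E[n/sum X] = n*lambda/(n-1)) and heavy-tailed at n = 5; the extra constants in the Bayes-style denominator tame the small-sum replications, which is the small-sample advantage the abstract reports.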
Kuhn, Gerhard; Nickless, R.C.
1994-01-01
Part of the storage space of Pueblo Reservoir consists of a 65,950 acre-foot joint-use pool (JUP) that can be used to provide additional conservation capacity from November 1 to April 14; however, the JUP must be evacuated by April 15 and used only for flood-control capacity until November 1. A study was completed to determine if the JUP possibly could be used for conservation storage for any number of days from April 15 through May 14 under certain hydrologic conditions. The methods of the study were: (1) frequency analysis of recorded daily mean discharge data for streamflow-gaging stations upstream and downstream from Pueblo Reservoir, and (2) implementation of the extended streamflow prediction (ESP) procedure for the Arkansas River basin upstream from the reservoir. The frequency analyses enabled estimation of daily discharges at selected exceedance probabilities (EPs), including the 0.01 EP that was used in design of the flood-storage capacity of Pueblo Reservoir. The ESP procedure enabled probabilistic forecasts of inflow volume to the reservoir for April 15 through May 14. Daily discharges derived from the frequency analyses were routed through Pueblo Reservoir to estimate evacuation dates of the JUP for different reservoir inflow volumes; the estimates indicated a relation between the inflow volume and the JUP evacuation date. To apply the study results, only an ESP forecast of the April 15-May 14 reservoir inflow volume is needed. Study results indicate the JUP possibly could be used as late as May 5 depending on the forecast inflow volume.
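The frequency-analysis step, estimating the discharge at a chosen exceedance probability from a discharge record, can be sketched with the standard Weibull plotting position EP = m/(n+1) for the m-th largest of n values. The synthetic Gumbel record below is a hypothetical stand-in for the gaged data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical record of annual peak daily mean discharges (ft^3/s).
peaks = rng.gumbel(loc=3000.0, scale=800.0, size=99)

# Weibull plotting position: EP of the m-th largest of n values is m/(n+1).
ranked = np.sort(peaks)[::-1]
ep = np.arange(1, ranked.size + 1) / (ranked.size + 1)

# Discharge at a chosen exceedance probability, e.g. the 0.01 EP used in
# Pueblo Reservoir's flood-storage design (interpolated from the ranks).
q_001 = np.interp(0.01, ep, ranked)
print(q_001 > np.median(peaks))
```

With 99 values the 0.01 EP falls exactly on the largest observation; with longer or shorter records the interpolation (or a fitted distribution) supplies the estimate between ranks.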
Ramos, J., Jr.; Chapman, E. J.; Weller, N.; Susanto, P.; Childers, D. L.
2014-12-01
Constructed wetland systems (CWS) have been developed to remove nutrients from secondarily treated water, but little is known about their long-term contributions to greenhouse gas (GHG) emissions, especially in arid regions. To increase our knowledge of ecosystem dynamics of CWS in arid regions, we are investigating N2O, CH4, and CO2 fluxes from a system perspective, along a vegetated-shoreline to open-water gradient, and from the wetland plant Typha spp. From 2012 to 2014, we utilized the floating chamber technique to collect fluxes from two transects (nearest to inflow and nearest to outflow) and along two gradient subsites (shoreline and open-water) within the transects. Recently, we began collecting direct fluxes from the vegetation by deploying gas chambers on Typha spp. Fluxes were analyzed using the HMR procedure (Package HMR in R) developed for trace-gas flux estimations when using static chambers. We found significantly higher CH4 and CO2 fluxes in the summer and spring compared to fall and winter months. From the whole-system perspective, we found significantly greater CO2 fluxes at the inflow compared to the outflow transect. Along the shoreline to open-water gradient, N2O fluxes were significantly greater in the open-water subsite, and CH4 fluxes were significantly greater in the vegetated shoreline subsite. These differences may be explained by the presence of vegetation, differences in water column height, or higher nitrate levels in the open-water compared to the shoreline. Results from the vegetation chambers will be presented from two heights of the Typha spp. leaves from plants in each of the four subsites. The analysis of the 288 fluxes using two HMR procedures, default classification (linear, non-linear, and no flux) and the linear regression, resulted in similar seasonal and spatial patterns in the flux estimates. However, the default classification calculated on average 31% for N2O, 67% for CH4, and 34% for CO2 higher flux estimates relative to the fluxes
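The linear-regression branch of an HMR-style static-chamber analysis reduces to the slope of headspace concentration versus time, scaled by chamber geometry. The sketch below uses hypothetical CO2 readings and chamber dimensions (it mirrors the linear case only; HMR's non-linear Hutchinson-Mosier fit is not reproduced):

```python
import numpy as np

def linear_flux(t_min, conc_ppm, chamber_vol_L, area_m2,
                molar_vol_L=24.45, molar_mass_g=44.01):
    """Linear-regression flux from a static floating chamber: slope of the
    headspace concentration converted to grams per m^2 per hour."""
    slope_ppm_min = np.polyfit(t_min, conc_ppm, 1)[0]        # ppm/min
    mol_per_L_min = slope_ppm_min * 1e-6 / molar_vol_L       # mol/(L*min)
    return mol_per_L_min * chamber_vol_L / area_m2 * molar_mass_g * 60.0

# Hypothetical CO2 chamber run: concentration rising ~2 ppm/min.
t = np.array([0.0, 10.0, 20.0, 30.0])
c = np.array([400.0, 421.0, 439.0, 461.0])
flux = linear_flux(t, c, chamber_vol_L=30.0, area_m2=0.25)
print(round(flux, 3))   # g CO2 per m^2 per hour
```

The default-versus-linear discrepancy the abstract reports comes from the non-linear branch: when the chamber headspace starts to saturate, a straight-line fit underestimates the initial slope, so the classified (curved) fits come out systematically higher.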
Garrett, Robert G.; Hall, G.E.M.; Vaive, J.E.; Pelchat, P.
2009-01-01
An objective of the North American Soil Geochemical Landscapes Project is to provide relevant data concerning bioaccessible concentrations of elements in soil to government and other institutions undertaking environmental studies. A protocol was developed that employs a 1-g soil sample agitated overnight with 40 mL of reverse-osmosis de-ionized water for 20 h, and determination of 63 elements following three steps of centrifugation by inductively coupled plasma–atomic emission spectrometry and inductively coupled plasma–mass spectrometry the following day. Statistical summaries are presented for those 48 elements (Ag, Al, As, B, Ba, Be, Br, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Er, Eu, Fe, Ga, Gd, Ge, Hf, Ho, I, K, La, Li, Lu, Mg, Mn, Mo, Na, Nb, Nd, Ni, P, Pb, Pr, Rb, Re, S, Sb, Si, Sm, Sn, Sr, Tb, Ti, Tl, Tm, U, V, W, Y, Yb, Zn, Zr, and pH) for which nd the other from northern Manitoba to the USA–Mexico border. The spatial distribution of three selected elements (Ca, Cu, and Pb) along the two transects is discussed in this paper both as absolute amounts liberated by the leach and expressed as a percentage of the total, or near-total, amounts determined for the elements. The Ca data reflect broad trends in soil parent materials, their weathering, and subsequent soil development. Calcium concentrations are generally found to be lower in the older soils of the eastern USA. The Cu data are higher in the eastern half of the USA, correlating with soil organic C, with which it is sequestered. The Pb data exhibit little regional variability due to natural sources, but are influenced by anthropogenic sources. Based on the Pb results, the percentage water-extractable data demonstrate promise as a tool for identifying anthropogenic components. The soil–water partition (distribution) coefficients, Kds (L/kg), were determined and their relevance to estimating bioaccessible amounts of elements to soil fauna and flora is discussed. Finally, a possible link between W
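The soil-water partition coefficient mentioned at the end follows directly from the protocol's 1 g : 40 mL ratio: Kd (L/kg) is the sorbed concentration (mg/kg) divided by the solution concentration (mg/L). A minimal sketch with hypothetical Pb concentrations:

```python
def kd(total_mg_kg, extract_mg_L, soil_g=1.0, water_mL=40.0):
    """Soil-water partition coefficient Kd (L/kg) from a water leach:
    (total - leached, per kg soil) / concentration in solution."""
    water_L = water_mL / 1000.0
    leached_mg = extract_mg_L * water_L                 # mass in solution
    sorbed_mg_kg = total_mg_kg - leached_mg / (soil_g / 1000.0)
    return sorbed_mg_kg / extract_mg_L

# Hypothetical soil: 25 mg/kg total Pb, 0.010 mg/L in the water extract.
print(round(kd(25.0, 0.010), 1))
```

A large Kd, as here, means almost all of the element stays on the solid phase, i.e. only a small fraction is water-extractable and immediately bioaccessible.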
The Validity of Reliability Measures.
Seddon, G. M.
1988-01-01
Demonstrates that some commonly used indices can be misleading in their quantification of reliability. The effects are most pronounced on gain or difference scores. Proposals are made to avoid sources of invalidity by using a procedure to assess reliability in terms of upper and lower limits for the true scores of each examinee. (Author/JDH)
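Upper and lower limits for an examinee's true score are conventionally built from the standard error of measurement, SEM = SD * sqrt(1 - r_xx). The sketch below uses that standard classical-test-theory construction as a stand-in; it is not necessarily Seddon's exact procedure, and the score values are hypothetical:

```python
import math

def true_score_limits(observed, sd, reliability, z=1.96):
    """Classical-test-theory limits for a true score:
    T in [X - z*SEM, X + z*SEM], with SEM = SD * sqrt(1 - r_xx)."""
    sem = sd * math.sqrt(1.0 - reliability)
    return observed - z * sem, observed + z * sem

lo, hi = true_score_limits(observed=72.0, sd=10.0, reliability=0.84)
print(round(lo, 2), round(hi, 2))
```

The interval widens rapidly as reliability falls, which is one way to see why gain and difference scores, whose reliabilities are typically much lower than those of the component tests, are where the distortions the abstract describes are most pronounced.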
叶宝娟; 温忠麟
2012-01-01
Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
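The composite reliability whose standard error is being derived is, for a unidimensional test, omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances). A minimal point-estimate sketch with a hypothetical standardized CFA solution (the Delta-method standard error itself requires the loadings' covariance matrix and is not reproduced here):

```python
def composite_reliability(loadings, error_vars):
    """Composite reliability (McDonald's omega) from a CFA solution:
    omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_vars))

# Hypothetical standardized solution for a 4-item unidimensional test.
lam = [0.7, 0.8, 0.6, 0.75]
theta = [1.0 - l * l for l in lam]      # standardized error variances
omega = composite_reliability(lam, theta)
print(round(omega, 3))
```

The Delta method then propagates the sampling variances of the estimated loadings and error variances through this ratio via its first derivatives, yielding the standard error from which the confidence interval is built.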