Wang, Guoyu; Houkes, Zweitze; Ji, Guangrong; Zheng, Bing; Li, Xin
2003-01-01
This paper presents a new algorithm for estimation-based range image segmentation. Aiming at surface-primitive extraction from range data, we focus on the reliability of the primitive representation in the process of region estimation. We introduce an optimal description of surface primitives, by wh…
Is visual estimation of passive range of motion in the pediatric lower limb valid and reliable?
Dagher Fernand
2009-10-01
Abstract Background Visual estimation (VE) is an essential tool for evaluation of range of motion. Few papers have discussed its validity in children's orthopedic practice. The purpose of our study was to assess the validity and reliability of VE for passive ranges of motion (PROMs) of children's lower limbs. Methods Fifty typically developing children (100 lower limbs) were examined. Visual estimations for PROMs of the hip (flexion, adduction, abduction, internal and external rotations), knee (flexion and popliteal angle) and ankle (dorsiflexion and plantarflexion) were made by a pediatric orthopaedic surgeon (POS) and a 5th-year resident in orthopaedics. A last-year medical student made goniometric measurements. Three weeks later, the same measurements were performed to assess the reliability of visual estimation for each examiner. Results Visual estimations of the POS were highly reliable for hip flexion, hip rotations and popliteal angle (ρc ≥ 0.8). Reliability was good for hip abduction, knee flexion, ankle dorsiflexion and plantarflexion (ρc ≥ 0.7) but poor for hip adduction (ρc = 0.5). Reproducibility for all PROMs was verified. The resident's VE showed high reliability (ρc ≥ 0.8) for hip flexion and popliteal angle. Good correlation was found for hip rotations and knee flexion (ρc ≥ 0.7). Poor results were obtained for ankle PROMs (ρc …). Conclusion The accuracy of VE of passive hip flexion and knee PROMs is high regardless of the examiner's experience. The same accuracy can be found for hip rotations and abduction whenever VE is performed by an experienced examiner. Goniometric evaluation is recommended for passive hip adduction and for ankle PROMs.
Reliability estimation using kriging metamodel
Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
In this study, a new method for reliability estimation using a kriging metamodel is proposed. The kriging metamodel can be determined by an appropriate sampling range and number of samples because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. A first kriging metamodel is built from widely ranged sampling points, and the Advanced First-Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. A second kriging metamodel is then constructed from additional sampling points with an updated sampling range, and Monte Carlo Simulation (MCS) is applied to it to evaluate the reliability. The proposed method is applied to numerical examples, and the results are almost equal to the reference reliability.
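As a rough illustration of the two-stage surrogate idea in this abstract, the sketch below fits a cheap least-squares quadratic surrogate (a simple stand-in for the kriging/DACE metamodel; the limit state and input distributions are invented for illustration) and then runs Monte Carlo simulation on the surrogate instead of the expensive model:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Expensive" limit state: failure when g < 0. Here it is an invented
# quadratic function so the surrogate can reproduce it closely.
def g(x):
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

def design_matrix(x):
    # Full quadratic basis in two variables: 1, x1, x2, x1^2, x2^2, x1*x2
    return np.column_stack([np.ones(len(x)), x, x ** 2,
                            (x[:, 0] * x[:, 1])[:, None]])

# Stage 1: train the surrogate on a widely ranged sampling design
X = rng.uniform(-3, 3, size=(50, 2))
coef, *_ = np.linalg.lstsq(design_matrix(X), g(X), rcond=None)

def g_hat(x):
    return design_matrix(x) @ coef

# Stage 2: Monte Carlo simulation on the cheap surrogate
S = rng.normal(0.0, 1.0, size=(200_000, 2))
pf = np.mean(g_hat(S) < 0.0)        # failure probability from surrogate
pf_ref = np.mean(g(S) < 0.0)        # reference: direct evaluation
```

Because the toy limit state lies in the surrogate's basis, the two failure probabilities coincide; in the paper's setting the second-stage refinement of the sampling range serves to close this gap for genuinely nonpolynomial models.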
Estimation of Bridge Reliability Distributions
Thoft-Christensen, Palle
In this paper it is shown how so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology; therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to …
Bayesian Missile System Reliability from Point Estimates
2014-10-28
Report date: October 2014. This report uses the Maximum Entropy Principle (MEP) to convert point estimates into probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates this approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.
Reliabilities of genomic estimated breeding values in Danish Jersey
Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng;
2012-01-01
In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods … of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for single-nucleotide polymorphism (SNP) markers using the Illumina 54K chip. A Bayesian method was used … index pre-selection only. Averaged across traits, the estimates of reliability of DGVs ranged from 0.20 for validation on the most recent 3 years of bulls up to 0.42 for expected reliabilities. Reliabilities from the cross-validation were on average 0.24. For the individual traits, the reliability …
Reliability Estimates for Power Supplies
Lee C. Cadwallader; Peter I. Petersen
2005-09-01
Failure rates for large power supplies at a fusion facility are critical knowledge needed to estimate the availability of the facility or to set priorities for repairs and spare components. A study of the "failure to operate on demand" and "failure to continue to operate" failure rates has been performed for the large power supplies at DIII-D, which provide power to the magnet coils, the neutral beam injectors, the electron cyclotron heating systems, and the fast wave systems. When one of the power supplies fails to operate, the research program has to be either temporarily changed or halted. If one of the power supplies for the toroidal or ohmic heating coils fails, operations have to be suspended or the research is continued at de-rated parameters until a repair is completed. If one of the power supplies used in the auxiliary plasma heating systems fails, the research is often temporarily changed until a repair is completed. The power supplies are operated remotely and repairs are only performed when the power supplies are off line, so failure of a power supply does not pose any risk to personnel. The DIII-D Trouble Report database was used to determine the number of power supply faults (over 1,700 reports), and tokamak annual operations data supplied the number of shots, operating times, and power supply usage for the DIII-D operating campaigns between mid-1987 and 2004. Where possible, these power supply failure rates from DIII-D will be compared to similar work that has been performed for Joint European Torus equipment. These independent data sets support validation of the fusion-specific failure rate values.
A new simulation estimator of system reliability
Sheldon M. Ross
1994-01-01
A basic identity is proven and applied to obtain new simulation estimators concerning (a) system reliability and (b) a multi-valued system. We show that the variance of this new estimator is often of order α² when the usual raw estimator has variance of order α and α is small. We also indicate how this estimator can be combined with the standard variance reduction techniques of antithetic variables, stratified sampling and importance sampling.
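The antithetic-variables technique named above can be sketched on a hypothetical 2-out-of-3 system (the structure and component probabilities are made up for illustration; this is the standard variance-reduction device, not the paper's identity-based estimator):

```python
import random
import statistics

random.seed(1)

# Hypothetical 2-out-of-3 system: it works if at least two components work.
# Component i works when its uniform draw u_i falls below p[i].
p = [0.95, 0.90, 0.85]

def system_fails(u):
    working = sum(ui < pi for ui, pi in zip(u, p))
    return working < 2

def raw_estimate(n):
    # Plain Monte Carlo indicator average
    return statistics.mean(
        system_fails([random.random() for _ in p]) for _ in range(n))

def antithetic_estimate(n):
    # Pair each uniform vector u with its mirror 1 - u; the two indicator
    # evaluations are negatively correlated for this monotone structure,
    # which lowers the variance of their average.
    total = 0.0
    for _ in range(n // 2):
        u = [random.random() for _ in p]
        total += system_fails(u) + system_fails([1.0 - ui for ui in u])
    return total / n

pf_raw = raw_estimate(100_000)
pf_anti = antithetic_estimate(100_000)
```

The exact failure probability for these numbers is 0.026; both estimators are unbiased for it, with the antithetic version typically showing the smaller spread across repeated runs.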
Mission Reliability Estimation for Repairable Robot Teams
Stephen B. Stancliff
2008-11-01
Many of the most promising applications for mobile robots require very high reliability. The current generation of mobile robots is, for the most part, highly unreliable. The few mobile robots that currently demonstrate high reliability achieve this reliability at a high financial cost. In order for mobile robots to be more widely used, it will be necessary to find ways to provide high mission reliability at lower cost. Comparing alternative design paradigms in a principled way requires methods for comparing the reliability of different robot and robot team configurations. In this paper, we present the first principled quantitative method for performing mission reliability estimation for mobile robot teams. We also apply this method to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Using conservative estimates of the cost-reliability relationship, our results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares.
The reliability of DSM impact estimates
Vine, E.L. [Lawrence Berkeley Lab., CA (United States); Kushler, M.G. [Michigan Public Service Commission, Lansing, MI (United States)
1995-05-01
Demand-side management (DSM) critics continue to question the reliability of DSM program savings, and therefore, the need for funding such programs. In this paper, the authors examine the issues underlying the discussion of the reliability of DSM program savings (e.g., bias and precision) and compare the levels of precision of DSM impact estimates for three utilities. Overall, the precision results from all three companies appear quite similar and, for the most part, demonstrate reasonably good precision levels around DSM savings estimates. They conclude by recommending activities for program managers and evaluators to increase understanding of the factors leading to DSM uncertainty and to reduce the level of DSM uncertainty.
Adaptive Response Surface Techniques in Reliability Estimation
Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard
1993-01-01
Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first-order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second-order polynomial surfaces determined …
Reliability estimates for flawed mortar projectile bodies
Cordes, J.A. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)], E-mail: jennifer.cordes@us.army.mil; Thomas, J.; Wong, R.S.; Carlucci, D. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)
2009-12-15
The Army routinely screens mortar projectiles for defects in safety-critical parts. In 2003, several lots of mortar projectiles had a relatively high defect rate, 0.24%. Before releasing the projectiles, the Army reevaluated the chance of a safety-critical failure. Limit state functions and Monte Carlo simulations were used to estimate reliability. Measured distributions of wall thickness, defect rate, material strength, and applied loads were used with calculated stresses to estimate the probability of failure. The results predicted less than one failure in one million firings. As of 2008, the mortar projectiles have been used without any safety-critical incident.
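The stress-strength style of limit-state Monte Carlo described in this abstract can be sketched as follows; the normal distributions and their parameters are invented stand-ins for illustration, not the measured projectile data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Limit state g = strength - stress; failure when g < 0. The means and
# standard deviations below are hypothetical, chosen only to give a small
# failure probability comparable in spirit to the abstract's result.
strength = rng.normal(620.0, 25.0, n)   # MPa, hypothetical material strength
stress = rng.normal(480.0, 30.0, n)     # MPa, hypothetical firing load stress

pf = np.mean(strength - stress < 0.0)   # estimated probability of failure
```

With independent normals, g is itself normal, so the Monte Carlo estimate can be checked against the closed-form tail probability; the paper's version layers measured distributions for wall thickness and defect rate on top of this basic scheme.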
MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.
Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne
2014-01-01
When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
Estimating a municipal water supply reliability
O.G. Okeola
2015-12-01
The availability and adequacy of water in a river basin determine the design of water resources projects such as water supply. There is a further need to regularly appraise the availability of such a resource for a municipality at a distant future date, to help in articulating a contingency plan to handle its vulnerability. This paper attempts to empirically determine the reliability of the water resource for a municipal water supply. An approach was first developed to estimate the water demand of a municipality that lacks socioeconometric data, using a purpose-specific model. A hydrological assessment of the River Oyun basin was carried out using a Markov model and sequent peak analysis to determine the extent of reliability for future demand. The two models were then applied to the Offa municipality in Kwara State, Nigeria. The findings revealed the reliability and adequacy of the resource up to the year 2020. The need to start exploring a well-coordinated conjunctive use of resources is recommended. The study can serve as an organized baseline for future work that will consider the physiographic characteristics of the basin and climatic dynamics. The findings can be a vital input into the demand management process for long-term sustainable water supply of the town, and by extension of urban townships with similar characteristics.
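Sequent peak analysis, one of the two models named in the abstract, reduces to a short recursion over the inflow record: the required storage is the largest cumulative deficit of inflow below demand. The inflow and demand figures below are illustrative numbers, not River Oyun data:

```python
def sequent_peak(inflows, demand):
    """Return the reservoir storage needed to meet a constant demand.

    k tracks the cumulative deficit so far; it resets toward zero whenever
    inflow exceeds demand, and the answer is the largest deficit seen.
    """
    k = 0.0
    k_max = 0.0
    for q in inflows:
        k = max(0.0, k + demand - q)
        k_max = max(k_max, k)
    return k_max

# Illustrative monthly inflow volumes and a constant monthly demand of 9
inflows = [12, 8, 5, 3, 4, 9, 15, 20, 18, 10, 7, 6]
storage = sequent_peak(inflows, demand=9.0)
```

For this series the deepest deficit builds over the dry months (inflows 5, 3, 4) and peaks at 16 volume units; a demand the inflows always meet needs no storage at all.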
Position estimation from range only measurements
Alleyne, Jason C.
2000-01-01
Approved for public release; distribution is unlimited. In order for a team of several Automated Underwater Vehicles (AUVs), such as the ARIES, to operate cooperatively, operators require a cost-effective position estimation method. Range-only measurement (ROM) position estimation provides this, and a means for the AUVs to identify each other's position. Position estimation usually requires at least two range measurements from known points to solve for a vessel's position. However, under cer…
DISTRIBUTED MONITORING SYSTEM RELIABILITY ESTIMATION WITH CONSIDERATION OF STATISTICAL UNCERTAINTY
Yi Pengxing; Yang Shuzi; Du Runsheng; Wu Bo; Liu Shiyuan
2005-01-01
Taking into account the whole system structure and the uncertainty of component reliability estimation, a system reliability estimation method for distributed monitoring systems, based on probability and statistical theory, is presented. The variance and confidence intervals of the system reliability estimate are obtained by expressing system reliability as a linear sum of products of higher-order moments of component reliability estimates when the number of component or system survivals obeys a binomial distribution. The eigenfunction of the binomial distribution is used to determine the moments of component reliability estimates, and a symbolic matrix which can facilitate the search for explicit system reliability estimates is proposed. Furthermore, a case of application is used to illustrate the procedure, and with the help of this example, issues such as the applicability of this estimation model and measures to improve the system reliability of monitoring systems are discussed.
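A simplified version of this idea, propagating binomial sampling uncertainty from component tests to a series-system reliability estimate, can be sketched with a first-order (delta-method) variance; the test counts are hypothetical, and the paper's moment-based derivation is more general than this sketch:

```python
import math

# Hypothetical independent component tests: (trials, survivals) per component.
tests = [(100, 97), (100, 95), (100, 98)]

# Component reliability estimates and the series-system product estimate
r_hat = [s / n for n, s in tests]
R = math.prod(r_hat)

# First-order variance of a product of independent estimates:
#   Var(R) ≈ R^2 * sum Var(r_i) / r_i^2, with binomial Var(r_i) = r(1-r)/n
var = R ** 2 * sum((r * (1 - r) / n) / r ** 2
                   for (n, s), r in zip(tests, r_hat))

# Normal-approximation 95% confidence interval, clipped to [0, 1]
half = 1.96 * math.sqrt(var)
ci = (max(0.0, R - half), min(1.0, R + half))
```

For these counts the point estimate is 0.97 × 0.95 × 0.98 ≈ 0.903 with an interval of roughly (0.846, 0.960); the width makes plain how much statistical uncertainty survives even a hundred trials per component.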
Optical range and range rate estimation for teleoperator systems
Shields, N. L., Jr.; Kirkpatrick, M., III; Malone, T. B.; Huggins, C. T.
1974-01-01
Range and range rate are crucial parameters which must be available to the operator during remote controlled orbital docking operations. A method was developed for the estimation of both these parameters using an aided television system. An experiment was performed to determine the human operator's capability to measure displayed image size using a fixed reticle or movable cursor as the television aid. The movable cursor was found to yield mean image size estimation errors on the order of 2.3 per cent of the correct value. This error rate was significantly lower than that for the fixed reticle. Performance using the movable cursor was found to be less sensitive to signal-to-noise ratio variation than was that for the fixed reticle. The mean image size estimation errors for the movable cursor correspond to an error of approximately 2.25 per cent in range suggesting that the system has some merit. Determining the accuracy of range rate estimation using a rate controlled cursor will require further experimentation.
Lower bounds to the reliabilities of factor score estimators
Hessen, D.J.|info:eu-repo/dai/nl/256041717
2017-01-01
Under the general common factor model, the reliabilities of factor score estimators may be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score …
Range-based estimation of quadratic variation
Christensen, Kim; Podolskij, Mark
This paper proposes using realized range-based estimators to draw inference about the quadratic variation of jump-diffusion processes. We also construct a range-based test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient...
Estimation of the Reliability of Distributed Applications
Marian Pompiliu CRISTESCU; Laurentiu CIOVICA
2010-01-01
In this paper, reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the requested level of reliability more difficult to achieve. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for complex systems. The …
A Note on Structural Equation Modeling Estimates of Reliability
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Hardware and software reliability estimation using simulations
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
Range-based estimation of quadratic variation
Christensen, Kim; Podolskij, Mark
In this paper, we propose using realized range-based estimation to draw inference about the quadratic variation of jump-diffusion processes. We also construct a new test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient, the test...
Simulator for Software Project Reliability Estimation
Sanjana,
2011-07-01
Several models exist for software development processes, each describing approaches to a variety of tasks or activities that take place during the process. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. IEEE defines reliability as "the ability of a system to perform its required function under stated conditions for a specified period of time." To most software project managers, reliability is equated with correctness, that is, the number of bugs found and fixed. The purpose is to develop a simulator for estimating the reliability of a software project using the PERT approach, keeping in view the criticality index of each task.
Reliability estimation in a multilevel confirmatory factor analysis framework.
Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J
2014-03-01
Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
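For contrast with the paper's multilevel estimates, a plain single-level Cronbach's alpha, the baseline the authors argue can mislead when variance exists at several levels, can be computed directly from an item-score matrix; the tau-equivalent item data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 500, 6

# Simulated tau-equivalent items: one common true score plus unit-variance
# unique error per item (loading 1 on the common factor).
true_score = rng.normal(0.0, 1.0, (n_persons, 1))
items = true_score + rng.normal(0.0, 1.0, (n_persons, n_items))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
k = n_items
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

With these generating values the population alpha is 6(0.5)/(1 + 5(0.5)) ≈ 0.857; the paper's point is that when persons are nested in groups, one such number conflates within-group and between-group reliability, which the multilevel CFA estimates separate.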
Reliability Estimation for Double Containment Piping
L. Cadwallader; T. Pinna
2012-08-01
Double walled or double containment piping is considered for use in the ITER international project and other next-generation fusion device designs to provide an extra barrier for tritium gas and other radioactive materials. The extra barrier improves confinement of these materials and enhances safety of the facility. This paper describes some of the design challenges in designing double containment piping systems. There is also a brief review of a few operating experiences of double walled piping used with hazardous chemicals in different industries. This paper recommends approaches for the reliability analyst to use to quantify leakage from a double containment piping system in conceptual and more advanced designs. The paper also cites quantitative data that can be used to support such reliability analyses.
Range-based estimation of quadratic variation
Christensen, Kim; Podolskij, Mark
This paper proposes using realized range-based estimators to draw inference about the quadratic variation of jump-diffusion processes. We also construct a range-based test of the hypothesis that an asset price has a continuous sample path. Simulated data show that our approach is efficient, the test is well-sized and more powerful than a return-based t-statistic for sampling frequencies normally used in empirical work. Applied to equity data, we show that the intensity of the jump process is not as high as previously reported.
Range-based estimation of quadratic variation
Christensen, Kim; Podolskij, Mark
In this paper, we propose using realized range-based estimation to draw inference about the quadratic variation of jump-diffusion processes. We also construct a new test of the hypothesis that an asset price has a continuous sample path. Simulated data show that our approach is efficient, the test is well-sized and more powerful than a return-based t-statistic for sampling frequencies normally used in empirical work. Applied to equity data, we find that the intensity of the jump process is not as high as previously reported.
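A minimal sketch of the realized range idea: for Brownian motion the expected squared high-low range over an interval is (4 ln 2) times the integrated variance, so a scaled sum of squared interval ranges estimates quadratic variation. The constant-volatility path below is simulated for illustration and ignores the papers' jump and discretization corrections:

```python
import math
import random

random.seed(3)

# Simulate one trading day of a driftless diffusion with volatility sigma,
# split into n_intervals sampling intervals, each observed on a fine grid
# so the high and low approximate the continuous-time range.
sigma = 0.2
n_intervals, steps = 390, 500
dt = 1.0 / n_intervals

realized_range = 0.0
price = 0.0
for _ in range(n_intervals):
    hi = lo = price
    for _ in range(steps):
        price += sigma * math.sqrt(dt / steps) * random.gauss(0.0, 1.0)
        hi, lo = max(hi, price), min(lo, price)
    realized_range += (hi - lo) ** 2

# Scale by lambda_2 = E[range^2] = 4 ln 2 for standard Brownian motion
qv_est = realized_range / (4.0 * math.log(2))
qv_true = sigma ** 2   # integrated variance over the unit interval
```

The estimate lands near sigma² = 0.04, with a modest downward bias because the discretely sampled high and low understate the continuous range; the papers derive the finite-sample scaling constants that remove exactly this bias.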
Reliability of fish size estimates obtained from multibeam imaging sonar
Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.
2013-01-01
Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
A Latent Class Approach to Estimating Test-Score Reliability
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas
2011-01-01
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT)-estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-01-01
In the present study, the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding…
Singularity of Some Software Reliability Models and Parameter Estimation Method
[Anonymous]
2000-01-01
According to the principle that "the failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out conclusions from the fitting results of the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
SA BASED SOFTWARE DEPLOYMENT RELIABILITY ESTIMATION CONSIDERING COMPONENT DEPENDENCE
Su Xihong; Liu Hongwei; Wu Zhibo; Yang Xiaozong; Zuo Decheng
2011-01-01
Reliability is one of the most critical properties of a software system. System deployment architecture is the allocation of system software components on host nodes. Software Architecture (SA)-based software deployment models help to analyze the reliability of different deployments. Though many approaches for architecture-based reliability estimation exist, little work has incorporated the influence of system deployment and hardware resources into reliability estimation. There are many factors influencing system deployment. By translating the multi-dimensional factors into a degree matrix of component dependence, we provide a definition of component dependence and propose a method of calculating the system reliability of deployments. Additionally, the parameters that influence the optimal deployment may change during system execution. The existing software deployment architecture may be ill-suited for the given environment, and the system needs to be redeployed to improve reliability. An approximate algorithm, A*_D, to increase system reliability is presented. When the number of components and host nodes is relatively large, experimental results show that this algorithm can obtain better deployments than stochastic and greedy algorithms.
Maximized Reliability Estimates for Some Research Scales of the MMPI.
Wagner, Edwin E.; And Others
1990-01-01
This study, using data for 200 psychiatric/chemical dependency patients, attempted to justify subscales of the Minnesota Multiphasic Personality Inventory (MMPI). Distributions of all possible split-half correlations for certain research scales of the MMPI revealed negative skewness resulting in spuriously lowered reliability estimates. The scales…
Estimating the Reliability of a Test Containing Multiple Item Formats.
Qualls, Audrey L.
1995-01-01
Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
Sequential Bayesian technique: An alternative approach for software reliability estimation
S Chatterjee; S S Alam; R B Misra
2009-04-01
This paper proposes a sequential Bayesian approach, similar to a Kalman filter, for estimating the reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time, as new failure data become available. The usefulness of the method is demonstrated with some real-life data.
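One minimal way to realize a sequential Bayesian update that tracks a drifting parameter (a simplified stand-in for the paper's Kalman-filter-like scheme, with invented outcome data) is a discounted Beta-Bernoulli filter: old evidence is geometrically forgotten, so the posterior mean can follow reliability growth or decay:

```python
def sequential_beta(outcomes, discount=0.98, a=1.0, b=1.0):
    """Sequentially update a Beta(a, b) belief about per-test reliability.

    Each step first discounts the accumulated pseudo-counts (forgetting
    factor, so the estimate can drift) and then folds in the new pass/fail
    outcome. Returns the posterior-mean reliability after every test.
    """
    history = []
    for ok in outcomes:
        a, b = a * discount, b * discount      # forget old evidence gradually
        a, b = a + ok, b + (1 - ok)            # conjugate Bernoulli update
        history.append(a / (a + b))            # posterior mean reliability
    return history

# Invented test record: early failures, then mostly successes, i.e. growth
outcomes = [0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
est = sequential_beta(outcomes)
```

The estimate starts low after the initial failures and climbs as successes accumulate, which is exactly the "variation of the parameter over time" behavior the abstract highlights; the discount factor plays the role of the filter's process noise.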
Parameter estimation and reliable fault detection of electric motors
Dusan PROGOVAC; Le Yi WANG; George YIN
2014-01-01
Accurate model identification and fault detection are necessary for reliable motor control. Motor-characterizing parameters experience substantial changes due to aging, motor operating conditions, and faults. Consequently, motor parameters must be estimated accurately and reliably during operation. Based on enhanced model structures of electric motors that accommodate both normal and faulty modes, this paper introduces bias-corrected least-squares (LS) estimation algorithms that incorporate functions for correcting estimation bias, forgetting factors for capturing sudden faults, and recursive structures for efficient real-time implementation. Permanent magnet motors are used as a benchmark type for concrete algorithm development and evaluation. Algorithms are presented, their properties are established, and their accuracy and robustness are evaluated by simulation case studies under both normal operations and inter-turn winding faults. Implementation issues from different motor control schemes are also discussed.
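Bias correction aside, the recursive core of such estimators is standard recursive least squares with a forgetting factor. The sketch below shows that core on a hypothetical noiseless toy model y = 2u + 1; the paper's motor-specific model structures and bias-correction functions are not reproduced here.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor `lam`.
    theta : current parameter estimate, shape (n,)
    P     : covariance matrix, shape (n, n)
    phi   : regressor vector; y : new measurement.
    A forgetting factor < 1 discounts old data, which helps the estimator
    track sudden parameter changes such as winding faults."""
    phi = phi.reshape(-1, 1)
    th = theta.reshape(-1, 1)
    err = y - float(phi.T @ th)                    # prediction error
    K = P @ phi / (lam + float(phi.T @ P @ phi))   # gain vector
    th = th + K * err
    P = (P - K @ phi.T @ P) / lam
    return th.ravel(), P

# Identify y = 2*u + 1 from noiseless samples (hypothetical toy model).
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for u in range(1, 20):
    theta, P = rls_update(theta, P, np.array([float(u), 1.0]), 2.0 * u + 1.0)
```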
Objectivity, Reliability, and Validity of Search Engine Count Estimates
Dietmar Janetzko
2008-01-01
Count estimates ("hits") provided by Web search engines have received much attention as a yardstick to measure a variety of phenomena of interest as diverse as, e.g., language statistics, popularity of authors, or similarity between words. Common to these activities is the intention to use Web search engines not only for search but for ad hoc measurement. Using search engine count estimates (SECEs) in this way means that a phenomenon of interest, e.g., the popularity of an author, is conceived of as a measurand, and SECEs are taken to be its quantitative measures. However, the data quality of SECEs has not yet been studied systematically, and concerns have been raised against the use of this kind of data. This article examines the data quality of SECEs focusing on classical goodness criteria, i.e., objectivity, reliability, and validity. The results of a series of studies indicate that, with the exception of Boolean queries that use disjunction or negation, objectivity as well as test-retest reliability and parallel-test reliability of SECEs is good for most types of browsers and search engines examined. Estimation of validity required model development (all-subsets regression), revealing satisfying results by using an explorative approach to feature selection. The findings are discussed in the light of previous objections, and perspectives for using Web search count estimates are delineated.
16 CFR 309.22 - Determining estimated cruising range.
2010-01-01
16 Commercial Practices — Fueled Vehicles, § 309.22 Determining estimated cruising range. (a) Dedicated vehicles. (1) Estimated cruising range values for dedicated vehicles required to comply with the provisions of 40 CFR part 600...
Estimated occupied range of the lesser prairie-chicken
US Fish and Wildlife Service, Department of the Interior — Shown are the current estimated occupied range and the historical range of the Lesser Prairie-Chicken. The current range was updated in January 2011 by the Lesser...
Interobserver reproducibility of the visual estimation of range of motion of the shoulder
Terwee, C.B.; Winter, de A.F.; Scholten, R.J.P.M.; Jans, M.P.; Deville, W.L.J.M.; Schaardenburg, van D.; Bouter, L.M.
2005-01-01
Abstract Terwee CB, de Winter AF, Scholten RJ, Jans MP, Deville W, van Schaardenburg D, Bouter LM. Interobserver reproducibility of the visual estimation of range of motion of the shoulder. Objectives To assess interobserver reproducibility (agreement and reliability) of visually estimated shoulder
Probabilistic confidence for decisions based on uncertain reliability estimates
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Availability and Reliability of FSO Links Estimated from Visibility
M. Tatarko
2012-06-01
This paper is focused on estimating the availability and reliability of FSO systems. FSO stands for Free Space Optics: a system that allows optical transmission between two fixed points, i.e., a last-mile communication system. It is an optical communication system, but the propagation medium is air. This last-mile solution does not require expensive optical fiber, and establishing a connection is very simple. However, there are some drawbacks which adversely affect the quality of service and the availability of the link. A number of phenomena in the atmosphere, such as scattering, absorption, and turbulence, cause large variations in received optical power and laser beam attenuation. The influence of absorption and turbulence can be significantly reduced by an appropriate design of the FSO link, but visibility has the main influence on the quality of the optical transmission channel. Thus, in a typical continental area where rain, snow, or fog occurs, it is important to know their values. This article describes a device for measuring weather conditions and presents an estimation of the availability and reliability of FSO links in Slovakia.
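The link between visibility and attenuation mentioned above is commonly modelled with the Kruse formula. The sketch below computes the atmospheric extinction coefficient from visibility under that model; the 850 nm default wavelength is an illustrative assumption, and the paper's own link-budget model may differ.

```python
import math

def extinction_from_visibility(visibility_km, wavelength_nm=850.0):
    """Atmospheric extinction coefficient (1/km) for an FSO link estimated
    from meteorological visibility via the widely used Kruse model
    (visibility defined at 550 nm with a 2% contrast threshold).
    Multiply by 10/ln(10) ~ 4.343 to express the result in dB/km."""
    V = visibility_km
    if V > 50.0:
        q = 1.6                       # very clear air
    elif V > 6.0:
        q = 1.3                       # haze
    else:
        q = 0.585 * V ** (1.0 / 3.0)  # low visibility (fog, snow, rain)
    return (3.91 / V) * (wavelength_nm / 550.0) ** (-q)
```

As expected, the predicted extinction grows sharply as visibility drops, which is why fog dominates FSO availability estimates.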
Reliable estimation of orbit errors in spaceborne SAR interferometry
Bähr, H.; Hanssen, R.F.
2012-01-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of
Estimated home ranges can misrepresent habitat relationships on patchy landscapes
Mitchell, M.S.; Powell, R.A.
2008-01-01
Home ranges of animals are generally structured by the selective use of resource-bearing patches that comprise habitat. Based on this concept, home ranges of animals estimated from location data are commonly used to infer habitat relationships. Because home ranges estimated from animal locations are largely continuous in space, the resource-bearing patches selected by an animal from a fragmented distribution of patches would be difficult to discern; unselected patches included in the home range estimate would bias an understanding of important habitat relationships. To evaluate potential for this bias, we generated simulated home ranges based on optimal selection of resource-bearing patches across a series of simulated resource distributions that varied in the spatial continuity of resources. For simulated home ranges where selected patches were spatially disjunct, we included interstitial, unselected cells most likely to be traveled by an animal moving among selected patches. We compared characteristics of the simulated home ranges with and without interstitial patches to evaluate how insights derived from field estimates can differ from actual characteristics of home ranges, depending on patchiness of landscapes. Our results showed that contiguous home range estimates could lead to misleading insights on the quality, size, resource content, and efficiency of home ranges, proportional to the spatial discontinuity of resource-bearing patches. We conclude the potential bias of including unselected, largely irrelevant patches in the field estimates of home ranges of animals can be high, particularly for home range estimators that assume uniform use of space within home range boundaries. Thus, inferences about the habitat relationships that ultimately define an animal's home range can be misleading where animals occupy landscapes with patchily distributed resources.
Estimation of Small s-t Reliabilities in Acyclic Networks
Laumanns, Marco
2007-01-01
In the classical s-t network reliability problem a fixed network G is given including two designated vertices s and t (called terminals). The edges are subject to independent random failure, and the task is to compute the probability that s and t are connected in the resulting network, which is known to be #P-complete. In this paper we are interested in approximating the s-t reliability in case of a directed acyclic original network G. We introduce and analyze a specialized version of the Monte-Carlo algorithm given by Karp and Luby. For the case of uniform edge failure probabilities, we give a worst-case bound on the number of samples that have to be drawn to obtain an epsilon-delta approximation, being sharper than the original upper bound. We also derive a variance reduction of the estimator which reduces the expected number of iterations to perform to achieve the desired accuracy when applied in conjunction with different stopping rules. Initial computational results on two types of random networks (direc...
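For contrast with the specialized Karp-Luby estimator analyzed in the paper, the baseline crude Monte Carlo estimator is easy to sketch: sample edge failures independently and test s-t connectivity. The network and failure probability below are hypothetical.

```python
import random
from collections import defaultdict, deque

def st_reliability_mc(edges, p_fail, s, t, samples=20000, seed=1):
    """Crude Monte Carlo estimate of s-t reliability: sample each edge's
    survival independently, then check s-t connectivity by BFS. (The
    Karp-Luby scheme in the paper samples more cleverly; this is the
    baseline estimator it improves on.)"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        adj = defaultdict(list)
        for (u, v) in edges:
            if rng.random() >= p_fail:      # edge survives this trial
                adj[u].append(v)
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        hits += t in seen
    return hits / samples
```

For two parallel s-t edges each failing with probability 0.5, the exact reliability is 1 - 0.5² = 0.75, which the estimate should approach.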
Adie Sam
2011-04-01
Full Text Available Abstract Background The clinimetric properties of knee goniometry are essential to appreciate in light of its extensive use in the orthopaedic and rehabilitative communities. Intra-observer reliability is thought to be satisfactory, but the validity and inter-rater reliability of knee goniometry often demonstrate unacceptable levels of variation. This study tests the validity and reliability of measuring knee range of motion using goniometry and photographic records. Methods Design: Methodology study assessing the validity and reliability of one method ('Marker Method' which uses a skin marker over the greater trochanter and another method ('Line of Femur Method' which requires estimation of the line of femur. Setting: Radiology and orthopaedic departments of two teaching hospitals. Participants: 31 volunteers (13 arthritic and 18 healthy subjects. Knee range of motion was measured radiographically and photographically using a goniometer. Three assessors were assessed for reliability and validity. Main outcomes: Agreement between methods and within raters was assessed using concordance correlation coefficient (CCCs. Agreement between raters was assessed using intra-class correlation coefficients (ICCs. 95% limits of agreement for the mean difference for all paired comparisons were computed. Results Validity (referenced to radiographs: Each method for all 3 raters yielded very high CCCs for flexion (0.975 to 0.988, and moderate to substantial CCCs for extension angles (0.478 to 0.678. The mean differences and 95% limits of agreement were narrower for flexion than they were for extension. Intra-rater reliability: For flexion and extension, very high CCCs were attained for all 3 raters for both methods with slightly greater CCCs seen for flexion (CCCs varied from 0.981 to 0.998. Inter-rater reliability: For both methods, very high ICCs (min to max: 0.891 to 0.995 were obtained for flexion and extension. Slightly higher coefficients were obtained
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-08-01
In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (T_max) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and T_max as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of T_max being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
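For a limit state that is linear in independent normal variables, FORM reduces to a closed-form reliability index and agrees with Monte Carlo simulation. The sketch below illustrates this on a generic g = R - S limit state with hypothetical means and standard deviations, not on the pultrusion model of the paper.

```python
import math
import random

def form_pf_linear(mu_r, sd_r, mu_s, sd_s):
    """FORM failure probability for the linear limit state g = R - S with
    independent normal R (capacity) and S (demand); FORM is exact here."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)  # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)

def mcs_pf_linear(mu_r, sd_r, mu_s, sd_s, n=200000, seed=2):
    """Monte Carlo simulation of the same failure probability."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
                for _ in range(n))
    return fails / n
```

For nonlinear limit states such as the CDOCE and T_max functions of the paper, FORM linearizes at the design point and MCS serves as the reference check, as in the abstract.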
Is biomass a reliable estimate of plant fitness?
Younginger, Brett S.; Sirová, Dagmara; Cruzan, Mitchell B.; Ballhorn, Daniel J.
2017-01-01
The measurement of fitness is critical to biological research. Although the determination of fitness for some organisms may be relatively straightforward under controlled conditions, it is often a difficult or nearly impossible task in nature. Plants are no exception. The potential for long-distance pollen dispersal, likelihood of multiple reproductive events per inflorescence, varying degrees of reproductive growth in perennials, and asexual reproduction all confound accurate fitness measurements. For these reasons, biomass is frequently used as a proxy for plant fitness. However, the suitability of indirect fitness measurements such as plant size is rarely evaluated. This review outlines the important associations between plant performance, fecundity, and fitness. We make a case for the reliability of biomass as an estimate of fitness when comparing conspecifics of the same age class. We reviewed 170 studies on plant fitness and discuss the metrics commonly employed for fitness estimations. We find that biomass or growth rate are frequently used and often positively associated with fecundity, which in turn suggests greater overall fitness. Our results support the utility of biomass as an appropriate surrogate for fitness under many circumstances, and suggest that additional fitness measures should be reported along with biomass or growth rate whenever possible. PMID:28224055
Estimating range of influence in case of missing spatial data
Bihrmann, Kristine; Ersbøll, Annette K
2015-01-01
BACKGROUND: The range of influence refers to the average distance between locations at which the observed outcome is no longer correlated. In many studies, missing data occur, and a popular tool for handling missing data is multiple imputation. The objective of this study was to investigate how the estimated range of influence is affected when 1) the outcome is only observed at some of a given set of locations, and 2) multiple imputation is used to impute the outcome at the non-observed locations. METHODS: The study was based on the simulation of missing outcomes in a complete data set. The range of influence was estimated from a logistic regression model with a spatially structured random effect, modelled by a Gaussian field. Results were evaluated by comparing estimates obtained from complete, missing, and imputed data. RESULTS: In most simulation scenarios, the range estimates were consistent…
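The range of influence is typically read off an empirical semivariogram: the lag at which the semivariance levels off at the sill. Below is a minimal sketch of the binned estimator on hypothetical coordinates and outcomes; the study itself estimated the range from a logistic model with a Gaussian random field, which this does not reproduce.

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """Binned empirical semivariogram gamma(h) = 0.5 * mean (z_i - z_j)^2
    over point pairs whose separation distance falls in each bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)          # each pair once
    dist = d[iu]
    sq = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (dist >= lo) & (dist < hi)
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)
```

With an exponential covariance model fitted to such a curve, the practical range is conventionally taken as roughly three times the fitted scale parameter.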
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While… of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
Reliability analysis of onboard laser ranging systems for control systems by movement of spacecraft
E. I. Starovoitov
2014-01-01
The purpose of this paper is to study and find ways to improve the reliability of onboard laser ranging systems (LRS) used to control spacecraft rendezvous and descent. An onboard LRS can be implemented with or without an optical-mechanical scanner. The paper analyses the key factors which influence the reliability of both types of LRS. The reliability of an LRS is largely determined by the reliability of the laser source and its radiation mode. Solid-state diode-pumped lasers are primarily used as the radiation source. Their reliability is affected by the radiation mode, which is defined by the requirements for measurement errors of range and speed. The basic assumption is that the resource of solid-state lasers is determined by the number of pulses of the pumping diodes. The paper investigates the influence of the radiation mode of a solid-state laser on the reliability function when measuring the closing velocity during rendezvous with a passive spacecraft using a differential method. With measurement errors of 10 m for range and 0.6 m/s for velocity, a reliability function of 0.99 has been achieved. Reducing the measurement error of velocity to 0.5 m/s either results in a reliability function <0.99, or requires reducing the initial error of range measurement to 3.5...5 m to reach a reliability function ≥ 0.995. For the optomechanical scanner-based LRS, the maximum pulse repetition frequency versus range has been obtained; this dependence has been used to define the reliability function. The paper investigates the influence of moving parts on the reliability of a scanning LRS with a sealed or unsealed optomechanical unit. It was found that omitting moving parts is justified when manufacturing a sealed optomechanical LRS unit is impossible; in this case, the reliability function increases from 0.99 to 0.9999. When sealing the optomechanical unit, the same increase in reliability is achieved through
Reliability estimation for single-unit ceramic crown restorations.
Lekesiz, H
2014-09-01
The objective of this study was to evaluate the potential of a survival prediction method for the assessment of ceramic dental restorations. For this purpose, fast-fracture and fatigue reliabilities for 2 bilayer (metal ceramic alloy core veneered with fluorapatite leucite glass-ceramic, d.Sign/d.Sign-67, by Ivoclar; glass-infiltrated alumina core veneered with feldspathic porcelain, VM7/In-Ceram Alumina, by Vita) and 3 monolithic (leucite-reinforced glass-ceramic, Empress, and ProCAD, by Ivoclar; lithium-disilicate glass-ceramic, Empress 2, by Ivoclar) single posterior crown restorations were predicted, and fatigue predictions were compared with the long-term clinical data presented in the literature. Both perfectly bonded and completely debonded cases were analyzed for evaluation of the influence of the adhesive/restoration bonding quality on estimations. Material constants and stress distributions required for predictions were calculated from biaxial tests and finite element analysis, respectively. Based on the predictions, In-Ceram Alumina presents the best fast-fracture resistance, and ProCAD presents a comparable resistance for perfect bonding; however, ProCAD shows a significant reduction of resistance in case of complete debonding. Nevertheless, it is still better than Empress and comparable with Empress 2. In-Ceram Alumina and d.Sign have the highest long-term reliability, with almost 100% survivability even after 10 years. When compared with clinical failure rates reported in the literature, predictions show a promising match with clinical data, and this indicates the soundness of the settings used in the proposed predictions. © International & American Associations for Dental Research.
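Fast-fracture survival predictions for brittle ceramics of this kind usually rest on two-parameter Weibull statistics. A minimal, generic sketch follows; the characteristic strength and modulus values used below are placeholders, not values from the study.

```python
import math

def weibull_survival(stress, char_strength, modulus):
    """Two-parameter Weibull survival probability commonly used for
    fast-fracture reliability of brittle ceramics:
    P_s = exp(-(stress / char_strength) ** modulus)."""
    return math.exp(-((stress / char_strength) ** modulus))
```

At the characteristic strength, survival probability is exp(-1) by definition; a higher Weibull modulus means a sharper drop in survival around that strength, i.e., a more predictable material.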
Reliability Estimations of Control Systems Effected by Several Interference Sources
DengBei-xing; JiangMing-hu; LiXing
2003-01-01
In order to establish the sufficient and necessary condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is very important to expand the set of interference sources with the above property. In this paper, models of two types of interference sources are presented: interference sources possessing real input vectors, and constant reliable interference sources. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that it is impossible to construct arbitrarily reliable systems from elements affected by the above interference sources.
Lunden, Jason B; Muffenbier, Mike; Giveans, M Russell; Cieminski, Cort J
2010-09-01
Clinical measurement, reliability. To compare intrarater and interrater reliability of shoulder internal rotation (IR) passive range of motion measurements utilizing a standard supine position and a sidelying position. Glenohumeral IR range of motion deficits are often noted in patients with shoulder pathology. Excellent intrarater reliability has been found when measuring this motion. However, interrater reliability has been reported as poor to fair. Some clinicians currently use a sidelying position for IR stretching with patients who have shoulder pathology. However, no objective data exist for IR passive range of motion measured in this sidelying position, either in terms of reliability or normative values. Seventy subjects (mean age, 36.8 years), with (n = 19) and without (n = 51) shoulder pathology, were included in this study. Shoulder IR passive range of motion of the dominant shoulder or involved shoulder was measured by 2 investigators in 2 positions: (1) a standard supine position, with the shoulder at 90 degrees of abduction, and (2) in sidelying on the tested side, with the shoulder flexed to 90 degrees. Intrarater reliability for supine measurements was good to excellent (ICC(3,1) = 0.70-0.93) and for sidelying measurements was excellent (ICC(3,1) = 0.94-0.98). Interrater reliability was fair to good for the supine measurement (ICC(2,2) = 0.74-0.81) and good to excellent for the sidelying measurement (ICC(2,2) = 0.88-0.96). The mean (range) value of the dominant shoulder sidelying IR passive range of motion was 40 degrees (11 degrees to 69 degrees) for healthy subjects and 25 degrees (-16 degrees to 49 degrees) for subjects with shoulder pathology. For subjects with shoulder pathology, measurements of shoulder IR made in the sidelying position had superior intrarater and interrater reliability compared to those in the standard supine position.
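The ICC forms cited above follow the Shrout and Fleiss convention; for example, ICC(3,1) is computed from two-way ANOVA mean squares. A generic sketch with hypothetical ratings (not the study's data):

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1) (two-way mixed, single rater, consistency) for an
    (n_subjects, k_raters) matrix, via Shrout & Fleiss mean squares."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Between-subjects mean square
    bms = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # Residual mean square of the two-way layout
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ems = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems)
```

Because ICC(3,1) measures consistency, a constant offset between raters (one rater always scoring 1 degree higher, say) does not reduce it, which suits studies comparing measurement positions rather than rater calibration.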
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the ⁵⁷Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s⁻¹ and 0.04-0.05 mm s⁻¹, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s⁻¹. Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r², or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
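The bootstrapping procedure described above can be sketched for a generic linear calibration: resample the reference pairs, refit the model, and take the spread of the refitted predictions as the uncertainty. The toy data below are hypothetical, not the Mössbauer calibration set of the paper.

```python
import random
import statistics

def bootstrap_predictions(xs, ys, x_new, n_boot=2000, seed=3):
    """Bootstrap a simple linear calibration y = a + b*x and return the
    mean and spread of predictions at x_new as an uncertainty estimate."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]     # resample pairs
        bx = [xs[i] for i in idx]
        by = [ys[i] for i in idx]
        mx = sum(bx) / n
        my = sum(by) / n
        sxx = sum((x - mx) ** 2 for x in bx)
        if sxx == 0:                                   # degenerate resample
            continue
        b = sum((x - mx) * (y - my) for x, y in zip(bx, by)) / sxx
        a = my - b * mx
        preds.append(a + b * x_new)
    return statistics.mean(preds), statistics.stdev(preds)
```

Because each bootstrap replicate refits the model on a different subset of reference points, the prediction spread directly exposes how sensitive the calibration is to the composition of the reference set, the paper's central point.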
Test-Retest Reliability of the Dual-Microphone Voice Range Profile.
Printz, Trine; Sorensen, Jesper Roed; Godballe, Christian; Grøntved, Ågot Møller
2017-05-16
The voice range profile (VRP) measures vocal intensity and fundamental frequency. Phonosurgical and logopedic treatment outcome studies using the VRP report voice improvements of 3-6 semitones (ST) in ST range and 4-7 decibels (dB) in sound pressure level range after treatment. These small improvements stress the importance of reliable measurements. The aim was to evaluate the test-retest reliability of the dual-microphone computerized VRP on participants with healthy voices. This is a prospective test-retest reliability study. Dual-microphone VRPs were repeated twice on healthy participants (n = 37) with an interval of 6-37 days. Voice frequency and intensity (minimum, maximum, and ranges) were assessed in combination with the area of the VRP. Correlations between VRP parameters were high (r > 0.60). However, in the retest, a statistically significant increase was found in voice frequency range (1.4 ST [95% confidence interval {CI}: 0.8-2.1 ST]) and in the area of the VRP (148 cells [95% CI: 87-210 cells]). This test-retest variation of the VRP is well below the differences seen after surgical or logopedic intervention, even when measuring in non-sound-treated rooms. There is a need for studies regarding inter-examiner reliability with a longer interval between test and retest before the assessment is fully reliable for clinical application. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.
Raykov, Tenko
2002-01-01
Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines widespread point estimation of composite reliability in behavioral scale…
Estimates of the global tidal range energy resource
Robins, Peter; Walkington, Ian
2017-04-01
Renewable energy generation through tidal lagoons and barrages is an attractive energy source due to tidal predictability and the potential for energy storage. Yet so far, the annual tidal range resource has only been estimated at relatively coarse spatial resolutions and without detailed investigation of the temporal variation from individual or aggregated sites. In this study, we estimate the theoretical tidal range resource of the northwest European shelf seas, using the 3D Regional Ocean Modelling System (ROMS) at roughly 1 km spatial resolution. Through tidal analysis of model output, we calculate the potential energy in both the rising and falling tides and, hence, show temporal variations in PE throughout the year. Based on a range of energy yield thresholds (rather than thresholds based on M2 range and water depth), we calculate the total annual theoretical resource from dual (flood and ebb) strategies. Using the FES global tidal model, which resolves tidal elevations at 1/16° resolution, the global resource was also estimated with the regions with the highest energy yield isolated. We discuss our estimates in relation to the yield that can actually be obtained mechanically, and in relation to the total energy flux of a region and the potential impacts of different lagoon scenarios on the local and far-field energy fluxes.
Fiducial Marker Detection and Pose Estimation From LIDAR Range Data
2010-03-01
… in the environment and produces 3D coordinates or range and bearing values. The raw data can be easily represented by point clouds, with each point … registration and pose estimation include cylinders, spheres, and orthogonal planes (Gao, 2007; Haas, 2005).
A particle swarm model for estimating reliability and scheduling system maintenance
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
Early Stage Software Reliability Estimation with Stochastic Reward Nets
ZHAO Jing; LIU Hong-wei; CUI Gang; YANG Xiao-zong
2005-01-01
This paper presents software reliability modeling issues at the early stage of software development for a fault-tolerant software management system. Based on Stochastic Reward Nets, an effective hierarchical-view model of a fault-tolerant software management system is put forward, and an approach based on system transient performance analysis is adopted. A quantitative approach for software reliability analysis is given. The results show its usefulness for the design and evaluation of early-stage software reliability modeling when failure data are not available.
Reliable Multicast MAC Protocol for IEEE 802.11 Wireless LANs with Extended Service Range
Choi, Woo-Yong
2011-11-01
In this paper, we propose an efficient reliable multicast MAC protocol by which the AP (Access Point) can reliably transmit its multicast data frames to recipients in the AP's one-hop or two-hop transmission range. The AP uses the STAs (Stations) that are directly associated with it as relays for data delivery to remote recipients that cannot be reached directly. Based on the connectivity information among the recipients, the reliable multicast MAC protocol optimizes the number of RAK (Request for ACK) frame transmissions in a reasonable computational time. Numerical examples show that our proposed MAC protocol significantly enhances the MAC performance compared with the BMMM (Batch Mode Multicast MAC) protocol extended to support recipients in the AP's one-hop or two-hop transmission range in IEEE 802.11 wireless LANs.
Improving Sample Estimate Reliability and Validity with Linked Ego Networks
Lu, Xin
2012-01-01
Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we have shown that the precision of estimates on the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics is clearly superior to traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...
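For context, a minimal sketch of the standard RDS-II (Volz-Heckathorn) estimator that ego-network methods aim to improve on: each respondent is weighted by the inverse of the reported personal network size (degree), since high-degree people are over-recruited by the referral chains. The sample data below are invented for illustration.

```python
def rds_ii_proportion(traits, degrees):
    """Estimated population proportion with trait == 1.

    traits  -- 0/1 trait indicator per respondent
    degrees -- reported network degree per respondent (all > 0)
    """
    weights = [1.0 / d for d in degrees]
    positive = sum(w for w, t in zip(weights, traits) if t == 1)
    return positive / sum(weights)

traits = [1, 1, 0, 0, 1, 0]
degrees = [10, 20, 2, 4, 5, 2]    # trait carriers happen to have high degree
print(rds_ii_proportion(traits, degrees))  # well below the raw 50% sample rate
```

Down-weighting the high-degree trait carriers pulls the estimate below the naive sample proportion, which is the bias correction the abstract's ego-network extension builds on.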
Spanning Trees and bootstrap reliability estimation in correlation based networks
Tumminello, M; Lillo, F; Micciché, S; Mantegna, R N
2006-01-01
We introduce a new technique to associate a spanning tree with the average linkage cluster analysis. We term this tree the Average Linkage Minimum Spanning Tree. We also introduce a technique to associate a value of reliability with the links of correlation-based graphs by using bootstrap replicas of the data. Both techniques are applied to the portfolio of the 300 most capitalized stocks traded on the New York Stock Exchange during the period 2001-2003. We show that the Average Linkage Minimum Spanning Tree recognizes economic sectors and sub-sectors as communities in the network slightly better than the Minimum Spanning Tree does. We also show that the average reliability of links in the Minimum Spanning Tree is slightly greater than the average reliability of links in the Average Linkage Minimum Spanning Tree.
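A hedged sketch of the bootstrap link-reliability idea: resample the observations with replacement, rebuild the correlation-based Minimum Spanning Tree on each replica, and report how often each original MST edge reappears. Prim's algorithm and the tiny synthetic data set are illustrative; the paper applies this to daily stock returns.

```python
import numpy as np

def mst_edges(corr):
    """Prim's algorithm on the metric distance d = sqrt(2 * (1 - corr))."""
    d = np.sqrt(np.maximum(0.0, 2.0 * (1.0 - corr)))
    n = d.shape[0]
    in_tree, edges = {0}, set()
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or d[i, j] < d[best]):
                    best = (i, j)
        edges.add(tuple(sorted(best)))
        in_tree.add(best[1])
    return edges

def bootstrap_edge_reliability(data, n_boot=200, seed=0):
    """data: observations x assets; returns {MST edge: bootstrap frequency}."""
    rng = np.random.default_rng(seed)
    base = mst_edges(np.corrcoef(data, rowvar=False))
    counts = dict.fromkeys(base, 0)
    for _ in range(n_boot):
        idx = rng.integers(0, data.shape[0], size=data.shape[0])
        replica = mst_edges(np.corrcoef(data[idx], rowvar=False))
        for e in base:
            counts[e] += e in replica
    return {e: c / n_boot for e, c in counts.items()}
```

An edge that survives in nearly all replicas (frequency near 1) is a reliable feature of the correlation structure; edges with low frequency are likely noise.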
Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.
Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang
2015-05-21
The major challenges for Ultra-wideband (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP can be determined by identifying the sample that is followed by a sharp entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by modeling the regressor between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches.
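An illustrative sketch of the entropy idea: noise-only segments of the received signal look random (high entropy), while the first-path region is far more structured, so the FP shows up as a sharp entropy decrease. The window size, histogram binning, threshold, and the crude "structured segment" model below are assumptions, not the paper's exact design.

```python
import numpy as np

def window_entropy(x, bins=8):
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def detect_first_path(signal, win=32, drop=1.0):
    """Index of the first window whose entropy falls by more than `drop`
    bits relative to the previous window, or None if no such drop."""
    ents = [window_entropy(signal[i:i + win])
            for i in range(0, len(signal) - win, win)]
    for k in range(1, len(ents)):
        if ents[k - 1] - ents[k] > drop:
            return k * win  # start of the low-entropy window
    return None

rng = np.random.default_rng(0)
received = np.concatenate([rng.normal(size=256),   # noise-only prefix
                           np.zeros(256)])         # crude structured region
print(detect_first_path(received))
```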
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an
Operational Procedures for Optimized Reliability and Component Life Estimator (ORACLE)
1975-12-01
Figure 1. Block diagram of the reliability prediction program routines (cross-hatched boxes), the required inputs and the… …in some significant way, describe and/or identify the particular piece of equipment associated with the parts or module. The maintenance of a…
High resolution, large dynamic range field map estimation
Dagher, Joseph; Reese, Timothy; Bilgin, Ali
2013-01-01
Purpose: We present a theory and a corresponding method to compute high resolution field maps over a large dynamic range. Theory and Methods: We derive a closed-form expression for the error in the field map value when computed from two echoes. We formulate an optimization problem to choose three echo times which result in a pair of maximally distinct error distributions. We use standard field mapping sequences at the prescribed echo times. We then design a corresponding estimation algorithm which takes advantage of the optimized echo times to disambiguate the field offset value. Results: We validate our method using high resolution images of a phantom at 7T. The resulting field maps demonstrate robust mapping over a large dynamic range and in low SNR regions. We also present high resolution offset maps in vivo using both GRE and MEGE sequences. Even though the proposed echo time spacings are larger than the well known phase aliasing cutoff, the resulting field maps exhibit a large dynamic range without the use of phase unwrapping or spatial regularization techniques. Conclusion: We demonstrate a novel 3-echo field map estimation method which overcomes the traditional noise-dynamic range trade-off. PMID:23401245
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
Engineer’s estimate reliability and statistical characteristics of bids
Fariborz M. Tehrani
2016-12-01
The objective of this report is to provide a methodology for examining bids and evaluating the performance of engineer’s estimates in capturing the true cost of projects. This study reviews the cost development for transportation projects in addition to two sources of uncertainty in a cost estimate: modeling errors and inherent variability. Sample projects are highway maintenance projects with a similar scope of work, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of the engineer’s estimate is best evaluated against a trimmed average of bids, excluding discordant bids.
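The recommended benchmark, a trimmed average of bids with discordant bids excluded, can be sketched as follows. The symmetric trimming fraction and the bid figures are assumptions for illustration, not values from the report.

```python
def trimmed_mean(values, trim_fraction=0.2):
    """Drop trim_fraction of the sorted values from each tail, then average."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

bids = [1.02e6, 1.05e6, 1.10e6, 1.12e6, 2.40e6]   # one discordant high bid
engineers_estimate = 1.00e6
benchmark = trimmed_mean(bids)
print(benchmark, abs(engineers_estimate - benchmark) / benchmark)
```

Trimming removes the one discordant bid from each tail, so the benchmark reflects the cluster of competitive bids rather than the outlier.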
Estimating the Reliability of Electronic Parts in High Radiation Fields
Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd
2008-01-01
Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design, reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.
J. Arora
2014-09-01
Dental ageing is important in medicolegal cases when teeth are the only material available to the investigating agencies for identification of the deceased. Attrition, the physiological wear of the occlusal surface of a tooth, can be used as a determinant parameter for this purpose. The present study was undertaken to examine the reliability of attrition as a sole parameter for age estimation among North Western adult Indians. 109 (43 males, 66 females) single-rooted freshly extracted teeth ranging in age from 18-75 years were studied. Teeth were fixed, cleaned and sectioned labiolingually up to a thickness of 1 mm. Sections were then mounted and attrition was graded from 0-3 according to Gustafson’s method. Scores were subjected to a regression equation to estimate the age of an individual. Results of the present study revealed that this parameter is reliable in individuals aged ≤ 60 years, with an error of ±10 years. However, periodontal disease severely affected the accuracy of age estimation from this parameter, as is evident from the results. Statistically, no significant difference was noted in the absolute mean error of age in different age groups. No significant difference was observed in the absolute mean error of age between the sexes.
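The regression step can be sketched as ordinary least squares of age on the 0-3 attrition grade. The paired scores below are fabricated for illustration; the study's own regression coefficients are not reproduced here.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

attrition = [0, 0, 1, 1, 2, 2, 3, 3]          # Gustafson grade per tooth
age = [19, 24, 30, 36, 44, 50, 57, 62]        # known ages (invented)
a, b = fit_line(attrition, age)
predicted = a + b * 2                          # estimated age at grade 2
print(round(a, 2), round(b, 2), round(predicted, 2))
```

In practice the residual spread around such a line is what produces the ±10-year error band the study reports.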
Reliability of range-of-motion measurement in the elbow and forearm.
Armstrong, A D; MacDermid, J C; Chinchalkar, S; Stevens, R S; King, G J
1998-01-01
The purpose of this study was to examine intratester, intertester, and interdevice reliability of range of motion measurements of the elbow and forearm. Elbow flexion and extension and forearm pronation and supination were measured on 38 subjects with elbow, forearm, or wrist disease by 5 testers. Standardized test methods and a randomized order of testing were used to test groups of patients with universal standard goniometers, a computerized goniometer, and a mechanical rotation measuring device. Intratester reliability was high for all 3 measuring devices. Meaningful changes in intratester range of motion measurements taken with a universal goniometer occur with 95% confidence if they are greater than 6 degrees for flexion, 7 degrees for extension, 8 degrees for pronation, and 8 degrees for supination. Intertester reliability was high for flexion and extension measurements with the computerized goniometer and moderate for flexion and extension measurements with the universal goniometer. Meaningful change in interobserver range of motion measurements was expected if the change was greater than 4 degrees for flexion and 6 degrees for extension with the computerized goniometer, compared with 10 degrees for each if the universal goniometer was used. Intertester reliability was high for pronation and supination with all 3 devices. Meaningful change in forearm rotation is characterized by a minimum of 10 degrees for pronation and 11 degrees for supination with the universal goniometer. Reliable measurements of elbow and forearm movement are obtainable regardless of the level of experience when standardized methods are used. Measurement error was least for repeated measurements taken by the same tester with the same instrument and greatest when different instruments were used.
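The "meaningful change" thresholds quoted above are minimal detectable change (MDC) values. A common way to derive them (an assumption here, not necessarily the authors' exact computation) is from the standard error of measurement: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM.

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence from a reliability study."""
    sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem     # sqrt(2): difference of two scores

# e.g. a between-subject SD of 8 degrees and an intratester ICC of 0.93
# (both figures illustrative, not taken from the paper)
print(round(mdc95(8.0, 0.93), 1))
```

A change smaller than the MDC cannot be distinguished from measurement noise, which is why the paper reports per-motion degree thresholds.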
Steven E. Stemler
2004-03-01
This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: (1) consensus estimates, (2) consistency estimates, and (3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results.
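A minimal sketch of the consensus-estimates category: percent agreement and Cohen's kappa for two judges assigning categorical ratings. Consistency estimates (e.g. a correlation) would instead reward judges who rank cases the same way even when their labels differ. The ratings below are invented.

```python
from collections import Counter

def percent_agreement(r1, r2):
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for the agreement expected by chance."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1.0 - pe)

judge_a = [1, 1, 2, 2, 3, 3, 1, 2]
judge_b = [1, 1, 2, 3, 3, 3, 1, 1]
print(percent_agreement(judge_a, judge_b), round(cohens_kappa(judge_a, judge_b), 3))
```

Kappa is lower than raw agreement here because part of the observed agreement is what chance alone would produce, which is exactly the distinction the article draws between naive and chance-corrected consensus estimates.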
Novel Software Reliability Estimation Model for Altering Paradigms of Software Engineering
Ritika Wason
2012-05-01
A number of different software engineering paradigms, such as Component-Based Software Engineering (CBSE), Autonomic Computing, Service-Oriented Computing (SOC), Fault-Tolerant Computing, and many others, are being researched currently. These paradigms denote a shift away from the currently mainstream object-oriented paradigm and are altering the way we view, design, develop and exercise software. Though these paradigms indicate a major shift in the way we design and code software, we still rely on traditional reliability models for estimating the reliability of any of the above systems. This paper analyzes the underlying characteristics of these paradigms and proposes a novel finite-automata-based reliability model as a suitable model for estimating the reliability of modern, complex, distributed and critical software applications. We further outline the basic framework for an intelligent, automata-based reliability model that can be used for accurate estimation of the system reliability of software systems at any point in the software life cycle.
Estimation on the Reliability of Farm Vehicle Based on Artificial Neural Network
WANG Jinwu
2008-01-01
As a peculiar product in China today, farm vehicles play an important role in the economic construction and development of the countryside, but their working reliability remains low. In this paper, truncated tracking was used to address the low reliability of farm vehicles. Relevant reliability data were obtained by tracking a certain model of vehicle and conducting reliability experiments. Data analysis revealed that the weakest part of the vehicle system was the engine assembly. The theory of Artificial Neural Networks was employed to estimate a parameter of the reliability model based on a self-adaptive linear neural network, and the reliability function derived from the estimation can provide important theoretical references for reliability reassignment, manufacture and management of farm transport vehicles.
Software Reliability Estimation of the Reactor Protection System for Lungmen Nuclear Power Station
Wang, Jung Ya; Chou, Hwai Pwu [Tsing Hua National University, Hsinchu (China)
2014-08-15
In this paper, a software reliability estimation method is applied to estimate the software reliability of the reactor protection system (RPS) for the Lungmen ABWR. In order to estimate the software failure probability, a flow network model of the software is constructed. The total number of executions and the execution time of each software statement are obtained, and the reliability of each statement is estimated. During testing, a one-time test scenario follows a Bernoulli distribution and multiple-test scenarios follow a binomial distribution. The software reliability of the digital trip module (DTM) and the trip logic unit (TLU) of the RPS of the Lungmen ABWR can then be estimated. The results show that the RPS software has good reliability.
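The statement-level composition implied above can be sketched as follows: if statement i succeeds with probability r_i and is executed n_i times on a demand, the success probability of the run is the product of r_i ** n_i, assuming independent statement failures (an idealization, and the figures below are illustrative only).

```python
def software_reliability(statement_stats):
    """statement_stats: list of (r_i, n_i) pairs per software statement."""
    rel = 1.0
    for r, n in statement_stats:
        rel *= r ** n       # each of the n executions must succeed
    return rel

# three statements with different per-execution reliabilities and counts
stats = [(0.9999, 120), (0.99995, 40), (1.0, 500)]
print(software_reliability(stats))
```

Heavily executed statements dominate the product, which is why the method needs the execution counts and not just per-statement reliabilities.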
Estimating the reliability of eyewitness identifications from police lineups.
Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William
2016-01-12
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.
Empirical Study of Travel Time Estimation and Reliability
Ruimin Li; Huajun Chai; Jin Tang
2013-01-01
This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...
Reliability of panoramic radiography in chronological age estimation
Ramanpal Singh Makkad
2013-01-01
Introduction: There has been a strong relationship between the growth rate of bone and teeth, which can be utilized for the purpose of age identification of an individual. Aims and Objective: The present study was designed to determine the relationship between the dental age, the age from dental panoramic radiography, skeletal age, and chronological age. Materials and Methods: The study included 270 individuals, ranging between 17 years and 25 years of age, from the out-patient department of New Horizon Dental College and Hospital, Sakri, Bilaspur, Chhattisgarh, India, for third molar surgery. Panoramic and hand-wrist radiographs were taken, and the films were digitally processed for visualization of the wisdom teeth. The age estimations were repeated at an interval of 4 weeks by a radiologist. The extracted wisdom teeth were placed in 10% formalin and were examined by one dental surgeon to estimate the age on the basis of root formation. Student's t-test was adopted for statistical analysis and the probability (P) value was calculated. Conclusion: Estimating the age of an individual was accurate by examining the extracted third molar. Age estimation through panoramic radiography was highly accurate in the upper right quadrant (mean = 0.72 and P = 0.077).
Reliability estimation for single dichotomous items based on Mokken's IRT model
Meijer, R R; Sijtsma, K; Molenaar, Ivo W
1995-01-01
Item reliability is of special interest for Mokken's nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of
Coefficient Alpha as an Estimate of Test Reliability under Violation of Two Assumptions.
Zimmerman, Donald W.; And Others
1993-01-01
Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
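A sketch of the kind of simulation described: generate subtest scores from one common factor with unequal loadings (violating essential tau-equivalence) and compare coefficient alpha with the true reliability of the composite. All parameter values are illustrative.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a persons x subtests score matrix."""
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(42)
n = 20000
loadings = np.array([0.3, 0.6, 1.2, 1.5])    # unequal true-score units
err_sd = 1.0
factor = rng.normal(size=(n, 1))
scores = factor * loadings + err_sd * rng.normal(size=(n, loadings.size))

# True reliability of the sum score: true variance / total variance
true_var = loadings.sum() ** 2
rel_true = true_var / (true_var + loadings.size * err_sd ** 2)
print(round(cronbach_alpha(scores), 3), round(rel_true, 3))  # alpha is lower
```

With equal loadings the two values would coincide; the gap here reproduces the underestimation under violated tau-equivalence that the abstract reports.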
Estimation of Internal Consistency Reliability When Test Parts Vary in Effective Length.
Feldt, Leonard S.; Charter, Richard A.
2003-01-01
Evaluating a test's reliability often requires dividing it into 3 or more unequal parts, which causes violation of the tau equivalence assumption of Cronbach's alpha. This article presents a criterion for abandoning alpha and an approach for computing a more appropriate estimate of reliability, the Gilmer-Feldt coefficient. (Author)
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
System dependability has become important for critical applications in recent years as technology is moving towards smaller dimensions. Achieving high dependability can be supported by reliability estimations during the operational life. In addition this requires a workflow for regularly monitoring
Fuel economy and range estimates for fuel cell powered automobiles
Steinbugler, M.; Ogden, J. [Princeton Univ., NJ (United States)
1996-12-31
While a number of automotive fuel cell applications have been demonstrated, including a golf cart, buses, and a van, these systems and others that have been proposed have utilized differing configurations ranging from direct hydrogen fuel cell-only power plants to fuel cell/battery hybrids operating on reformed methanol. To date there is no clear consensus on which configuration, from among the possible combinations of fuel cell, peaking device, and fuel type, is the most likely to be successfully commercialized. System simplicity favors direct hydrogen fuel cell vehicles, but infrastructure is lacking. Infrastructure favors a system using a liquid fuel with a fuel processor, but system integration and performance issues remain. A number of studies have analyzed particular configurations on either a system or vehicle scale. The objective of this work is to estimate, within a consistent framework, fuel economies and ranges for a variety of configurations using flexible models with the goal of identifying the most promising configurations and the most important areas for further research and development.
Santos, C; Pauchard, N; Guilloteau, A
2017-10-01
This study aimed to improve clinical examination techniques by determining the reliability of different methods to evaluate forearm movements. Two methods using the iPhone™ 5 and its gyroscope application (alone [I5] or attached to a selfie stick [ISS]) were compared with two conventional measurement devices (a plastic goniometer with a hand-held pencil [HHP] and a bubble goniometer [BG]) to evaluate the active range of movement (AROM) of the wrist during pronation and supination. Two independent groups of subjects took part in this prospective single-center diagnostic study: 20 healthy subjects and 20 patients. The four evaluation methods had high intra-observer consistency after three measurements (intra-class correlation coefficient [ICC] [3, 1] of 0.916 for the HHP; 0.944 for ISS; 0.925 for BG; 0.933 for I5) and excellent inter-observer reliability (ICC [2, k] of 0.926 for HHP; 0.934 for ISS; 0.899 for BG; 0.894 for I5), with an agreement of plus or minus 2°. When these devices are used with rigorous methodology, they are reliable for the goniometric evaluation of AROM of wrist pronation and supination.
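A hedged sketch of how an ICC(3,1) of the kind reported here can be computed from a two-way (subjects x raters) table via the mean squares of a repeated-measures ANOVA: ICC(3,1) = (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error). The sample angle readings are invented.

```python
import numpy as np

def icc_3_1(x):
    """ICC(3,1) for a subjects x raters matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ss_err = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# two raters measuring wrist pronation on five subjects (degrees, invented)
pronation = np.array([[78.0, 80.0], [65.0, 66.0], [82.0, 85.0],
                      [70.0, 69.0], [88.0, 90.0]])
print(round(icc_3_1(pronation), 3))
```

Because ICC(3,1) removes the systematic rater effect, a constant offset between the two raters would not lower the coefficient; only inconsistent disagreement does.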
Valls, Víctor; Cano, Cristina; Bellalta, Boris; Oliver, Miquel
2012-01-01
The paper presents two mechanisms for designing an on-demand, reliable and efficient collection protocol for Wireless Sensor Networks. The former is Bidirectional Link Quality Estimation, which allows nodes to easily and quickly compute the quality of a link between a pair of nodes. The latter, Hierarchical Range Sectoring, organizes sensors into different sectors based on their location within the network. Based on this organization, nodes from each sector are coordinated to transmit in specific periods of time to reduce the hidden terminal problem. To evaluate these two mechanisms, a protocol called HBCP (Hierarchical-Based Collection Protocol), which implements both mechanisms, has been implemented in TinyOS 2.1 and evaluated in a testbed using TelosB motes. The results show that the HBCP protocol is able to achieve very high reliability, especially in large networks and in scenarios with bottlenecks.
Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation
Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok;
2012-01-01
Wind power is ideally a renewable energy with no fuel cost, but it carries a risk of reducing the reliability of the whole system because of the uncertainty of its output. If the reserve of the system is increased, the reliability of the system may be improved, but the cost would be increased. Therefore, the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve when wind power is connected to the power system. As a case study, when wind power is connected to the power system of Korea, the effects...
An adaptive neuro fuzzy model for estimating the reliability of component-based software systems
Kirti Tyagi
2014-01-01
Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software.
Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis
Zhou, Z.-G.; Tang, P.; Zhou, M.
2016-06-01
Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, the detection results are only labelled with "Change/No change" by most of the present methods, while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite image time series with an estimation error of less than 5% and an overall accuracy of up to 90%.
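A simplified sketch of the third step: fit a trend-plus-season model to the historical series, forecast the new observation, and grade a detected disturbance by the highest confidence level whose prediction band it escapes. The harmonic regression and the z thresholds are assumptions standing in for the BFAST-based components; the series is synthetic.

```python
import numpy as np

def fit_harmonic_trend(t, y, period=12.0):
    """Least-squares fit of intercept + trend + one seasonal harmonic."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_sd = np.std(y - X @ beta, ddof=X.shape[1])
    return beta, resid_sd

def disturbance_reliability(t_new, y_new, beta, resid_sd, period=12.0):
    """Largest tested confidence level at which y_new lies outside the band."""
    x = np.array([1.0, t_new, np.sin(2 * np.pi * t_new / period),
                  np.cos(2 * np.pi * t_new / period)])
    z = abs(y_new - x @ beta) / resid_sd
    for cl, z_crit in [(0.99, 2.576), (0.96, 1.960), (0.90, 1.645)] if False else [(0.99, 2.576), (0.95, 1.960), (0.90, 1.645)]:
        if z > z_crit:
            return cl
    return 0.0  # inside the 90% band: not flagged as a disturbance

rng = np.random.default_rng(3)
t = np.arange(60, dtype=float)                       # five years, monthly
y = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.05 * rng.normal(size=60)
beta, sd = fit_harmonic_trend(t, y)
print(disturbance_reliability(60.0, 0.5, beta, sd))  # abrupt drop at t = 60
```

An observation consistent with the forecast gets reliability 0, while the simulated drop is flagged at the highest tested confidence level, mirroring the CI/CL grading described in the abstract.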
Estimating long-range dependence in time series: an evaluation of estimators implemented in R.
Stroe-Kunold, Esther; Stadnytska, Tetiana; Werner, Joachim; Braun, Simone
2009-08-01
Recent studies have shown that many physiological and behavioral processes can be characterized by long-range correlations. The Hurst exponent H of fractal analysis and the fractional-differencing parameter d of the ARFIMA methodology are useful for capturing serial correlations. In this study, we report on different estimators of H and d implemented in R, a popular and freely available software package. By means of Monte Carlo simulations, we analyzed the performance of (1) the Geweke-Porter-Hudak estimator, (2) the approximate maximum likelihood algorithm, (3) the smoothed periodogram approach, (4) the Whittle estimator, (5) rescaled range analysis, (6) a modified periodogram, (7) Higuchi's method, and (8) detrended fluctuation analysis. The findings, confined to ARFIMA(0, d, 0) models and fractional Gaussian noise, identify the best estimators for persistent and antipersistent series. Two examples combining these results with the step-by-step procedure proposed by Delignières et al. (2006) demonstrate how this evaluation can be used as a guideline in a typical research situation.
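Of the estimators compared, detrended fluctuation analysis is the most compact to express. The sketch below is a minimal Python version, an assumption-laden illustration rather than the R implementations evaluated in the study: it integrates the series, removes a linear trend within windows of several sizes, and reads the scaling exponent alpha off a log-log fit. Alpha near 0.5 indicates an uncorrelated series; alpha above 0.5 indicates long-range persistence.

```python
import numpy as np

def dfa(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis; returns the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))          # integrated profile of the series
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) vs log s is the DFA exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

For fractional Gaussian noise, alpha relates to the Hurst exponent as alpha ≈ H, and for ARFIMA(0, d, 0) as alpha ≈ d + 0.5.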
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A
2017-03-01
To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights were sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend that at least five weekday nights of sleep diary entries be collected when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks.
Reliability and Analysis of Changes in Bite Marks at Different Time Intervals and Temperature Ranges
Parul Khare Sinha
2017-04-01
Objectives: The purpose of this study was to assess time-dependent changes in the morphology of bitemarks and to investigate the utility of matching bitemarks on both perishable and non-perishable objects with the passage of time at different temperatures. Subjects and Methods: The study was conducted at Maharana Pratap College of Dentistry and Research Centre, Gwalior, India. Twenty volunteers were asked to bite 6 items each, comprising both perishable and non-perishable items. The perishable items were apple, banana and burfi (a popular milk-based sweet confection), while the non-perishable items were wax, clay, and rubber. Photographs were taken with a digital camera at 0 hours and 24 hours after biting these objects, at temperature ranges of 24 ºC to 28 ºC and 36 ºC to 40 ºC, respectively. Life-size photographs of these bitten objects were printed on transparent overlays and compared to hand-drawn transparencies prepared from the suspect dentition using an X-ray viewer. The comparison of all 960 transparencies was done by two researchers independently. Results: All objects gave a positive identification of the biter on matching just after biting. After 24 hours, all items except banana and apple still showed positive matching. Conclusion: The proposed method is simple, reliable and less technique-sensitive, and it narrows down the subjectivity of interpretation. It highlights that decomposition changes occur in perishable food items, more so in apples and bananas, making such bitemarks less reliable evidence.
A Data-Driven Reliability Estimation Approach for Phased-Mission Systems
Hua-Feng He
2014-01-01
We address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS using the condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from the existing methods, which consider only the static scenario without using real-time information and aim to estimate the reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and the real-time information of an individual PMS, we adopt a stochastic filtering model for the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.
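The lifetime side of the approach, a Brownian-motion degradation model with a failure threshold, can be illustrated by simulation. The sketch below is a plain Monte Carlo stand-in with fixed drift and diffusion, not the paper's adaptive, Bayesian-updated model: it estimates mission reliability as the fraction of degradation paths that stay below the threshold up to the mission time.

```python
import numpy as np

def mission_reliability_mc(mu, sigma, threshold, t_mission,
                           dt=0.01, n_paths=20000, seed=0):
    """Monte Carlo estimate of P(degradation stays below threshold up to t_mission)
    for a Brownian-motion degradation path with drift mu and diffusion sigma."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_mission / dt)
    x = np.zeros(n_paths)                    # degradation level of each path
    alive = np.ones(n_paths, dtype=bool)     # paths that have not yet crossed
    for _ in range(n_steps):
        x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        alive &= x < threshold               # a crossing is an absorbing failure
    return alive.mean()
```

In the paper's setting the mission time itself is a random variable updated by filtering; here it is treated as fixed purely for illustration.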
Estimating Between-Person and Within-Person Subscore Reliability with Profile Analysis.
Bulut, Okan; Davison, Mark L; Rodriguez, Michael C
2017-01-01
Subscores are of increasing interest in educational and psychological testing due to their diagnostic function for evaluating examinees' strengths and weaknesses within particular domains of knowledge. Previous studies about the utility of subscores have mostly focused on the overall reliability of individual subscores and ignored the fact that subscores should be distinct and have added value over the total score. This study introduces a profile reliability approach that partitions the overall subscore reliability into within-person and between-person subscore reliability. The estimation of between-person reliability and within-person reliability coefficients is demonstrated using subscores from number-correct scoring, unidimensional and multidimensional item response theory scoring, and augmented scoring approaches via a simulation study and a real data study. The effects of various testing conditions, such as subtest length, correlations among subscores, and the number of subtests, are examined. Results indicate that there is a substantial trade-off between within-person and between-person reliability of subscores. Profile reliability coefficients can be useful in determining the extent to which subscores provide distinct and reliable information under various testing conditions.
Influence Factors on the Value of Reliability Estimators in Marketing Research
2011-01-01
This paper is a literature review, with a conclusion that leaves open many doors for future research. The first part reviews a series of qualitative and quantitative research characteristics. The second part briefly explains the reliability and validity of instruments used in qualitative and quantitative marketing research. The third part reviews a series of articles on estimators of reliability, their power, and their strengths and weaknesses. The conclusions of the...
Reliability estimation for multiunit nuclear and fossil-fired industrial energy systems
Sullivan, W. G.; Wilson, J. V.; Klepper, O. H.
1977-06-29
As petroleum-based fuels grow increasingly scarce and costly, nuclear energy may become an important alternative source of industrial energy. Initial applications would most likely include a mix of fossil-fired and nuclear sources of process energy. A means for determining the overall reliability of these mixed systems is a fundamental aspect of demonstrating their feasibility to potential industrial users. Reliability data from nuclear and fossil-fired plants are presented, and several methods of applying these data for calculating the reliability of reasonably complex industrial energy supply systems are given. Reliability estimates made under a number of simplifying assumptions indicate that multiple nuclear units or a combination of nuclear and fossil-fired plants could provide adequate reliability to meet industrial requirements for continuity of service.
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.
ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
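The reliability-weighted combination tested here is the standard inverse-variance (maximum-likelihood) cue fusion rule for independent Gaussian noise, sketched below; the variable names are illustrative.

```python
def fuse(x_vis, var_vis, x_vest, var_vest):
    """Reliability-weighted cue combination: each cue is weighted in proportion
    to its reliability (inverse variance), the statistically optimal linear fusion
    for independent Gaussian noise."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_vest
    var_hat = 1 / (1 / var_vis + 1 / var_vest)   # fused variance never exceeds either cue's
    return x_hat, var_hat
```

Lowering visual coherence in the experiment raises `var_vis`, which shifts weight toward the vestibular cue, which is exactly the trial-to-trial reweighting the study reports.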
Implementation and Analysis of Probabilistic Methods for Gate-Level Circuit Reliability Estimation
WANG Zhen; JIANG Jianhui; YANG Guang
2007-01-01
The development of VLSI technology has dramatically improved the performance of integrated circuits. However, it brings new challenges for reliability: integrated circuits become more susceptible to soft errors. Therefore, it is imperative to study the reliability of circuits under soft errors. This paper implements three probabilistic methods (two-pass, error propagation probability, and probabilistic transfer matrix) for estimating gate-level circuit reliability on a PC. The functions and performance of these methods are compared in experiments using ISCAS85 and 74-series circuits.
An Allocation Scheme for Estimating the Reliability of a Parallel-Series System
Zohra Benkamra
2012-01-01
We give a hybrid two-stage design which can be used to estimate the reliability of a parallel-series system and, by duality, a series-parallel system. When the components' reliabilities are unknown, one can estimate them by sample means of Bernoulli observations. Let T be the total number of observations allowed for the system. When T is fixed, we show that the variance of the system reliability estimate can be lowered by allocating the sample size T at the component level. This leads to a discrete optimization problem which can be solved sequentially, assuming T is large enough. First-order asymptotic optimality is proved systematically and validated by Monte Carlo simulation.
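The plug-in estimator underlying the design can be written down directly. Assuming each parallel block fails only when all of its components fail, and the blocks are connected in series, the system reliability estimate replaces each component reliability by its Bernoulli sample mean. The sketch below illustrates that estimator only, not the allocation scheme itself.

```python
import numpy as np

def estimate_parallel_series(samples):
    """samples[j][i]: array of 0/1 Bernoulli trials for component i of parallel block j.
    Plug-in estimate of R = prod_j (1 - prod_i (1 - p_ij)) with p_ij replaced
    by the sample mean of component (j, i)."""
    r = 1.0
    for block in samples:
        q = 1.0                       # estimated probability the whole block fails
        for comp in block:
            q *= 1.0 - np.mean(comp)  # component failure probability estimate
        r *= 1.0 - q                  # series: every block must survive
    return r
```

The allocation question studied in the paper is how to split the budget T across components so that the variance of this estimate is minimized.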
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.
Kubala, S. Z.; Borchardt, M. T.; Den Hartog, D. J.; Holly, D. J.; Jacobson, C. M.; Morton, L. A.; Young, W. C.
2016-11-01
The Thomson scattering diagnostic on MST records both equilibrium and fluctuating electron temperature with a range capability of 10 eV-5 keV. Standard operation with two modified commercial Nd:YAG lasers allows measurements at rates of 1 kHz-25 kHz. Several subsystems of the diagnostic are being improved. The power supplies for the avalanche photodiode detectors (APDs) that record the scattered light are being replaced to improve usability, reliability, and maintainability. Each of the 144 APDs will have an individual rack mounted switching supply, with bias voltage adjustable to match the APD. Long-wavelength filters (1140 nm center, 80 nm bandwidth) have been added to the polychromators to improve capability to resolve non-Maxwellian distributions and to enable directed electron flow measurements. A supercontinuum (SC) pulsed white light source has replaced the tungsten halogen lamp previously used for spectral calibration of the polychromators. The SC source combines substantial brightness produced in nanosecond pulses with a spectrum that covers the entire range of the polychromators.
Reliability estimation for 18Ni steel under low cycle fatigue using probabilistic technique
Lee, Ouk Sub; Choi, Hye Bin; Kim, Dong Hyeok; Kim, Hong Min [Inha Univ., Incheon (Korea, Republic of)
2008-07-01
In this study, the fatigue life of 18Ni maraging steel under both low- and high-cycle conditions is estimated by using FORM (First Order Reliability Method). Fatigue models based on the strain approach, namely the Coffin-Manson fatigue theory and the Morrow mean stress method, are utilized, and a limit state function including these two models was established. A case study for a material with given material properties was carried out to show the application of the proposed reliability estimation process. The effect of the mean stress of the varying fatigue loading on the failure probability has also been investigated.
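The two strain-life models named above combine into a single equation that can be inverted numerically for life. The sketch below uses the Basquin plus Coffin-Manson form with Morrow's mean-stress correction; the material constants in the test are hypothetical placeholders, not measured 18Ni properties.

```python
def strain_amplitude(n2, E, sf, b, ef, c, sigma_m=0.0):
    """Strain-life relation with Morrow mean-stress correction:
    eps_a = (sf - sigma_m)/E * (2N)^b + ef * (2N)^c, with b and c negative,
    where n2 = 2N is the number of reversals."""
    return (sf - sigma_m) / E * n2**b + ef * n2**c

def fatigue_life(eps_a, E, sf, b, ef, c, sigma_m=0.0):
    """Solve for cycles N by bisection in log space
    (strain_amplitude is monotonically decreasing in 2N)."""
    lo, hi = 1.0, 1e12
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if strain_amplitude(mid, E, sf, b, ef, c, sigma_m) > eps_a:
            lo = mid
        else:
            hi = mid
    return lo / 2.0   # N = (2N) / 2
```

In the FORM setting, the limit state function would be g = N(material randomness) - N_required, with the material constants treated as random variables.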
Estimation of Reliability and Cost Relationship for Architecture-based Software
Hui Guan; Wei-Ru Chen; Ning Huang; Hong-Ji Yang
2010-01-01
In this paper, we propose a new method to estimate the relationship between software reliability and software development cost taking into account the complexity for developing the software system and the size of software intended to develop during the implementation phase of the software development life cycle. On the basis of estimated relationship, a set of empirical data has been used to validate the correctness of the proposed model by comparing the result with the other existing models. The outcome of this work shows that the method proposed here is a relatively straightforward one in formulating the relationship between reliability and cost during implementation phase.
Highly reliable wind-rolling triboelectric nanogenerator operating in a wide wind speed range
Yong, Hyungseok; Chung, Jihoon; Choi, Dukhyun; Jung, Daewoong; Cho, Minhaeng; Lee, Sangmin
2016-09-01
Triboelectric nanogenerators are promising energy harvesting devices that generate electricity from the triboelectric effect and electrostatic induction. This study demonstrates the harvesting of wind energy by a wind-rolling triboelectric nanogenerator (WR-TENG). The WR-TENG generates electricity from wind as a lightweight dielectric sphere rotates along the vortex whistle substrate. Increasing the kinetic energy of the dielectric converted from the wind energy is a key factor in fabricating an efficient WR-TENG. Computational fluid dynamics (CFD) analysis is introduced to estimate the precise movements of wind flow and to create a vortex flow, adjusting the parameters of the vortex whistle shape to increase the kinetic energy conversion rate. The WR-TENG can be utilized both as a self-powered wind velocity sensor and as a wind energy harvester. A single unit of WR-TENG produces an open-circuit voltage of 11.2 V and a closed-circuit current of 1.86 μA. Additionally, findings reveal that the electrical power is enhanced through multiple electrode patterns in a single device and by increasing the number of dielectric spheres inside the WR-TENG. The wind-rolling TENG is a novel approach toward a sustainable wind-driven TENG that responds sensitively and reliably to wind flow to harvest otherwise wasted wind energy.
Estimated Value of Service Reliability for Electric Utility Customers in the United States
Sullivan, M.J.; Mercurio, Matthew; Schellenberg, Josh
2009-06-01
Information on the value of reliable electricity service can be used to assess the economic efficiency of investments in generation, transmission and distribution systems, to strategically target investments to customer segments that receive the most benefit from system improvements, and to numerically quantify the risk associated with different operating, planning and investment strategies. This paper summarizes research designed to provide estimates of the value of service reliability for electricity customers in the US. These estimates were obtained by analyzing the results from 28 customer value of service reliability studies conducted by 10 major US electric utilities over the 16 year period from 1989 to 2005. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods it was possible to integrate their results into a single meta-database describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the US for industrial, commercial, and residential customers. Estimated interruption costs for different types of customers and of different duration are provided. Finally, additional research and development designed to expand the usefulness of this powerful database and analysis are suggested.
Influence of statistical time range on estimating seismicity parameters
Anonymous
2001-01-01
The influence of non-uniqueness in selecting statistical time ranges on the seismicity parameters b value and annual mean occurrence rate n4 is analyzed and studied in detail. The results show that the influence of the statistical time range on the b value is generally smaller than on the annual mean rate. Owing to the exponential functional relation between the annual mean rate and the b value, a variation of the b value caused by varying the statistical time range decreases or increases the annual mean rates of each magnitude interval according to a power law. These results exert a combined effect on seismic safety evaluation results in various regions of the country.
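The two parameters in question come from the Gutenberg-Richter relation log10 N(M ≥ m) = a - b·m. A minimal sketch (Aki's maximum-likelihood b-value and a simple count-based annual rate; the catalog in the test is synthetic) makes the exponential coupling between the b value and the magnitude-interval rates concrete: a unit increase in magnitude divides the rate by 10^b.

```python
import math

def b_value(mags, m_min):
    """Aki maximum-likelihood b-value for magnitudes >= m_min:
    b = log10(e) / (mean(M) - m_min)."""
    m = [x for x in mags if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)

def annual_rate(mags, m_thresh, years):
    """Mean annual occurrence rate of events with magnitude >= m_thresh."""
    return sum(1 for x in mags if x >= m_thresh) / years
```

Because the rate of each magnitude interval scales as 10^(-b·m), any shift in the estimated b value propagates multiplicatively into the interval rates, which is the power-law coupling the abstract describes.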
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Chang, Hong (Inventor); Morris, Daniel Dale (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
2014-01-01
The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed, and its performance is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is the fuzzy confidence interval whose coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
Donald D. Anderson
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
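For readers wanting the reliability statistic itself, a compact version of the Shrout-Fleiss two-way random, absolute-agreement ICC(2,1) is sketched below; this is the generic textbook formula, offered as an illustration rather than the study's analysis code.

```python
import numpy as np

def icc2_1(ratings):
    """Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement,
    single measurement) for an n_targets x k_raters matrix of scores."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_m = y.mean(axis=1)                                  # target means
    col_m = y.mean(axis=0)                                  # rater means
    msr = k * np.sum((row_m - grand) ** 2) / (n - 1)        # between targets
    msc = n * np.sum((col_m - grand) ** 2) / (k - 1)        # between raters
    sse = np.sum((y - row_m[:, None] - col_m[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because the rater variance term appears in the denominator, a systematic offset between raters lowers ICC(2,1) even when rankings agree perfectly.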
Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens;
We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8).
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity
Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.
2004-01-01
This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…
Chaimowicz, F. (Flávio); A. Burdorf (Alex)
2015-01-01
Background: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia
Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…
Boermans, M.A.; Kattenberg, M.A.C.
2011-01-01
We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates to what extent items measure the same underlying content when the items are combined into a scale or used for a latent variable. Stata allows for testing
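Outside Stata, the same coefficient is a short computation over an items matrix. The Python sketch below computes Cronbach's alpha from the item variances and the total-score variance (a generic illustration, not the Stata routine the abstract describes).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_respondents x k_items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

For k parallel items with inter-item correlation r, alpha follows the Spearman-Brown form k·r / (1 + (k - 1)·r), so adding items raises alpha even when r is fixed.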
Procedures for reliable estimation of viral fitness from time-series data
Bonhoeffer, S.; Barbour, A.D.; Boer, R.J. de
2002-01-01
In order to develop a better understanding of the evolutionary dynamics of HIV drug resistance, it is necessary to quantify accurately the in vivo fitness costs of resistance mutations. However, the reliable estimation of such fitness costs is riddled with both theoretical and experimental difficulties
Reliable dual tensor model estimation in single and crossing fibers based on Jeffreys prior
J. Yang (Jianfei); D.H.J. Poot; M.W.A. Caan (Matthan); Su, T. (Tanja); C.B. Majoie (Charles); L.J. van Vliet (Lucas); F. Vos (Frans)
2016-01-01
Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD).
Range Estimation for Indoor Positioning via Drifting Clocks
Bagdonas, Kazimieras; Schiøler, Henrik; Borre, Kai
2009-01-01
This paper presents results from the “Indoor Positioning” project conducted at Danish GPS Center (DGC), Aalborg University. We focus on creating theoretical background and experimental verification for a software based indoor positioning solution. We present a novel theory to improve the ranging...
Monica C Junkes
The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry to the Brazilian Portuguese language, perform its cross-cultural adaptation, and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983, and Kappa coefficients ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
Cuypers, Koen; Thijs, Herbert; Meesen, Raf L J
2014-01-01
The goal of this study was to optimize the transcranial magnetic stimulation (TMS) protocol for acquiring a reliable estimate of corticospinal excitability (CSE) using single-pulse TMS. Moreover, the minimal number of stimuli required to obtain a reliable estimate of CSE was investigated. In addition, the effect of two frequently used stimulation intensities [110% relative to the resting motor threshold (rMT) and 120% rMT] and gender was evaluated. Thirty-six healthy young subjects (18 males and 18 females) participated in a double-blind crossover procedure. They received 2 blocks of 40 consecutive TMS stimuli at either 110% rMT or 120% rMT in a randomized order. Based upon our data, we advise that at least 30 consecutive stimuli are required to obtain the most reliable estimate for CSE. Stimulation intensity and gender had no significant influence on CSE estimation. In addition, our results revealed that for subjects with a higher rMT, fewer consecutive stimuli were required to reach a stable estimate of CSE. The current findings can be used to optimize the design of similar TMS experiments.
Alaa F. Sheta
2016-04-01
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required of any quality business software product is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM's parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
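The estimation problem described above minimizes the difference between the model's predicted failures and the observed counts. A minimal sketch of that objective for the Exponential Model (EXPM), with a coarse grid search standing in for the Grey Wolf Optimizer, and invented parameter values a = 100, b = 0.1:

```python
import math

def expm(t, a, b):
    """Exponential SRGM mean value function: expected cumulative failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    """Objective from the abstract: squared difference between estimated
    and actual cumulative failure counts, summed over observation times."""
    a, b = params
    return sum((expm(t, a, b) - y) ** 2 for t, y in data)

# Synthetic failure data generated from a=100, b=0.1 (hypothetical values).
data = [(t, 100 * (1 - math.exp(-0.1 * t))) for t in range(1, 21)]

# Coarse grid search stands in for the Grey Wolf Optimizer used in the paper.
best = min(((a, b) for a in range(50, 151, 5)
            for b in [i / 100 for i in range(1, 31)]),
           key=lambda p: sse(p, data))
print(best)  # (100, 0.1)
```

The grid recovers the generating parameters exactly because the synthetic data are noise-free; with real failure data the minimum of the same objective is what GWO searches for.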
Estimation and enhancement of real-time software reliability through mutation analysis
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
Object recognition and pose estimation of planar objects from range data
Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael
1994-01-01
The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and
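The planar-surface segmentation step described above can be illustrated with a toy seed test: form a candidate plane from three range samples and classify other samples by perpendicular distance. The points and the tolerance are invented, not the scanner's values:

```python
def plane_from_points(p0, p1, p2):
    """Unit normal and offset of the plane through three non-collinear 3-D points."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    # Normal via cross product of two in-plane vectors.
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]
    d = -sum(n[i] * p0[i] for i in range(3))
    return n, d

def is_planar(point, n, d, tol=0.01):
    """Inlier test: perpendicular distance of a range sample to the candidate plane."""
    return abs(sum(n[i] * point[i] for i in range(3)) + d) <= tol

n, d = plane_from_points((0, 0, 1), (1, 0, 1), (0, 1, 1))  # plane z = 1
print(is_planar((0.5, 0.5, 1.0), n, d))   # True
print(is_planar((0.5, 0.5, 1.5), n, d))   # False
```

A region-growing segmenter repeats this test while expanding the inlier set and refitting the plane; intersecting the fitted planes then yields the orientation and point features used for pose estimation.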
Ways to increase the reliability of earthquake loss estimations in emergency mode
Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander
2016-04-01
The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many other countries show that the authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of the rough, rapid information provided in emergency mode by "global systems" (i.e., systems operated without regard to where the earthquake has occurred) depends strongly on many factors related to the input data and the simulation models used in such systems. The paper analyses the contributions of the different factors to the total "error" of fatality estimation in emergency mode. Examples from four strong events in Nepal, Italy, China and Italy lead to the conclusion that the reliability of loss estimates is influenced first of all by the uncertainties in determining the event parameters (coordinates, magnitude, source depth); this group of factors has the highest rating, with a degree of influence on the reliability of loss estimates of about 50%. Second place is taken by the group of factors responsible for simulating the macroseismic field, with a degree of influence of about 30%. Last place is taken by the group of factors describing the built-environment distribution and the regional vulnerability functions, which contributes about 20% of the loss-estimation error. Ways to minimize the influence of the different factors on the reliability of loss assessment in near real time are proposed. The first is to rate the seismological surveys for different zones in an attempt to decrease the uncertainty in the earthquake parameters determined as input in emergency mode. The second is to "calibrate" the "global systems" drawing advantage
Estimating uncertainty and reliability of social network data using Bayesian inference.
Farine, Damien R; Strandburg-Peshkin, Ariana
2015-09-01
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
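A minimal sketch of the kind of Bayesian edge-weight inference described above, assuming a simple beta-binomial model for the rate at which two individuals are sighted together (the prior and the sighting counts are invented). Sparse sampling should produce a wider credible interval than dense sampling of the same underlying rate:

```python
import random

def edge_posterior(together, total, a0=1.0, b0=1.0, draws=20000, seed=0):
    """Beta-binomial update for an association rate: prior Beta(a0, b0),
    likelihood Binomial(total, p) with `together` joint sightings.
    Returns the posterior mean and a Monte Carlo 95% credible interval."""
    a, b = a0 + together, b0 + total - together
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(a, b) for _ in range(draws))
    mean = a / (a + b)
    ci = (samples[int(0.025 * draws)], samples[int(0.975 * draws)])
    return mean, ci

# Sparse sampling: 3 joint sightings in 5 periods -> wide credible interval.
mean_sparse, ci_sparse = edge_posterior(3, 5)
# Dense sampling of a similar underlying rate -> much tighter interval.
mean_dense, ci_dense = edge_posterior(60, 100)
print(ci_sparse[1] - ci_sparse[0] > ci_dense[1] - ci_dense[0])  # True
```

This mirrors the paper's qualitative finding: well-sampled networks approach the true structure, while sparsely sampled ones carry honest, wide uncertainty around each edge weight.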
Kinematic parameter estimation using close range photogrammetry for sport applications
Magre Colorado, Luz Alejandra; Martínez Santos, Juan Carlos
2015-12-01
In this article, we show the development of a low-cost hardware/software system based on close-range photogrammetry to track the movement of a person performing weightlifting. The goal is to reduce costs for the trainers and athletes dedicated to this sport when analyzing the sportsman's performance and avoiding injuries or accidents. We used a webcam as the data-acquisition hardware and developed the software stack in Processing using the OpenCV library. Our algorithm extracts size, position, velocity, and acceleration measurements of the bar along the course of the exercise. We present detailed characteristics of the system with results in a controlled setting. The current work improves the detection and tracking capabilities of a previous version of this system by using the HSV color model instead of RGB. Preliminary results show that the system is able to profile the movement of the bar as well as determine the size, position, velocity, and acceleration values of a marker/target in the scene. The average error in finding the size of an object at four meters' distance is less than 4%, and the error of the acceleration value is 1.01% on average.
Lee, Song; Choi, Joon Il; Park, Michael Yong; Yeo, Dong Myung; Byun, Jae Young; Jung, Seung Eun; Rha, Sung Eun; Oh, Soon Nam; Lee, Young Joon [Dept. of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea College of Medicine, Seoul (Korea, Republic of)]
2014-04-15
To evaluate the intra- and interobserver reliability of the gray scale/dynamic range phantom image evaluation of ultrasonography using a standardized phantom, and to assess the effect of interactive education on that reliability. Three radiologists (a resident and two board-certified radiologists with 2 and 7 years of experience in evaluating ultrasound phantom images) performed the gray scale/dynamic range test for an ultrasound machine using a standardized phantom. They scored the number of visible cylindrical structures of varying degrees of brightness and made a pass or fail decision. First, they scored 49 phantom images twice from a 2010 survey with limited knowledge of phantom images. The radiologists then underwent two hours of interactive education on the phantom images and scored another 91 phantom images from a 2011 survey twice. Intra- and interobserver reliability before and after the interactive education session were analyzed using kappa (κ) analyses. Before education, the κ-values for intraobserver reliability for the radiologist with 7 years of experience, the radiologist with 2 years of experience, and the resident were 0.386, 0.469, and 0.465, respectively. After education, the κ-values improved (0.823, 0.611, and 0.711, respectively). For interobserver reliability, the κ-values were also better after the education for the 3 participants (0.067, 0.002, and 0.547 before education; 0.635, 0.667, and 0.616 after education, respectively). The intra- and interobserver reliability of the gray scale/dynamic range test was fair to substantial. Interactive education can improve reliability. For more reliable results, double-checking of phantom images by multiple reviewers is recommended.
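The agreement analysis above rests on kappa statistics for repeated pass/fail decisions. A minimal sketch of Cohen's kappa for two readings of the same phantom images (the ratings below are invented, not the study's data):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two ratings of the same items (e.g. pass/fail calls)."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical pass/fail decisions on 10 phantom images, first and second reading.
first  = ['P', 'P', 'F', 'P', 'F', 'P', 'P', 'F', 'P', 'P']
second = ['P', 'P', 'F', 'P', 'P', 'P', 'P', 'F', 'F', 'P']
print(round(cohen_kappa(first, second), 3))  # 0.524
```

With 8/10 raw agreement but a chance-agreement rate of 0.58, kappa lands at 0.524, i.e. "moderate" on the usual interpretation scale; this is why the abstract reports kappa rather than raw percent agreement.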
J. Gogoi
2012-01-01
This paper deals with the stress vs. strength problem incorporating multi-component systems, viz. standby redundancy. The models developed are illustrated assuming that all the components in the system, for both stress and strength, are independent and follow different probability distributions, viz. Exponential, Gamma and Lindley. Four different conditions for stress and strength have been considered for this investigation. Under these assumptions the reliabilities of the system have been obtained with the help of the particular forms of density functions of the n-standby system when all stress-strengths are random variables. The expressions for the marginal reliabilities R(1), R(2), R(3), etc. have been derived based on the stress-strength models. The corresponding system reliabilities Rn have then been computed numerically and presented in tabular form for different stress-strength distributions with different values of their parameters. Here we consider n = 3 for estimating the system reliability R3.
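For intuition on the stress-strength quantity underlying models like the one above, here is a hedged sketch of the simplest single-component exponential case, which has the closed form R = λ_stress/(λ_strength + λ_stress); the rate values are invented, and a Monte Carlo draw cross-checks the formula:

```python
import random

def reliability_exponential(lam_strength, lam_stress):
    """Closed-form R = P(Strength > Stress) for independent exponential variates."""
    return lam_stress / (lam_strength + lam_stress)

def reliability_mc(lam_strength, lam_stress, n=200000, seed=1):
    """Monte Carlo check of the same quantity."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(lam_strength) > rng.expovariate(lam_stress)
               for _ in range(n))
    return hits / n

exact = reliability_exponential(0.5, 1.5)   # 0.75
approx = reliability_mc(0.5, 1.5)
print(abs(exact - approx) < 0.01)  # True
```

The paper's n-standby expressions R(1), R(2), R(3) generalize this single-component probability to redundant components and to Gamma and Lindley distributions, where the integrals no longer reduce to such a simple ratio.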
Nyman, R. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden)]; Hegedus, D.; Tomic, B. [ENCONET Consulting GesmbH, Vienna (Austria)]; Lydell, B. [RSA Technologies, Vista, CA (United States)]
1997-12-01
This report summarizes results and insights from the final phase of an R&D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R&D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide, covering the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogeneous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failure (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R&D by SKI produced an analysis framework in support of practical applications of service data in PSA. This multi-purpose framework, termed 'PFCA' (Pipe Failure Cause and Attribute), defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of prior and posterior pipe rupture probability distributions. The framework supports LOCA frequency estimation and steam line break frequency estimation, as well as the development of strategies for optimized in-service inspection. 63 refs, 30 tabs, 22 figs.
王鹭; 张利; 王学芝
2015-01-01
As the central component of rotating machines, bearings make performance reliability assessment and remaining useful lifetime prediction crucially important in condition-based maintenance, to reduce maintenance costs and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings is proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, as an identification model, was used to predict the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, a degradation factor was defined according to the features, and a proportional hazards model was generated to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The positive results show the plausibility and effectiveness of the proposed approach, which can facilitate bearing reliability estimation and RUL prediction.
Validity and reliability of Nike+ Fuelband for estimating physical activity energy expenditure.
Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A
2015-01-01
The Nike+ Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike+ Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike+ Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike+ Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike+ Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike+ Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike+ Fuelband = 16 ± 13%; SWA = 18 ± 18%). Test-retest reliability for PAEE indicated good stability for the Nike+ Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike+ Fuelband provided valid and reliable estimates of PAEE, similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.
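The test-retest reliability above is reported as intraclass correlation coefficients. A minimal sketch of a one-way random-effects ICC(1,1) for two visits per subject, with invented PAEE values (kcal):

```python
def icc_oneway(trial1, trial2):
    """One-way random-effects ICC(1,1) for test-retest data, two trials per subject:
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW) with k = 2 trials."""
    n, k = len(trial1), 2
    rows = list(zip(trial1, trial2))
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between subjects
    msw = sum((x - m) ** 2
              for r, m in zip(rows, row_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical PAEE estimates (kcal) from two visits with the same device.
visit1 = [246, 301, 198, 275, 220, 260]
visit2 = [250, 295, 205, 270, 215, 265]
print(icc_oneway(visit1, visit2) > 0.9)  # True
```

High between-subject spread with small within-subject differences drives the ICC toward 1, which is the pattern behind the Fuelband's reported ICC of 0.96.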
Zhanshan Wang; Longhu Quan; Xiuchong Liu
2014-01-01
The control of a high performance alternative current (AC) motor drive under sensorless operation needs the accurate estimation of rotor position. In this paper, one method of accurately estimating rotor position by using both motor complex number model based position estimation and position estimation error suppression proportion integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rot...
Lane, Ginny G.; White, Amy E.; Henson, Robin K.
2002-01-01
Conducted a reliability generalizability study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…
Bahman Tarvirdizade
2014-01-01
We consider the estimation of stress-strength reliability based on lower record values when X and Y are independent but not identically distributed inverse Rayleigh random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as the Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
Dhatt, Sharmistha
2016-01-01
The reliability of kinetic parameters is crucial to understanding enzyme kinetics within the cellular system. The present study suggests a few cautions that need introspection for the estimation of parameters like K_M, V_max and K_I using Lineweaver-Burk plots. The quality of IC_50 also needs a thorough reinvestigation because of its direct link with the K_I and K_M values. Inhibition kinetics under both steady-state and non-steady-state conditions are studied, and the errors in the estimated parameters are compared against actual values to settle the question of their adequacy.
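A sketch of the Lineweaver-Burk estimation the abstract cautions about: fitting the double-reciprocal line 1/v = (K_M/V_max)(1/[S]) + 1/V_max by least squares. The K_M, V_max values and substrate grid are invented, and the data are noise-free, so the fit recovers the parameters exactly; the paper's point is that real measurement noise is badly distorted by this transformation:

```python
def lineweaver_burk(substrate, rates):
    """Least-squares fit of 1/v = (K_M/V_max)(1/[S]) + 1/V_max.
    Returns (K_M, V_max) recovered from slope and intercept."""
    xs = [1.0 / s for s in substrate]
    ys = [1.0 / v for v in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    v_max = 1.0 / intercept
    k_m = slope * v_max
    return k_m, v_max

# Noise-free Michaelis-Menten data with K_M = 2.0, V_max = 10.0 (hypothetical).
S = [0.5, 1.0, 2.0, 4.0, 8.0]
v = [10.0 * s / (2.0 + s) for s in S]
k_m, v_max = lineweaver_burk(S, v)
print(round(k_m, 6), round(v_max, 6))  # 2.0 10.0
```

Because taking reciprocals inflates the weight of low-rate (low-[S]) points, small errors there swing the fitted slope, which is one of the cautions the study raises about K_M, V_max and K_I estimates.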
Fang, Chih-Chiang; Yeh, Chun-Wu
2016-09-01
The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection, which provides helpful information to software developers and testers undertaking software development and software quality control. However, the explanation of the variance estimation of software fault detection is not transparent in previous studies, and this affects the derivation of the confidence interval about the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real datasets to show its flexibility.
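The paper derives its confidence interval via stochastic differential equations. As a simpler stand-in for intuition, a sketch based on the NHPP property that Var[N(t)] = m(t), applied to the Goel-Okumoto mean value function (the parameter values a, b are invented):

```python
import math

def mean_value(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - e^(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def fault_ci(t, a, b, z=1.96):
    """Normal-approximation 95% band for the cumulative fault count:
    for an NHPP, Var[N(t)] = m(t), so the band is m(t) +/- z * sqrt(m(t)).
    This is a textbook approximation, not the paper's SDE-based interval."""
    m = mean_value(t, a, b)
    half = z * math.sqrt(m)
    return max(m - half, 0.0), m + half

lo, hi = fault_ci(t=10, a=120, b=0.15)   # a, b are illustrative estimates
print(round(lo, 1), round(hi, 1))
```

The band around m(t) is exactly the kind of "potential hazard" envelope the abstract says engineers need when the point estimate alone is reported.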
Zaporozhanov V.A.
2012-12-01
In sports-pedagogical practice, the objective estimation of trainees' potential already at the initial stages of long-term preparation is regarded as a topical issue. Proper quantitative information allows preparation to be individualized according to the requirements of the processes being managed. The purpose of the research was to demonstrate, logically and metrically, the expedience of a metrical method for calculating the reliability of control-measurement results used for diagnosing psychophysical fitness and predicting the growth of trainees' skill in the chosen sport. Material and methods: the results of control measurements on four indices of psychophysical preparedness, together with experts' estimates of fitness, were analysed for 24 children enrolled at a gymnastics school. The results of the initial and final inspections of the gymnasts on the same control tests were processed by the methods of mathematical statistics. Metrical estimates of measurement reliability (the stability, concordance and informativeness of the control information) were computed for current diagnostics and prognosis of the sporting potential of those inspected. Results: the expedience of using metrical calculations of a complex estimate of trainees' psychophysical state is metrologically grounded. Conclusions: the research results confirm the expedience of calculating a complex estimate of trainees' psychophysical features for diagnosing fitness in the chosen sport and predicting skill at the subsequent stages of preparation.
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.
Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina
2015-01-01
Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental researches consist in estimation of the reliability and unreliability functions and hazard rate of the helical vertical axis wind turbine designed and patented to climatic conditions for Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed.
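The Monte Carlo approach with triangular distributions can be sketched as follows; the wind-speed and efficiency triangles, the swept area, and the simple power model are invented stand-ins, not the paper's values:

```python
import random

def simulate_daily_energy(n=50000, seed=42):
    """Monte Carlo draws of daily energy output (kWh) from triangular inputs,
    mirroring the abstract's use of triangular distributions; all min/mode/max
    values below are hypothetical."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        wind = rng.triangular(2.0, 12.0, 5.0)        # m/s: (low, high, mode)
        eff = rng.triangular(0.25, 0.45, 0.35)       # overall conversion efficiency
        # P = 0.5 * rho * A * eff * v^3, with rho = 1.225 kg/m^3, A = 3 m^2.
        power_kw = 0.5 * 1.225 * 3.0 * eff * wind ** 3 / 1000.0
        outputs.append(power_kw * 24.0)              # kWh per day
    return sorted(outputs)

energy = simulate_daily_energy()
median = energy[len(energy) // 2]
p_low = sum(e < median for e in energy) / len(energy)   # empirical CDF at median
print(abs(p_low - 0.5) < 0.01)  # True
```

Binning the sorted draws gives the relative-frequency histogram, and the running cumulative count gives the ogive (cumulative distribution curve) from which the probability of reaching a given daily energy output is read, as in the paper's analysis.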
On the efficiency and reliability of cluster mass estimates based on member galaxies
Biviano, A; Diaferio, A; Dolag, K; Girardi, M; Murante, G
2006-01-01
We study the efficiency and reliability of cluster mass estimators that are based on the projected phase-space distribution of galaxies in a cluster region. To this aim, we analyse a data-set of 62 clusters extracted from a concordance LCDM cosmological hydrodynamical simulation. Galaxies (or Dark Matter particles) are first selected in cylinders of given radius (from 0.5 to 1.5 Mpc/h) and ~200 Mpc/h length. Cluster members are then identified by applying a suitable interloper removal algorithm. Two cluster mass estimators are considered: the virial mass estimator (Mvir), and a mass estimator (Msigma) based entirely on the cluster velocity dispersion estimate. Mvir overestimates the true mass by ~10%, and Msigma underestimates the true mass by ~15%, on average, for sample sizes of > 60 cluster members. For smaller sample sizes, the bias of the virial mass estimator substantially increases, while the Msigma estimator becomes essentially unbiased. The dispersion of both mass estimates increases by a factor ~2 a...
Using operational data to estimate the reliable yields of water-supply wells
Misstear, Bruce D. R.; Beeson, Sarah
The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere.
Zhanshan Wang
2014-01-01
The control of a high-performance alternating current (AC) motor drive under sensorless operation requires accurate estimation of the rotor position. In this paper, a method for accurately estimating rotor position is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM), combining position estimation based on a complex-number motor model with a proportional-integral (PI) controller that suppresses the position estimation error. To guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent-magnet flux of the SPMSM by extended Kalman filter (EKF) is also proposed; together these form an effective combined method for high-accuracy sensorless control of the SPMSM. Simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.
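As an illustration of the flux-identification idea, the sketch below runs a scalar Kalman filter that recovers a constant parameter (standing in for the permanent-magnet flux) from noisy linear measurements. The measurement gain, noise levels, and constant-parameter model are illustrative assumptions, not the paper's SPMSM model or its full EKF.

```python
import numpy as np

# Minimal sketch: estimating a constant parameter (e.g. a flux linkage)
# with a scalar Kalman filter from noisy linear measurements.
# The "motor" here is a stand-in, not the SPMSM model of the paper.

rng = np.random.default_rng(0)

true_flux = 0.12          # [Wb] true (unknown) constant parameter
h = 50.0                  # assumed gain relating flux to the observed EMF
meas_noise = 0.5          # std. dev. of measurement noise

x = 0.0                   # initial flux estimate
P = 1.0                   # initial estimate variance
Q = 1e-8                  # process noise (flux assumed nearly constant)
R = meas_noise ** 2       # measurement noise variance

for _ in range(500):
    z = h * true_flux + rng.normal(0.0, meas_noise)  # noisy EMF sample
    # Predict: constant-parameter model, x unchanged, variance grows by Q
    P = P + Q
    # Update: standard Kalman gain for the scalar measurement z = h*x + v
    K = P * h / (h * h * P + R)
    x = x + K * (z - h * x)
    P = (1.0 - K * h) * P

print(f"estimated flux: {x:.4f}")
```

With 500 samples the estimate converges close to the true value, mirroring how an EKF can track a slowly varying flux parameter online.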
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from the evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
Generating human reliability estimates using expert judgment. Volume 2. Appendices [PWR; BWR]
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results.
RELAP5/MOD3.3 Best Estimate Analyses for Human Reliability Analysis
Andrej Prošek
2010-01-01
To estimate the success criteria time windows of operator actions, a conservative approach was used in conventional probabilistic safety assessment (PSA). The current PSA standard recommends the use of best-estimate codes. The purpose of the study was to estimate operator action success criteria time windows in scenarios in which human actions supplement safety system actuations, as needed for an updated human reliability analysis (HRA). The calculations used the RELAP5/MOD3.3 best-estimate thermal-hydraulic computer code and a qualified RELAP5 input model representing a Westinghouse two-loop pressurized water reactor. The results of the deterministic safety analysis were examined to determine the latest time at which the operator action can be performed while still satisfying the safety criteria. The results showed that uncertainty analysis of the realistic calculation is in general not needed for human reliability analysis when additional time is available and/or the event is not a significant contributor to the risk.
Singh, A.; Deeds, N.; Kelley, V.
2012-12-01
Estimating how much water is being used by various water users is key to effective management and optimal utilization of groundwater resources. This is especially true for aquifers like the Ogallala that are severely stressed and have displayed declining trends over many years. The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Water users within the 16 counties that comprise the HPWD draw from the Ogallala extensively. The HPWD has recently proposed flow meters as well as various 'alternative methods' for water users to report water usage. Alternative methods include a) using site-specific energy conversion factors to convert the total amount of energy used (for pumping stations) to water pumped, b) reporting nozzle package specifications (on center pivot irrigation systems) and hours of usage, and c) reporting concentrated animal feeding operations (CAFOs). The focus of this project was to evaluate the reliability and effectiveness of each of these water use estimation techniques for regulatory purposes. The reliability and effectiveness of direct flow-metering devices was also addressed. Findings indicate that, due to site-specific variability and hydrogeologic heterogeneity, the alternative methods for estimating water use can have significant uncertainties associated with their estimates. The impact of these uncertainties on overall water usage, conservation, and management was also evaluated. The findings were communicated to the Stakeholder Advisory Group and the Water Conservation District with guidelines and recommendations on how best to implement the various techniques.
A Penalized Likelihood Approach to Parameter Estimation with Integral Reliability Constraints
Barry Smith
2015-06-01
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent, and possibly small, samples of measurements of strength and of breakdown pressure (stress). The goal of the researcher is to use the measurements to obtain inference on the reliability, which is the probability that stress will exceed strength. This paper addresses the case where the reliability is expressed in terms of an integral with no closed-form solution and where the number of observed values of stress and strength is small. We find that the Lagrange approach to estimating the constrained likelihood, necessary for inference, often performs poorly; we introduce a penalized likelihood method, which appears to work well throughout. We use third-order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.
Quek, June; Sandra G. Brauer; Treleaven, Julia; Pua, Yong-Hao; Mentiplay, Benjamin; Clark, Ross Allan
2014-01-01
Background: Concurrent validity and intra-rater reliability of a customized Android phone application to measure cervical-spine range of motion (ROM) have not previously been validated against a gold-standard three-dimensional motion analysis (3DMA) system. Findings: Twenty-one healthy individuals (age: 31 ± 9.1 years; male: 11) participated, with 16 re-examined for intra-rater reliability 1-7 days later. An Android phone was fixed on a helmet, which was then securely fastened on the participan...
S. Van Niekerk
2008-02-01
Measuring upper quadrant posture and movement is a challenge to researchers and clinicians. A range of postural measurement tools is commonly used in the clinical setting and in research projects to evaluate postural alignment, but information about the validity and reliability of these tools, and thus the selection of the optimal tool for a specific project, is often uncertain. This review aims to make recommendations to clinicians and researchers regarding practical, valid and reliable tools to assess upper quadrant posture and range of motion. Electronic databases and key journals were searched. An adapted appraisal tool was utilised to assess the methodology of each of the nine selected articles. Nine eligible articles, reporting on the goniometer, flexicurve and inclinometer, were included. This review highlights the fact that a range of two-dimensional (2D) posture measurement tools are being used in clinical practice and research. Although the findings for the reliability and validity of the tools included in this review appear promising, strong recommendations are limited by the imprecision of the results. Thus, the primary issue hampering a recommendation of the most reliable and valid tool for clinical or research use is the limitations pertaining to the analysis and interpretation of the data.
Anupam Pathak
2014-11-01
Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage is its flexibility in modelling lifetime events relative to other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s); the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
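To make the reliability function concrete, the sketch below draws samples from an exponentiated Rayleigh distribution with CDF F(x) = (1 - exp(-λx²))^α and forms a plug-in maximum likelihood estimate of R(t) = P(X > t). For simplicity λ is treated as known, which gives the shape MLE a closed form; the paper estimates both parameters (and also the UMVUE), so this is only an illustrative reduction.

```python
import math
import random

# Sketch: plug-in MLE of R(t) = P(X > t) for the exponentiated Rayleigh
# distribution with CDF F(x; alpha, lam) = (1 - exp(-lam * x**2))**alpha, x > 0.
# With lam treated as known, W_i = -log(1 - exp(-lam * x_i**2)) ~ Exp(alpha),
# so the shape MLE has the closed form alpha_hat = n / sum(W_i).

random.seed(1)
alpha, lam, n, t = 2.0, 0.5, 5000, 1.5   # illustrative parameter choices

def sample(alpha, lam):
    # Inverse-CDF sampling: solve F(x) = u for x.
    u = random.random()
    return math.sqrt(-math.log(1.0 - u ** (1.0 / alpha)) / lam)

xs = [sample(alpha, lam) for _ in range(n)]
alpha_hat = n / sum(-math.log(1.0 - math.exp(-lam * x * x)) for x in xs)

R_true = 1.0 - (1.0 - math.exp(-lam * t * t)) ** alpha
R_hat = 1.0 - (1.0 - math.exp(-lam * t * t)) ** alpha_hat
print(f"alpha_hat = {alpha_hat:.3f}, R_hat(t) = {R_hat:.3f}, R_true(t) = {R_true:.3f}")
```

Simulation studies like the paper's would repeat this over many replicates to compare bias and MSE of competing estimators.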
Prato, Carlo Giacomo; Rasmussen, Thomas Kjær; Nielsen, Otto Anker
2014-01-01
both congestion and reliability terms. Results illustrated that the value of time and the value of congestion were significantly higher in the peak period because of possible higher penalties for drivers being late and consequently possible higher time pressure. Moreover, results showed… that the marginal rate of substitution between travel time reliability and total travel time did not vary across periods and traffic conditions, with the obvious caveat that the absolute values were significantly higher for the peak period. Last, results showed the immense potential of exploiting the growing… availability of large amounts of data from cheap and enhanced technology to obtain estimates of the monetary value of different travel time components from the observation of actual behavior, with arguably potential significant impact on the realism of large-scale models…
LUMOS - A Sensitive and Reliable Optode System for Measuring Dissolved Oxygen in the Nanomolar Range
Lehner, Philipp; Larndorfer, Christoph; Garcia-Robledo, Emilio;
2015-01-01
Most commercially available optical oxygen sensors target the measuring range of 300 to 2 μmol L⁻¹. However, these are not suitable for investigating the nanomolar range, which is relevant for many important environmental situations. We therefore developed a miniaturized phase fluorimeter based m...
Zahra Hooshyari
2013-04-01
Objective: The aim of the present study was to estimate the validity and reliability of the ASSIST instrument in Iran. Method: The study population comprised Iranian alcohol and drug users and abusers in the year 1390 who had referred to rehabilitation camps and addiction treatment centers for self-improvement. A sample of 2600, average age 36.5, was selected by cluster random sampling in eight provinces. The ASSIST and a demographic form were administered to the whole sample group. In addition, to estimate validity, 300 members of the main sample were interviewed using the ASI, SDS, DAST and DSM-IV criteria. Findings: ASSIST reliability, estimated by Cronbach's alpha, was between 0.79 and 0.95 across all domains. Data analyses showed fair criterion, construct, discriminant and multidimensional validity. Discriminative validity of the ASSIST was investigated by comparing ASSIST scores across groups of dependent, abusing and using clients; there was significant agreement between these scores and DSM-IV scores. Construct validity of the ASSIST was investigated by statistical comparison with health scores. The ASSIST's cut-off points classify clients into three categories in terms of severity of addiction. Conclusion: We recommend that researchers use this instrument for research, screening or other purposes in Iran.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
Sullivan, Michael [Nexant Inc., Burlington, MA (United States); Schellenberg, Josh [Nexant Inc., Burlington, MA (United States); Blundell, Marshall [Nexant Inc., Burlington, MA (United States)
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.
Denise Güthlin
Due to time and financial constraints, indices are often used to obtain landscape-scale estimates of relative species abundance. Using two different field methods and comparing the results can help to detect possible bias or a non-monotonic relationship between an index and the true abundance, providing more reliable results. We used data obtained from camera traps and feces counts to independently estimate the relative abundance of red foxes in the Black Forest, a forested landscape in southern Germany. Applying negative binomial regression models, we identified landscape parameters that influence red fox abundance, which we then used to predict relative red fox abundance. We compared the estimated regression coefficients of the landscape parameters and the predicted abundance of the two methods. Further, we compared the costs and the precision of the two field methods. The predicted relative abundances were similar between the two methods, suggesting that the two indices were closely related to the true abundance of red foxes. For both methods, landscape diversity and edge density best described differences in the indices and had positive estimated effects on relative fox abundance. In our study the costs of each method were of similar magnitude, but the sample size obtained from the feces counts (262 transects) was larger than that of the camera traps (88 camera locations). The precision of the camera traps was lower than that of the feces counts. The approach we applied can be used as a framework to compare and combine the results of two or more field methods for estimating abundance, and thereby enhance the reliability of the result.
Murray, Aja Louise; Booth, Tom; McKenzie, Karen; Kuenssberg, Renate
2016-06-01
It has previously been noted that inventories measuring traits that originated in a psychopathological paradigm can often reliably measure only a very narrow range of trait levels that are near and above clinical cutoffs. Much recent work has, however, suggested that autism spectrum disorder traits are on a continuum of severity that extends well into the nonclinical range. This implies a need for inventories that can capture individual differences in autistic traits from very high levels all the way to the opposite end of the continuum. The Autism-Spectrum Quotient (AQ) was developed based on a closely related rationale, but there has, to date, been no direct test of the range of trait levels that the AQ can reliably measure. To assess this, we fit a bifactor item response theory model to the AQ. Results suggested that AQ measures moderately low to moderately high levels of a general autistic trait with good measurement precision. The reliable range of measurement was significantly improved by scoring the instrument using its 4-point response scale, rather than dichotomizing responses. These results support the use of the AQ in nonclinical samples, but suggest that items measuring very low and very high levels of autistic traits would be beneficial additions to the inventory.
Near field phased array DOA and range estimation of UHF RFID tags
Huiting, Jordy; Kokkeler, André B.J.; Smit, Gerard J.M.
2015-01-01
This paper presents a near field localization system based on a phased array for UHF RFID tags. To estimate angle and range the system uses a two-dimensional MUSIC algorithm. A four channel phased array is used to experimentally verify the estimation of angle and range for an EPC gen2 tag. The syste
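For readers unfamiliar with MUSIC, the sketch below runs the standard far-field 1D variant on a simulated four-element uniform linear array. The paper's near-field version scans angle and range jointly; all parameters here (spacing, SNR, snapshot count, source angle) are illustrative assumptions.

```python
import numpy as np

# Far-field 1D MUSIC sketch for a 4-element uniform linear array (ULA).
# The paper uses a 2D near-field MUSIC that estimates angle AND range;
# here only the angle spectrum is computed, with toy parameters.

rng = np.random.default_rng(42)
M, d = 4, 0.5                  # sensors, spacing in wavelengths
theta_true = 20.0              # source direction [deg]
snapshots, snr = 200, 20.0

def steering(theta_deg):
    # Array response of the ULA for a plane wave from theta_deg
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Simulate snapshots: one narrowband source plus white noise
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
noise *= 10 ** (-snr / 20)
X = np.outer(steering(theta_true), s) + noise

R = X @ X.conj().T / snapshots             # sample covariance
eigval, eigvec = np.linalg.eigh(R)         # eigenvalues in ascending order
En = eigvec[:, :M - 1]                     # noise subspace (one source)

grid = np.arange(-90.0, 90.0, 0.1)
spectrum = np.array(
    [1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2 for th in grid]
)
peak = grid[np.argmax(spectrum)]
print(f"MUSIC peak at {peak:.1f} deg")     # near theta_true
```

The near-field extension replaces the plane-wave steering vector with one parameterized by both angle and range, and scans a 2D grid instead of this 1D one.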
A state-space model for estimating detailed movements and home range from acoustic receiver data
Pedersen, Martin Wæver; Weng, Kevin
2013-01-01
We present a state-space model for acoustic receiver data to estimate detailed movement and home range of individual fish while accounting for spatial bias. An integral part of the approach is the detection function, which models the probability of logging tag transmissions as a function… is used to estimate home range and movement of a reef fish in the Pacific Ocean.
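The detection-function idea can be sketched with an assumed half-normal form, where the probability of a receiver logging a transmission decays with tag-receiver distance. The functional form, range scale, and grid-search fit below are illustrative assumptions, not the paper's estimator.

```python
import math
import random

# Sketch of a half-normal detection function for acoustic receivers:
# probability that a receiver logs a transmission at distance d is
# p(d) = exp(-d**2 / (2 * sigma**2)). sigma and the fit are illustrative.

random.seed(7)
sigma_true = 150.0  # [m] assumed effective detection scale

def p_detect(d, sigma):
    return math.exp(-d * d / (2.0 * sigma * sigma))

# Simulate logged/missed transmissions at known tag-receiver distances
dists = [random.uniform(0.0, 500.0) for _ in range(4000)]
hits = [random.random() < p_detect(d, sigma_true) for d in dists]

# Maximum-likelihood fit of sigma by coarse grid search over candidates
def loglik(sigma):
    ll = 0.0
    for d, h in zip(dists, hits):
        p = min(max(p_detect(d, sigma), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if h else math.log(1.0 - p)
    return ll

sigma_hat = max(range(50, 300, 5), key=loglik)
print(f"sigma_hat = {sigma_hat} m")
```

In the paper's state-space setting, this detection model enters the observation likelihood, so that missed transmissions still carry information about where the fish was not.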
48 CFR 1405.404 - Release of long-range acquisition estimates.
2010-10-01
Section 1405.404 of the Federal Acquisition Regulations System (DEPARTMENT OF THE...): Release of long-range acquisition estimates.
48 CFR 1305.404 - Release of long-range acquisition estimates.
2010-10-01
Section 1305.404 of the Federal Acquisition Regulations System (DEPARTMENT OF COMMERCE): Release of long-range acquisition estimates.
Rater reliability of fragile X mutation size estimates: A multilaboratory analysis
Fisch, G.S. [Kings County Hospital Center and SUNY/Health Science Center, Brooklyn, NY (United States); Carpenter, N. [Chapman Institute of Medical Genetics, Tulsa, OK (United States); Maddalena, A. [Medical College of Virginia, Richmond, VA (United States)] [and others]
1996-08-09
Notwithstanding the use of comparable molecular protocols, description and measurement of the fra(X) (fragile X) mutation may vary according to its appearance as a discrete band, smear, multiple bands, or mosaic. Estimation of mutation size may also differ from one laboratory to another. We report on the description of a mutation size estimate for a large sample of individuals tested for the fra(X) pre- or full mutation. Of 63 DNA samples evaluated, 45 were identified previously as fra(X) pre- or full mutations. DNA from 18 unaffected individuals was used as control. Genomic DNA was extracted from peripheral blood, and DNA fragments from each of four laboratories were sent to a single center where Southern blots were prepared and hybridized with the pE5.1 probe. Photographs from autoradiographs were returned to each site, and raters blind to the identity of the specimens were asked to evaluate them. Raters' estimates of mutation size compared favorably with a reference test. Intrarater reliability was good to excellent. Variability in mutation size estimates was comparable across band types. Variability in estimates was moderate, and was significantly correlated with absolute mutation size and band type. 9 refs., 1 fig., 3 tabs.
Range and Size Estimation Based on a Coordinate Transformation Model for Driving Assistance Systems
Wu, Bing-Fei; Lin, Chuan-Tsai; Chen, Yen-Lin
This paper presents new approaches for the estimation of range between the preceding vehicle and the experimental vehicle, estimation of vehicle size and its projective size, and dynamic camera calibration. First, our proposed approaches adopt a camera model to transform coordinates from the ground plane onto the image plane to estimate the relative position between the detected vehicle and the camera. Then, to estimate the actual and projective size of the preceding vehicle, we propose a new estimation method. This method can estimate the range from a preceding vehicle to the camera based on contact points between its tires and the ground and then estimate the actual size of the vehicle according to the positions of its vertexes in the image. Because the projective size of a vehicle varies with respect to its distance to the camera, we also present a simple and rapid method of estimating a vehicle's projective height, which allows a reduction in computational time for size estimation in real-time systems. Errors caused by the application of different camera parameters are also estimated and analyzed in this study. The estimation results are used to determine suitable parameters during camera installation to suppress estimation errors. Finally, to guarantee robustness of the detection system, a new efficient approach to dynamic calibration is presented to obtain accurate camera parameters, even when they are changed by camera vibration owing to on-road driving. Experimental results demonstrate that our approaches can provide accurate and robust estimation results of range and size of target vehicles.
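The ground-plane range idea can be sketched with a simple pinhole model: assuming a flat road and a horizontal optical axis, the image row of the tire-ground contact point determines range, which in turn gives the projective size. The camera parameters below are illustrative, not the paper's calibration, and the paper's full coordinate transformation also handles tilt and dynamic recalibration.

```python
# Sketch of ground-plane range estimation from the image row of the
# tire-ground contact point, for a forward-looking pinhole camera with
# a horizontal optical axis. All parameters are illustrative assumptions.

H = 1.2      # camera height above the road [m]
f = 800.0    # focal length [pixels]
cy = 240.0   # principal point row (image center) [pixels]

def range_from_contact_row(y_contact):
    """Range Z to a road point that projects to image row y_contact.

    A flat-road point at distance Z ahead and H below the camera
    projects to y = cy + f * H / Z, so Z = f * H / (y - cy).
    """
    return f * H / (y_contact - cy)

def projected_height(real_height_m, Z):
    """Projective (pixel) height of an object of given real height at range Z."""
    return f * real_height_m / Z

Z = range_from_contact_row(300.0)          # contact point 60 px below center
print(f"range: {Z:.1f} m")                 # 800 * 1.2 / 60 = 16.0 m
print(f"1.5 m tall vehicle appears {projected_height(1.5, Z):.1f} px tall")
```

This is why the projective size shrinks with distance: both range and pixel height follow from the same similar-triangles relation.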
An easy and reliable automated method to estimate oxidative stress in the clinical setting.
Vassalle, Cristina
2008-01-01
During the last few years, reliable and simple tests have been proposed to estimate oxidative stress in vivo. Many of them can be easily adapted to automated analyzers, permitting the simultaneous processing of a large number of samples in a greatly reduced time, avoiding manual sample and reagent handling, and reducing variability sources. In this chapter, description of protocols for the estimation of reactive oxygen metabolites and the antioxidant capacity (respectively the d-ROMs and OXY Adsorbent Test, Diacron, Grosseto, Italy) by using the clinical chemistry analyzer SYNCHRON, CX 9 PRO (Beckman Coulter, Brea, CA, USA) is reported as an example of such an automated procedure that can be applied in the clinical setting. Furthermore, a calculation to compute a global oxidative stress index (Oxidative-INDEX), reflecting both oxidative and antioxidant counterparts, and, therefore, a potentially more powerful parameter, is also described.
Johnson, Nathan H.
This dissertation is concerned with several problems of instrumentation and data analysis encountered by the Apache Point Observatory Lunar Laser-ranging Operation. Chapter 2 considers crosstalk between elements of a single-photon avalanche photodiode detector. Experimental and analytic methods were developed to determine crosstalk rates, and empirical findings are presented. Chapter 3 details electronics developments that have improved the quality of data collected by detectors of the same type. Chapter 4 explores the challenges of estimating gravitational parameters on the basis of ranging data collected by this and other experiments and presents resampling techniques for the derivation of standard errors for estimates of such parameters determined by the Planetary Ephemeris Program (PEP), a solar-system model and data-fitting code. Possible directions for future work are discussed in Chapter 5. A manual of instructions for working with PEP is presented as an appendix.
Enhancing the Communication Range and Reliability of Mobile ADHOC Network Using AODV-OSPF Protocol
Onkar Nath Thakur; Amit Saxena
2014-02-01
The increasing density of nodes and the communication range of mobile nodes raise problems such as packet dropping and degraded network performance. To improve the performance of the AODV routing protocol in mobile ad hoc networks, various authors have used adjacency matrices of different sizes within AODV. This paper proposes an improved AODV routing protocol that uses the OSPF routing adjacency matrix in AODV. The OSPF matrix is larger than that of AODV; increasing the matrix size extends the communication range of mobile nodes, and the increased communication range raises the throughput of the mobile ad hoc network. The proposed model is simulated in ns-2.34 and compared with the AODV routing protocol. Experimental results show better performance for the AODV-OSPF routing protocol.
Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)
2013-07-01
The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ~50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model, and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53%, while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than
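The theoretical uncertainty reduction quoted above comes from the Bayesian posterior covariance. For a linear Gaussian inversion it can be sketched directly; the toy transport operator and error levels below are illustrative assumptions, not the paper's mesoscale system.

```python
import numpy as np

# Sketch of the theoretical uncertainty reduction in a linear Gaussian
# flux inversion: for observations y = H x + e with prior covariance B
# and observation-error covariance R, the posterior covariance is
#   A = (H^T R^-1 H + B^-1)^-1,
# and the uncertainty reduction for flux i is 1 - sqrt(A_ii / B_ii).
# Dimensions and error levels are toy values.

rng = np.random.default_rng(3)
n_flux, n_obs = 5, 40

H = rng.standard_normal((n_obs, n_flux))   # toy "transport" operator
B = np.diag(np.full(n_flux, 1.0))          # prior flux error covariance
R = np.diag(np.full(n_obs, 4.0))           # observation error covariance

A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))
reduction = 1.0 - np.sqrt(np.diag(A) / np.diag(B))
print("uncertainty reduction per flux:", np.round(reduction, 2))
```

Because H^T R^-1 H is positive semidefinite, the posterior variances never exceed the prior variances, so the reduction is always between 0 and 1; the paper's contribution is testing whether such theoretical values match actual misfits against independent data.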
Newcomb, Sandra
2010-01-01
Children who are identified as visually impaired frequently have a functional vision assessment as one way to determine how their visual impairment affects their educational performance. The CVI Range is a functional vision assessment for children with cortical visual impairment. The purpose of the study presented here was to examine the…
Age estimation from physiological changes of teeth: A reliable age marker?
Nishant Singh
2014-01-01
Background: Age is an essential factor in establishing the identity of a person. Teeth are among the most durable and resilient parts of the skeleton. Gustafson (1950) suggested the use of six retrogressive dental changes that are seen with increasing age. Aim: The aim of the study was to evaluate the results and to check the reliability of the modified Gustafson's method for determining the age of an individual. Materials and Methods: A total of 70 patients in the age group of 20-65 years, undergoing extraction, were included in the present work. Ground sections of the extracted teeth were prepared and examined under the microscope. Modified Gustafson's criteria were used for the estimation of age. Degree of attrition, root translucency, secondary dentin deposition, cementum apposition, and root resorption were measured. A linear regression formula was obtained using different statistical equations in the sample of 70 patients. Results: The mean age difference over the 70 cases studied was ±2.64 years. The difference between actual and calculated age was significant at the 5% level of significance, that is, t-cal > t-tab (t-cal = 7.72), P < 0.05, indicating that the results were statistically significant. Conclusion: The present study concludes that Gustafson's method is a reliable method for age estimation, with some proposed modifications.
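The regression step can be sketched as an ordinary least-squares fit of age on a Gustafson-style total score; the synthetic scores and coefficients below are illustrative only, not the study's data or formula.

```python
import random

# Sketch of the Gustafson-style age estimate: each tooth receives scores
# for retrogressive changes (attrition, secondary dentin, cementum
# apposition, root translucency, ...); the total score is regressed
# linearly against known age, and the fitted line predicts age for new
# cases. All data here are synthetic.

random.seed(5)
cases = []
for _ in range(70):
    age = random.uniform(20.0, 65.0)
    # assume the total score rises roughly linearly with age, plus noise
    score = 0.25 * age - 3.0 + random.gauss(0.0, 1.0)
    cases.append((score, age))

# Ordinary least squares: age = a + b * score
n = len(cases)
mx = sum(s for s, _ in cases) / n
my = sum(a for _, a in cases) / n
b = sum((s - mx) * (a - my) for s, a in cases) / sum((s - mx) ** 2 for s, _ in cases)
a = my - b * mx

pred = a + b * 8.0  # predicted age for a hypothetical total score of 8
print(f"age ≈ {a:.1f} + {b:.1f} * score;  score 8 → {pred:.1f} years")
```

The study's reported ±2.64-year mean difference corresponds to the residual error of such a fitted line on its 70 real cases.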
Enhancing the Communication Range and Reliability of Mobile ADHOC Network Using AODV-OSPF Protocol
Onkar Nath Thakur,; Amit Saxena
2014-01-01
The increasing density of nodes and the communication range of mobile nodes raise problems such as packet dropping and degraded network performance. To improve the performance of the AODV routing protocol in mobile ad hoc networks, various authors have used adjacency matrices of different sizes within AODV. This paper proposes an improved AODV routing protocol that uses the OSPF routing adjacency matrix within AODV. The size of the OSPF matrix is large instead of AO...
Reliability of speaking and maximum voice range measures in screening for dysphonia.
Ma, Estella; Robertson, Jennie; Radford, Claire; Vagne, Sarah; El-Halabi, Ruba; Yiu, Edwin
2007-07-01
Speech range profile (SRP) is a graphical display of the frequency-intensity interactions occurring during functional speech activity. A few studies have suggested potential clinical applications of the SRP. However, these studies are limited to qualitative case comparisons and vocally healthy participants. The present study aimed to examine the effects of voice disorders on speaking and maximum voice ranges in a group of vocally untrained women. It also aimed to examine whether voice limit measures derived from the SRP were as sensitive as those derived from the voice range profile (VRP) in distinguishing dysphonic from healthy voices. Ninety dysphonic women with laryngeal pathologies and 35 women with normal voices, who served as controls, participated in this study. Each subject recorded a VRP for her physiological vocal limits. In addition, each subject read aloud the "North Wind and the Sun" passage to record an SRP. All the recordings were captured and analyzed with Soundswell's computerized real-time phonetogram Phog 1.0 (Hitech Development AB, Täby, Sweden). The SRPs and the VRPs were compared between the two groups of subjects. Univariate analysis results demonstrated that individual SRP measures were less sensitive than the corresponding VRP measures in discriminating dysphonic from normal voices. However, stepwise logistic regression analyses revealed that a combination of only two SRP measures was almost as effective as a combination of three VRP measures in predicting the presence of dysphonia (overall prediction accuracy: 93.6% for SRP vs 96.0% for VRP). These results suggest that in a busy clinic where quick voice screening results are desirable, the SRP can be an acceptable alternative to the VRP.
Estimating home-range size: when to include a third dimension?
Monterroso, Pedro; Sillero, Neftalí; Rosalino, Luís Miguel; Loureiro, Filipa; Alves, Paulo Célio
2013-07-01
Most studies dealing with home ranges treat the study areas as if they were totally flat, working only in two dimensions, when in reality they are irregular surfaces displayed in three dimensions. By disregarding the third dimension (i.e., topography), home-range size underestimates the surface actually occupied by the animal, potentially leading to misinterpretations of the animals' ecological needs. We explored the influence of considering the third dimension in the estimation of home-range size by modeling the variation between planimetric and topographic estimates at several spatial scales. Our results revealed that planimetric approaches underestimate home-range size, with differences ranging from nearly zero up to 22%. The difference between planimetric and topographic estimates of home-range size produced highly robust models using the average slope as the sole independent factor. Moreover, our models suggest that planimetric estimates in areas with an average slope of 16.3° (±0.4) or more will incur errors ≥5%. Alternatively, the altitudinal range can be used as an indicator of the need to include topography in home-range estimates. Our results confirmed that home-range estimates can be significantly biased when topography is disregarded. We suggest that study areas where home-range studies will be performed should first be scoped for their altitudinal range, which can serve as an indicator of the need for the posterior use of average slope values to model the surface area used and/or available for the studied animals.
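The geometric reason for the underestimation can be sketched in a few lines. For a planar cell of constant slope, the true surface area exceeds its map-projected (planimetric) area by a factor of 1/cos(slope); the numbers below are illustrative of this idealized single-cell case, not the paper's modeled home ranges:

```python
import math

# Slope correction for a uniformly sloped planar cell: true surface area
# is the planimetric (map) area divided by cos(slope). Illustrative only;
# the paper's >=5% error figure refers to modeled home ranges, not this
# idealized case.

def topographic_area(planimetric_area, slope_deg):
    """Surface area of a uniformly sloped planar cell from its map area."""
    return planimetric_area / math.cos(math.radians(slope_deg))

def underestimation_pct(slope_deg):
    """Percent by which the planimetric area understates the surface area."""
    return (1.0 - math.cos(math.radians(slope_deg))) * 100.0

flat = topographic_area(100.0, 0.0)            # no slope: areas coincide
err_at_threshold = underestimation_pct(16.3)   # ~4% for one uniform cell
```

Real terrain is rougher than a single tilted plane, which is why the paper's threshold slope corresponds to a larger aggregate error than this cell-level bound.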
Range estimation by Doppler of multi-line in radiated noise spectrum
WU Guoqing; MA Li
2005-01-01
A method is developed for estimating the vertical range from a receiver to a moving vessel, assumed to move along a straight line at constant velocity, by applying the Doppler frequency shifts of multiple lines in the radiated noise spectrum. The method is based on passive ranging with a single sensor; the depth of the sea and other environmental parameters need not be known. First, the Wigner-Ville distribution is used as the instantaneous frequency estimator to find the instantaneous frequencies of the multiple lines treated as a signal. Then Doppler frequency shift basis functions are defined, based on the matching pursuit algorithm with a variable-span search strategy, and the minimum spatial distance between the signal and the Doppler frequency shift basis functions is sought in a five-dimensional space. The basis function attaining the minimum spatial distance yields the estimates of the range and speed of the moving vessel. Computer simulations yield the statistical errors in the range and speed estimates for differing noise intensities. If the white noise deviation is less than 10% of the maximum Doppler frequency shift and the time-window width is 1.47 times the reference duration, the relative error of the range estimate is less than 5.4% and the relative error of the speed estimate is less than 1.4%. The estimation method has been tested against data collected during an experiment at sea, to which its results conform; the estimated speed is 52 knots and the estimated range is 42 m. The single-point passive ranging method can be used for ranging in sonobuoys and mines, for movement analysis of underwater objects in underwater acoustics experiments, and for sound source level measurements.
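The core idea of matching an observed Doppler track against range/speed basis functions can be sketched with a toy, brute-force version. The tonal frequency, grid, and sound speed below are illustrative assumptions (the real method searches a five-dimensional space with matching pursuit, not this exhaustive grid):

```python
import math

# Toy Doppler-track matching: a single spectral line at f0 from a vessel
# passing at speed v with closest range r traces a known frequency curve.
# Matching an observed track against a grid of (r, v) "basis functions"
# recovers range and speed. All parameters are illustrative.

C = 1500.0  # nominal sound speed in water, m/s

def doppler_track(f0, r, v, times):
    """Instantaneous frequency of a tonal f0; CPA (range r) at t = 0."""
    return [f0 * (1.0 - (v * v * t / C) / math.sqrt(r * r + (v * t) ** 2))
            for t in times]

def grid_search(f0, times, observed, ranges, speeds):
    """Return the (r, v) pair whose track is closest in least squares."""
    best, best_err = None, float("inf")
    for r in ranges:
        for v in speeds:
            err = sum((a - b) ** 2 for a, b in
                      zip(doppler_track(f0, r, v, times), observed))
            if err < best_err:
                best, best_err = (r, v), err
    return best

times = [0.5 * t - 10.0 for t in range(41)]       # -10 s .. +10 s around CPA
truth = doppler_track(300.0, 40.0, 26.0, times)   # r = 40 m, v = 26 m/s
r_hat, v_hat = grid_search(300.0, times, truth,
                           ranges=[30.0, 35.0, 40.0, 45.0, 50.0],
                           speeds=[22.0, 24.0, 26.0, 28.0])
```

The asymmetry of the frequency curve around the closest point of approach is what makes range and speed separately identifiable from a single sensor.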
Bouyssy, V.
1996-12-31
For tubular joints of offshore jacket structures, large discrepancies are observed between predicted and measured fatigue damage. The match between predictions and measurements is improved when one performs stochastic fatigue analyses. For a platform in the North Sea, however, it is found that stochastic fatigue life estimates are still inaccurate. The inaccuracy is due to uncertainties in the loading and local resistance and also in the calculation results - e.g. in the structural response and mean damage rate. By means of extensive numerical studies, it is shown how numerical uncertainties can be avoided in the calculation results. Further, it is explained that random fluctuations intrinsic to nature exist in the loading, local resistance and system properties. These random fluctuations can be accounted for in a probabilistic reliability analysis only. One then computes the probability of a fatigue failure after a given service time instead of predicting a deterministic fatigue life. In the reliability analysis of offshore jacket structures, usually only uncertainties in the loading and local resistance are taken into account. For dynamically excited jacket structures, however, stochastic analyses indicate that the influence of uncertainties in structural properties can be significant, both with respect to extreme-value failure and, in some cases, with respect to fatigue. A new numerical method is developed to estimate the reliability of offshore structures against both extreme and fatigue failures. The method makes it possible to account for the random fluctuations in the loading, local resistance and structural properties. The suitability of the method to provide accurate estimates of failure probabilities in as few structural analyses as possible is investigated in two case studies representative of a number of offshore structures. (orig.) [Translated from the German: Tubular joints of offshore platforms in the North Sea can fail by fatigue. For existing platforms, ...]
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
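The variance-reduction idea behind the Girsanov-based sampling can be illustrated with a static stand-in: estimate a small failure probability by sampling from a density shifted toward the failure region and reweighting with the likelihood ratio. The Gaussian threshold-crossing example below is an assumed toy problem, not the paper's randomly excited dynamical system:

```python
import math
import random

# Importance sampling for a rare event: P(X > 4) for X ~ N(0,1) is
# estimated by drawing from N(4,1) (the proposal centered at the design
# point, as FORM suggests) and weighting by the likelihood ratio. This is
# a static analogue of the control-shifted sampling in the paper.

def phi(x, mu=0.0):
    """Standard-normal density, optionally shifted to mean mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def importance_estimate(threshold, n, seed=0):
    """Estimate P(X > threshold), X ~ N(0,1), sampling from N(threshold,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)            # proposal at the design point
        if x > threshold:                        # failure indicator
            total += phi(x) / phi(x, threshold)  # likelihood-ratio weight
    return total / n

p_hat = importance_estimate(4.0, 20000)
p_exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # exact P(X > 4), ~3.2e-5
```

Direct Monte Carlo would need millions of draws to see even a handful of failures here; the shifted proposal concentrates samples where they matter and the weights undo the bias.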
Nobuyuki Okahashi
2014-05-01
13C metabolic flux analysis (MFA) is a tool of metabolic engineering for investigating in vivo flux distributions. Direct 13C enrichment analysis of intracellular free amino acids (FAAs) is expected to reduce the time required for the labeling experiments of the MFA. Measurable FAAs should, however, vary among MFA experiments, since the pool sizes of intracellular free metabolites depend on cellular metabolic conditions. In this study, the minimal 13C enrichment data of FAAs needed to perform FAAs-based MFA was investigated. An examination of a continuous culture of Escherichia coli using 13C-labeled glucose showed that the time required to reach an isotopically steady state for FAAs is considerably shorter than that for the conventional method using proteinogenic amino acids (PAAs). Considering 95% confidence intervals, it was found that the metabolic flux distribution estimated using FAAs has a reliability similar to that of the PAAs-based method. The comparative analysis identified glutamate, aspartate, alanine and phenylalanine as the common amino acids observed in E. coli under different culture conditions. The results of the MFA also demonstrated that the 13C enrichment data of these four amino acids are required for a reliable analysis of the flux distribution.
Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1983-01-01
A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered to be the outcome of identically distributed, independent stochastic variables, for which a model is suggested. Further assuming the interarrival times of the elementary loading histories to be specified by a Poisson process, and the duration of these to be small compared to the designed life-time, the accumulated plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies.
Ali Abd Elhakam Aliabdo
2012-09-01
This study aims to investigate the relationships between the Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus the compressive strength (fc) of stones and bricks. Four types of rocks (marble, pink limestone, white limestone and basalt) and two types of bricks (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using additional specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and the compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R2 value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
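The "combined method" amounts to a two-predictor linear model fitted by ordinary least squares. The sketch below solves the normal equations directly; the RN/UPV/fc values are synthetic (placed exactly on a plane so the fit is easy to verify), not the study's measurements:

```python
# OLS fit of fc = b0 + b1*RN + b2*UPV via the normal equations, solved by
# a small Gaussian elimination. Synthetic data lying exactly on a plane.

def solve3(A, y):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    m = [row[:] + [v] for row, v in zip(A, y)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_combined(rn, upv, fc):
    X = [[1.0, r, u] for r, u in zip(rn, upv)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
           for i in range(3)]
    Xty = [sum(row[i] * y for row, y in zip(X, fc)) for i in range(3)]
    return solve3(XtX, Xty)

rn = [30.0, 34.0, 38.0, 42.0, 46.0, 50.0]      # rebound numbers (synthetic)
upv = [3.2, 3.5, 3.9, 4.1, 4.4, 4.8]           # pulse velocity, km/s
fc = [-10.0 + 2.0 * r + 5.0 * u for r, u in zip(rn, upv)]  # exact plane, MPa
b0, b1, b2 = fit_combined(rn, upv, fc)
```

With real specimens RN and UPV are correlated but not collinear, which is what lets the combined model out-predict either single-predictor curve.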
Reduced Complexity Angle-Doppler-Range Estimation for MIMO Radar That Employs Compressive Sensing
Yu, Yao; Poor, H Vincent
2009-01-01
The authors recently proposed a MIMO radar system that is implemented by a small wireless network. By applying compressive sensing (CS) at the receive nodes, MIMO radar super-resolution can be achieved with far fewer observations than conventional approaches. This previous work considered the estimation of direction of arrival and Doppler. Since the targets are sparse in the angle-velocity space, target information can be extracted by solving an l1 minimization problem. In this paper, range information is exploited by introducing step frequency into MIMO radar with CS. The proposed approach is able to achieve high range resolution and also to improve the unambiguous velocity. However, joint angle-Doppler-range estimation requires discretization of the angle-Doppler-range space, which causes a sharp rise in the computational burden of the l1 minimization problem. To maintain an acceptable complexity, a technique is proposed to successively estimate angle, Doppler and range in a decoupled fashion. The proposed ...
Ahmed, Sajid
2017-05-12
The estimation of angular-location and range of a target is a joint optimization problem. In this work, to estimate these parameters, by meticulously evaluating the phase of the received samples, low complexity sequential and joint estimation algorithms are proposed. We use a single-input and multiple-output (SIMO) system and transmit frequency-modulated continuous-wave signal. In the proposed algorithm, it is shown that by ignoring very small value terms in the phase of the received samples, fast-Fourier-transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. Sequential estimation algorithm uses FFT and requires only one received snapshot to estimate the angular-location. Joint estimation algorithm uses two-dimensional FFT to estimate the angular-location and range of the target. Simulation results show that joint estimation algorithm yields better mean-squared-error (MSE) for the estimation of angular-location and much lower run-time compared to conventional MUltiple SIgnal Classification (MUSIC) algorithm.
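The range half of the FFT-based idea can be sketched in isolation: for a frequency-modulated continuous-wave sweep, the beat frequency f_b = S·2R/c appears as a transform peak. The parameters below are illustrative, a plain O(N²) DFT stands in for the FFT, and the angle step (the second transform across array elements) is omitted:

```python
import cmath
import math

# FMCW range from a DFT peak: beat frequency f_b = S * 2R/c, so the bin
# index of the strongest spectral line maps linearly to range. Toy
# parameters; a direct DFT stands in for the FFT used in practice.

C = 3e8        # speed of light, m/s
S = 30e12      # chirp slope, Hz/s (e.g., 150 MHz swept in 5 us)
FS = 10e6      # ADC sample rate, Hz
N = 256        # samples per sweep

def beat_signal(r):
    """Ideal complex beat signal for a point target at range r metres."""
    fb = S * 2.0 * r / C
    return [cmath.exp(2j * math.pi * fb * n / FS) for n in range(N)]

def peak_bin(x):
    """Index of the strongest DFT bin."""
    def mag(k):
        return abs(sum(v * cmath.exp(-2j * math.pi * k * n / N)
                       for n, v in enumerate(x)))
    return max(range(N), key=mag)

def estimate_range(x):
    fb_hat = peak_bin(x) * FS / N        # bin index -> beat frequency
    return fb_hat * C / (2.0 * S)        # beat frequency -> range

r_hat = estimate_range(beat_signal(20.0))   # true range: 20 m
```

The quantization of the estimate to one bin (about 0.2 m here) is exactly the kind of error the paper's phase-based refinements are designed to beat.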
Graham, James M.
2006-01-01
Coefficient alpha, the most commonly used estimate of internal consistency, is often considered a lower bound estimate of reliability, though the extent of its underestimation is not typically known. Many researchers are unaware that coefficient alpha is based on the essentially tau-equivalent measurement model. It is the violation of the…
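Coefficient alpha itself is simple to compute from its definition, alpha = k/(k−1)·(1 − Σ item variances / variance of total scores). The respondents-by-items matrix below is hypothetical:

```python
# Cronbach's coefficient alpha from scratch on a small hypothetical
# respondents-by-items matrix of Likert-style responses.

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # columns of the data matrix
    totals = [sum(r) for r in rows]        # each respondent's total score
    item_var = sum(variance(c) for c in items)
    return k / (k - 1) * (1.0 - item_var / variance(totals))

data = [                                   # hypothetical responses
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 2, 1],
    [5, 5, 4, 5],
]
alpha = cronbach_alpha(data)
```

Note that nothing in this computation checks tau-equivalence: the formula happily returns a value whether or not the items share a common true-score scale, which is the article's point about silent underestimation.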
Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System
Ki-Yeong Park
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run-time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For the experiments, a vision-based forward collision warning system was implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. The experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
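The flat-road pinhole geometry underlying such systems shows why the virtual horizon matters: a vehicle whose bottom edge projects to image row y lies at range R = f·H/(y − y_h), with f the focal length in pixels, H the camera height and y_h the horizon row. The numbers below are illustrative assumptions:

```python
# Monocular flat-road ranging: R = f * H / (y - y_horizon). Illustrative
# camera parameters; shows the sensitivity of far ranges to horizon error.

def range_from_row(y_bottom, y_horizon, focal_px, cam_height_m):
    if y_bottom <= y_horizon:
        raise ValueError("vehicle bottom must lie below the horizon row")
    return focal_px * cam_height_m / (y_bottom - y_horizon)

F, H = 800.0, 1.2                              # focal length (px), height (m)
r_near = range_from_row(450.0, 300.0, F, H)    # low in the image -> close
r_far = range_from_row(320.0, 300.0, F, H)     # near the horizon -> far
# A 10-pixel horizon error barely moves r_near but doubles r_far:
r_far_biased = range_from_row(320.0, 310.0, F, H)
```

Because the denominator shrinks near the horizon, a small pitch-induced horizon shift produces a large range error for distant vehicles, which is exactly why the paper estimates the horizon at run-time instead of assuming a fixed calibration.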
Range/velocity limitations for time-domain blood velocity estimation
Jensen, Jørgen Arendt
1993-01-01
The traditional range/velocity limitation for blood velocity estimation systems using ultrasound is elucidated. It is stated that the equation is a property of the estimator used, not of the actual physical measurement situation, as higher velocities can be estimated by the time-domain cross-correlation approach. It is demonstrated that the time-domain technique will, under certain measurement conditions, yield unsatisfactory results when trying to estimate high velocities. Various methods to avoid these artifacts using temporal and spatial clustering techniques are suggested. The improvement...
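A toy version of the time-domain estimator makes the contrast concrete: the velocity follows from the lag that maximizes the cross-correlation of two consecutive RF lines, v = c·t_s/(2·T_prf), rather than from a phase shift bounded by the familiar R_max·v_max = c²/(8·f0) relation. The signal parameters are illustrative and the "RF lines" are synthetic Gaussian pulses:

```python
import math

# Time-domain velocity estimation: find the lag maximizing the
# cross-correlation of two consecutive RF lines, then convert via
# v = c * t_s / (2 * T_prf). Synthetic pulses, illustrative parameters.

C = 1540.0     # speed of sound in tissue, m/s
FS = 40e6      # RF sampling rate, Hz
F_PRF = 5e3    # pulse repetition frequency, Hz
F0 = 5e6       # transmit center frequency, Hz

def rf_line(delay_samples, n=400):
    """Gaussian-windowed tone delayed by a whole number of samples."""
    return [math.exp(-(((i - 200 - delay_samples) / 30.0) ** 2))
            * math.sin(2.0 * math.pi * F0 / FS * (i - delay_samples))
            for i in range(n)]

def best_lag(a, b, max_lag=40):
    """Lag of b relative to a maximizing their cross-correlation."""
    def score(lag):
        return sum(a[i] * b[i + lag] for i in range(max_lag, len(a) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=score)

lag = best_lag(rf_line(0), rf_line(5))          # true inter-line shift: 5
v_hat = C * (lag / FS) * F_PRF / 2.0            # v = c * t_s / (2 * T_prf)
```

The search window (max_lag) replaces the aliasing limit of phase estimators, but envelope sidelobes can still pick the wrong correlation peak at high velocities, which is the failure mode the clustering techniques above address.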
Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema
2015-01-01
Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method, based on eight developmental stages, was developed to determine maturity scores as a function of age, and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an Indian-specific formula, from the developmental stages of the third molar on orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using both Demirjian's formula and the Indian formula. Statistical analysis used the Chi-square test and the ANOVA test, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence it can be applied for age estimation in the present Gujarati population. Also, females were ahead of males in achieving dental maturity; thus completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.
Reliability analysis of road network for estimation of public evacuation time around NPPs
Bang, Sun-Young; Lee, Gab-Bock; Chung, Yang-Geun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)
2007-07-01
The strongest protective measure in radiation emergency preparedness is the evacuation of members of the public when a great deal of radioactivity is released to the environment. After the Three Mile Island (TMI) nuclear power plant meltdown in the United States and the Chernobyl nuclear power plant disaster in the U.S.S.R., many advanced countries, including the United States and Japan, have continued research on the estimation of public evacuation time as one of the emergency countermeasure technologies. In South Korea as well, the 'Framework Act on Civil Defense: Radioactive Disaster Preparedness Plan' was established in 1983, and nuclear power plants set up radiation emergency plans and have regularly carried out radiation emergency preparedness trainings. Nonetheless, there is still a need to improve the technology for estimating public evacuation time by executing precise analysis of traffic flow, in order to prepare practical and efficient ways to protect the public. In this research, the road network for the Wolsong and Kori NPPs was constructed with the CORSIM code, and a reliability analysis of this road network was performed.
Bardot, Leon; McClelland, Elizabeth
2000-10-01
The mode of origin of volcaniclastic deposits can be difficult to determine from field constraints, and the palaeomagnetic technique of emplacement temperature (Te) determination provides a powerful discriminatory test for a primary volcanic origin. This technique requires that the low-blocking-temperature (Tb) component of remanence in the direction of the Earth's field in inherited lithic clasts is of thermal origin and was acquired during transport and cooling in a hot pyroclastic flow; otherwise, the Te determination may be inaccurate. If the low-Tb component is not of thermal origin it may be a viscous remanent magnetization (VRM) or a chemical remanent magnetization (CRM). The acquisition of a VRM depends on the duration of exposure to an applied magnetic field, and thus the laboratory unblocking temperature (Tub) of a VRM of a certain age imposes a minimum Te that can be determined for that deposit. Palaeointensity experiments were carried out to assess the magnetic origin (pTRM, CRM, or a combination of both) of the low-Tb component in a number of samples from pyroclastic deposits from Santorini, Greece. Seven of the 24 samples used in these experiments passed the stringent tests for reliable palaeointensity determination. These values demonstrated, for six of the samples, that the low-Tb component was of thermal origin and therefore that the estimate of Te was valid. In the other 17 samples, valuable information was gained about the characteristics of the magnetic alteration that occurred during the palaeointensity experiments, allowing assessment of the reliability of the Te estimates in these cases. These cases showed that if a CRM is present it has a direction parallel to the applied field, and not parallel to the direction of the parent grain. They also show that, even if a CRM is present, it does not necessarily affect the estimate of Te. Two samples used in these experiments displayed curvature between their two components of magnetization. Data from this
The Importance of the Range Parameter for Estimation and Prediction in Geostatistics
Kaufman, Cari
2011-01-01
Two canonical problems in geostatistics are estimating the parameters in a specified family of stochastic process models and predicting the process at new locations. A number of asymptotic results for these problems over a fixed spatial domain indicate that, for a Gaussian process with Matérn covariance function, one can fix the range parameter controlling the rate of decay of the process and obtain results that are asymptotically equivalent to the case that the range parameter is known. We discuss why these results do not always provide the appropriate intuition for finite samples. Moreover, we prove that a number of these asymptotic results may be extended to the case that the variance and range parameters are jointly estimated via maximum likelihood or maximum tapered likelihood. Our simulation results show that performance on a variety of metrics is improved and asymptotic approximations are applicable for smaller sample sizes when the range parameter is estimated. These effects are particularly apparen...
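To make the role of the range parameter concrete, the Matérn covariance with smoothness ν = 3/2 has the closed form C(d) = σ²·(1 + √3·d/ρ)·exp(−√3·d/ρ), where ρ is the range. The values below are illustrative:

```python
import math

# Matern covariance, smoothness nu = 3/2, with explicit range rho:
# C(d) = sigma2 * (1 + sqrt(3)*d/rho) * exp(-sqrt(3)*d/rho).

def matern32(d, sigma2=1.0, rho=1.0):
    a = math.sqrt(3.0) * d / rho
    return sigma2 * (1.0 + a) * math.exp(-a)

# A larger range parameter slows the decay of correlation with distance,
# which is the behaviour whose estimation the paper studies.
c_short = matern32(1.0, rho=0.5)   # short range: correlation ~0.14 at d = 1
c_long = matern32(1.0, rho=2.0)    # long range:  correlation ~0.79 at d = 1
```

Fixing ρ at the wrong value distorts all such correlations at once, which is why jointly estimating the variance and range, as the paper advocates, helps at finite sample sizes.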
Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)
2015-10-15
Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.
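The agreement statistic used for the repeat reads can be sketched with a one-way random-effects intraclass correlation computed from scratch. The sound-speed pairs below are synthetic, not the study's data:

```python
# One-way random-effects ICC, ICC(1,1), for two reads per subject,
# computed from between- and within-subject mean squares. Synthetic data.

def icc_oneway(pairs):
    """ICC(1,1) for n subjects with k = 2 measurements each."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    ms_between = k * sum(((a + b) / k - grand) ** 2
                         for a, b in pairs) / (n - 1)
    ms_within = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
                    for a, b in pairs) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

reads = [(1500, 1502), (1489, 1490), (1521, 1518),   # synthetic repeat
         (1478, 1480), (1510, 1509), (1495, 1499)]   # reads, m/s
icc = icc_oneway(reads)
```

The statistic is close to 1 whenever repeat reads disagree far less than subjects differ from each other, which is the pattern the authors report for sound speed.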
Flávio Chaimowicz
The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of the occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias, as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for the accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy, or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.
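An adjustment for screening accuracy of the kind the review applies can be sketched with the standard Rogan-Gladen estimator, corrected prevalence p = (apparent + specificity − 1)/(sensitivity + specificity − 1). The sensitivity and specificity below are invented to mimic the scale of the correction, not taken from the review:

```python
# Rogan-Gladen correction of an apparent prevalence for an imperfect
# screening test. Sensitivity/specificity values are illustrative.

def rogan_gladen(apparent, sensitivity, specificity):
    p = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(1.0, max(0.0, p))      # clamp to a valid proportion

adjusted = rogan_gladen(0.08, 0.45, 0.99)   # 8% apparent -> ~16% corrected
```

A screen that misses many true cases (low sensitivity) deflates the apparent prevalence, so the corrected figure can be roughly double the raw one, which is the direction and magnitude of the review's adjustment.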
Novel application of species richness estimators to predict the host range of parasites.
Watson, David M; Milner, Kirsty V; Leigh, Andrea
2017-01-01
Host range is a critical life history trait of parasites, influencing prevalence, virulence and ultimately determining their distributional extent. Current approaches to measure host range are sensitive to sampling effort, the number of known hosts increasing with more records. Here, we develop a novel application of results-based stopping rules to determine how many hosts should be sampled to yield stable estimates of the number of primary hosts within regions, then use species richness estimation to predict host ranges of parasites across their distributional ranges. We selected three mistletoe species (hemiparasitic plants in the Loranthaceae) to evaluate our approach: a strict host specialist (Amyema lucasii, dependent on a single host species), an intermediate species (Amyema quandang, dependent on hosts in one genus) and a generalist (Lysiana exocarpi, dependent on many genera across multiple families), comparing results from geographically-stratified surveys against known host lists derived from herbarium specimens. The results-based stopping rule (stop sampling bioregion once observed host richness exceeds 80% of the host richness predicted using the Abundance-based Coverage Estimator) worked well for most bioregions studied, being satisfied after three to six sampling plots (each representing 25 host trees) but was unreliable in those bioregions with high host richness or high proportions of rare hosts. Although generating stable predictions of host range with minimal variation among six estimators trialled, distribution-wide estimates fell well short of the number of hosts known from herbarium records. This mismatch, coupled with the discovery of nine previously unrecorded mistletoe-host combinations, further demonstrates the limited ecological relevance of simple host-parasite lists. By collecting estimates of host range of constrained completeness, our approach maximises sampling efficiency while generating comparable estimates of the number of primary
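A minimal version of the Abundance-based Coverage Estimator named above, together with the 80% results-based stopping rule, can be sketched on hypothetical per-host-species abundance counts:

```python
# Abundance-based Coverage Estimator (ACE) of species (here: host)
# richness, with the 80% stopping rule from the text. Counts are
# hypothetical individuals recorded per host species.

def ace(counts, rare_cutoff=10):
    rare = [c for c in counts if 1 <= c <= rare_cutoff]
    s_abund = sum(1 for c in counts if c > rare_cutoff)
    f1 = sum(1 for c in rare if c == 1)           # singleton species
    n_rare = sum(rare)
    coverage = 1.0 - f1 / n_rare                  # sample-coverage estimate
    gamma2 = max(len(rare) / coverage * sum(c * (c - 1) for c in rare)
                 / (n_rare * (n_rare - 1)) - 1.0, 0.0)
    return s_abund + len(rare) / coverage + f1 / coverage * gamma2

counts = [25, 14, 9, 7, 5, 4, 3, 2, 1, 1, 1, 1]   # 12 observed host species
s_est = ace(counts)                                # ~16 hosts predicted
# Stop sampling once observed richness exceeds 80% of the ACE prediction:
stop = len(counts) >= 0.8 * s_est
```

With several singletons the estimator infers unseen hosts and the rule says to keep sampling, matching the paper's observation that bioregions with many rare hosts resist early stopping.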
Automotive FMCW Radar-enhanced Range Estimation via a Local Resampling Fourier Transform
Cailing Wang
2016-02-01
In complex traffic scenarios, more accurate measurement and discrimination are required from an automotive frequency-modulated continuous-wave (FMCW) radar for intelligent robots, driverless cars and driver-assistance systems. A more accurate range estimation method based on a local resampling Fourier transform (LRFT) for FMCW radar is developed in this paper. Radar signal correlation in the phase space yields a higher signal-to-noise ratio (SNR) for more accurate ranging, and the LRFT, which acts on a local neighbourhood as a refinement step, can achieve a more accurate target range. The rough range is estimated through conventional pulse compression (PC); then, around the initial rough estimation, a refined estimation through the LRFT in the local region achieves greater precision. Furthermore, the LRFT algorithm is tested in numerous simulations and physical-system experiments, which show that it achieves more precise range estimation than traditional FFT-based algorithms, especially for lower-bandwidth signals.
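The coarse-then-refine idea can be illustrated with a generic zoom-DFT stand-in: take the strongest DFT bin as the rough estimate, then re-evaluate the transform on a fine grid within ±1 bin of it, the "local resampling". The parameters are toy values, and this sketch is not the paper's LRFT itself:

```python
import cmath
import math

# Coarse DFT peak, then local fine-grid refinement of the frequency,
# which maps linearly to range in an FMCW radar. Toy stand-in for LRFT.

N = 128

def dtft_mag(x, f):
    """|DTFT| of x at normalized frequency f (cycles/sample)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * f * n)
                   for n, v in enumerate(x)))

def coarse_peak(x):
    """Rough estimate: strongest DFT bin, as a normalized frequency."""
    return max(range(N), key=lambda k: dtft_mag(x, k / N)) / N

def refine_peak(x, f_coarse, oversample=50):
    """Fine estimate: strongest response on a grid spanning +/-1 bin."""
    grid = [f_coarse + (2.0 * i / oversample - 1.0) / N
            for i in range(oversample + 1)]
    return max(grid, key=lambda f: dtft_mag(x, f))

f_true = 0.17734                 # between bins (0.17734 * 128 = 22.7)
x = [cmath.exp(2j * math.pi * f_true * n) for n in range(N)]
f_coarse = coarse_peak(x)
f_fine = refine_peak(x, f_coarse)
```

The refinement recovers an off-bin frequency to a small fraction of a bin at the cost of only a handful of extra transform evaluations, the same economy the LRFT exploits over a full zero-padded FFT.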
张玉卓
1998-01-01
The quantitative evaluation of the errors involved in a particular numerical modelling is of prime importance for the effectiveness and reliability of the method. Errors in distinct element modelling are generated mainly from three sources: simplification of the physical model, determination of parameters, and boundary conditions. A measure of errors which represents the degree to which the numerical solution is 'close to the true value' is proposed in this paper through fuzzy probability. The main objective of this paper is to estimate the reliability of the distinct element method in rock engineering practice by varying the parameters and boundary conditions. The accumulation laws of standard errors induced by improper determination of parameters and boundary conditions are discussed in detail. Furthermore, numerical experiments are given to illustrate the estimation of fuzzy reliability. An example shows that the fuzzy reliability falls between 75% and 98% when the relative standard error of the input data is under 10%.
Carlos Romero Morales
2017-01-01
Full Text Available Background New reliable devices for range of motion (ROM) measures in older adults are necessary to improve knowledge about the functional capability in this population. Dorsiflexion ROM limitation is associated with ankle injuries, foot pain, lower limb disorders, loss of balance, gait control disorders and fall risk in older adults. The aim of the present study was to assess the validity and reliability of the Leg Motion device for measuring ankle dorsiflexion ROM in older adults. Methods A descriptive repeated-measures study was designed to test the reliability of Leg Motion in thirty-three healthy elderly patients older than 65 years. The subjects had to meet the following inclusion and exclusion criteria in their medical records: older than 65 years; no lower extremity injury for at least one year prior to evaluation (meniscopathy, or fractures) and no chronic injuries (e.g., osteoarthritis); no previous hip, knee or ankle surgery; no neuropathic alterations and no cognitive conditions (e.g., Alzheimer's disease or dementia). Participants were recruited through the person responsible for the physiotherapy area of a nursing center. The subjects were evaluated in two different sessions at the same time of day, with a break of two weeks between sessions. To test the validity of the Leg Motion system, the participants were measured in a weight-bearing lunge position using a classic goniometer with 1° increments, a smartphone with a standard inclinometer app (iPhone 5S®) with 1° increments, and a measuring tape with 0.1 cm resolution. All testing was performed while the patients were barefoot. The researcher had ten years of experience as a physiotherapist using goniometer, tape measure and inclinometer devices. Results Mean values and standard deviations were as follows: Leg Motion (right 5.15 ± 3.08; left 5.19 ± 2.98), tape measure (right 5.12 ± 3.08; left 5.12 ± 2.80), goniometer (right 45.87° ± 4.98; left 44.50° ± 5
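Agreement between paired devices in studies like this one is often reported as Lin's concordance correlation coefficient (the ρc quoted elsewhere in this collection). A minimal sketch; the paired measurements below are hypothetical, not the study's data.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient rho_c: penalizes both
    poor correlation and systematic offset between two raters/devices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# hypothetical paired dorsiflexion readings (device vs. tape measure, cm)
a = np.array([5.0, 6.1, 4.2, 5.8, 7.0, 4.9])
b = np.array([5.2, 6.0, 4.5, 5.6, 7.2, 5.0])
rho_c = lins_ccc(a, b)
```

Unlike Pearson's r, ρc only reaches 1 when the points lie on the identity line, which is why it is preferred for validity studies of measurement devices.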
Estimating the annual range of global illuminance on a vertical south facing building facade
Tijo Joseph, Animesh Dutta
2015-01-01
Full Text Available Towards assessing the daylighting potential of a campus building, and in consideration of the recommended strategy of maximizing window exposure on south-facing walls in northern latitudes, the range of global illuminance on a south-facing vertical surface at the building location was estimated over a year, under both clear and cloudy sky conditions, using a calculation methodology proposed by the Illuminating Engineering Society of North America. The illuminance is observed to vary over the day, with the daily variation estimated to range as high as 35 klx over the year and under different sky conditions. Overall, it is estimated that the dynamic variation of global illuminance on a south-facing façade, specific to the study location, ranges from 14 klx to 100 klx.
Tool for Studying the Effects of Range Restriction in Correlation Coefficient Estimation
1990-07-01
AFHRL-TP-90-6 (Air Force). Tool for Studying the Effects of Range Restriction in Correlation Coefficient Estimation. Douglas E. Jackson, Eastern New... PE - 62703F; PR - 7719; TA - 18; WU - 46. Authors: Douglas E. Jackson, Malcolm J... ...that one must try to estimate the correlation coefficient between two random variables X and Y in some population P using data taken from a
Bioreactance is a reliable method for estimating cardiac output at rest and during exercise.
Jones, T W; Houghton, D; Cassidy, S; MacGowan, G A; Trenell, M I; Jakovljevic, D G
2015-09-01
Bioreactance is a novel noninvasive method for cardiac output measurement that involves analysis of blood flow-dependent changes in phase shifts of electrical currents applied across the thorax. The present study evaluated the test-retest reliability of bioreactance for assessing haemodynamic variables at rest and during exercise. 22 healthy subjects (26 (4) yrs) performed an incremental cycle ergometer exercise protocol relative to their individual power output at maximal O2 consumption (Wmax) on two separate occasions (trials 1 and 2). Participants cycled for five 3 min stages at 20, 40, 60, 80 and 90% Wmax. Haemodynamic and cardiorespiratory variables were assessed at rest and continuously during the exercise protocol. Cardiac output was not significantly different between trials at rest (P=0.948), or between trials at any stage of the exercise protocol (all P>0.30). There was a strong relationship between cardiac output estimates between the trials (ICC=0.95, Prest (P=0.989) or during exercise (all P>0.15), and strong relationships between trials were found (ICC=0.83, Prest and during different stages of graded exercise testing including maximal exertion. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
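Test-retest reliability of the kind reported here is typically quantified with an intraclass correlation coefficient. A minimal sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-measures form; the cardiac output values below are invented for illustration.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1) from a subjects x trials matrix, via the classical
    two-way ANOVA mean squares."""
    data = np.asarray(data, float)
    n, k = data.shape                              # subjects x trials
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                        # between-subjects
    msc = ss_cols / (k - 1)                        # between-trials
    mse = ss_err / ((n - 1) * (k - 1))             # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical cardiac output readings (L/min), two trials per subject
trials = [[5.0, 5.1], [6.2, 6.0], [4.1, 4.3], [7.5, 7.4], [5.9, 6.1]]
icc = icc_2_1(trials)
```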
On the reliable estimation of heat transfer coefficients for nanofluids in a microchannel
Irwansyah, Ridho; Cierpka, Christian; Kähler, Christian J.
2016-09-01
Nanofluids (a base fluid plus nanoparticles) can enhance the heat transfer coefficient h in comparison to the base fluid alone. This opens the door to the design of efficient cooling systems, for instance for microelectronic components. Since theoretical Nusselt number correlations for microchannels are not available, the direct method using an energy balance has to be applied to determine h. However, for low nanoparticle concentrations the absolute numbers are small and hard to measure. Therefore, the study examines the laminar convective heat transfer of Al2O3-water nanofluids in a square microchannel with a cross section of 0.5 × 0.5 mm2 and a length of 30 mm under constant wall temperature. The Al2O3 nanoparticles have a diameter distribution of 30-60 nm. A sensitivity analysis with error propagation was carried out to reduce the error and obtain a reliable heat transfer coefficient estimate. Maximum enhancements of 6.9% and 21% were realized for 0.6% and 1% Al2O3-water nanofluids, respectively.
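The direct (energy-balance) determination of h and the role of error propagation can be sketched as follows. All nominal values and the Monte Carlo noise level are assumptions for illustration, not the paper's experimental numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def h_direct(mdot, cp, t_in, t_out, t_wall, area):
    """Direct heat transfer coefficient from an energy balance, using
    a log-mean temperature difference for a constant wall temperature."""
    q = mdot * cp * (t_out - t_in)              # absorbed heat, W
    dt1, dt2 = t_wall - t_in, t_wall - t_out
    dt_lm = (dt1 - dt2) / np.log(dt1 / dt2)     # log-mean delta-T
    return q / (area * dt_lm)

# nominal operating point (illustrative)
nom = dict(mdot=1e-4, cp=4180.0, t_in=20.0, t_out=30.0, t_wall=50.0, area=6e-5)
h0 = h_direct(**nom)

# Monte Carlo error propagation: 1% relative noise on every input
samples = []
for _ in range(2000):
    pert = {k: v * (1 + 0.01 * rng.standard_normal()) for k, v in nom.items()}
    samples.append(h_direct(**pert))
rel_err = np.std(samples) / h0
```

Because h depends on the small temperature rise (t_out - t_in), modest sensor errors are amplified, which is exactly why the paper's sensitivity analysis matters at low particle concentrations.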
Toward reliable automated estimates of earthquake source properties from body wave spectra
Ross, Zachary E.; Ben-Zion, Yehuda
2016-06-01
We develop a two-stage methodology for automated estimation of earthquake source properties from body wave spectra. An automated picking algorithm is used to window and calculate spectra for both P and S phases. Empirical Green's functions are stacked to minimize nongeneric source effects such as directivity and are used to deconvolve the spectra of target earthquakes for analysis. In the first stage, window lengths and frequency ranges are defined automatically from the event magnitude and used to get preliminary estimates of the P and S corner frequencies of the target event. In the second stage, the preliminary corner frequencies are used to update various parameters to increase the amount of data and overall quality of the deconvolved spectral ratios (target event over stacked Empirical Green's function). The obtained spectral ratios are used to estimate the corner frequencies, strain/stress drops, radiated seismic energy, apparent stress, and the extent of directivity for both P and S waves. The technique is applied to data generated by five small to moderate earthquakes in southern California at hundreds of stations. Four of the five earthquakes are found to have significant directivity. The developed automated procedure is suitable for systematic processing of large seismic waveform data sets with no user involvement.
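Corner-frequency estimation from a spectrum, the core quantity in this pipeline, can be sketched with a grid-search fit of a Brune-type omega-square model. This is a generic illustration under stated assumptions (noiseless synthetic spectrum, simple log-amplitude least squares), not the paper's two-stage procedure.

```python
import numpy as np

def fit_corner_frequency(freqs, amps, fc_grid):
    """Grid-search fit of a Brune-type spectrum A(f) = A0 / (1 + (f/fc)^2)
    to an observed amplitude spectrum, least squares in log amplitude."""
    best = (np.inf, None)
    for fc in fc_grid:
        shape = 1.0 / (1.0 + (freqs / fc) ** 2)
        a0 = np.exp(np.mean(np.log(amps) - np.log(shape)))  # closed-form A0
        misfit = np.sum((np.log(amps) - np.log(a0 * shape)) ** 2)
        best = min(best, (misfit, fc))
    return best[1]

# synthetic spectrum with a known corner frequency
freqs = np.linspace(0.5, 50.0, 200)
true_fc = 8.0
amps = 3.0 / (1.0 + (freqs / true_fc) ** 2)
fc_hat = fit_corner_frequency(freqs, amps, np.linspace(1.0, 20.0, 191))
```

The corner frequency then feeds stress-drop and source-dimension estimates, which is why the paper iterates window lengths and frequency ranges around a preliminary fc.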
From eggs to bites: do ovitrap data provide reliable estimates of Aedes albopictus biting females?
Mattia Manica
2017-03-01
probability obtained by introducing these estimates in risk models were similar to those based on females/HLC (R0 > 1 in 86% and 40% of sampling dates for Chikungunya and Zika, respectively; R0 1 for Chikungunya is also to be expected when few/no eggs/day are collected by ovitraps. Discussion This work provides the first evidence of the possibility to predict mean number of adult biting Ae. albopictus females based on mean number of eggs and to compute the threshold of eggs/ovitrap associated to epidemiological risk of arbovirus transmission in the study area. Overall, however, the large confidence intervals in the model predictions represent a caveat regarding the reliability of monitoring schemes based exclusively on ovitrap collections to estimate numbers of biting females and plan control interventions.
Automotive FMCW Radar-enhanced Range Estimation via a Local Resampling Fourier Transform
2016-01-01
Luminescence imaging of water during carbon-ion irradiation for range estimation.
Yamamoto, Seiichi; Komori, Masataka; Akagi, Takashi; Yamashita, Tomohiro; Koyama, Shuji; Morishita, Yuki; Sekihara, Eri; Toshito, Toshiyuki
2016-05-01
The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure water. The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the carbon-ion ranges measured from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom's image showed seven times more luminescence and a 13% shorter range than the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator showed ∼15 000 times more light than water. Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.
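Extracting a range from a depth-light profile like the ones described here usually means locating the distal falloff of the Bragg peak. A minimal sketch under stated assumptions (toy analytic profile, range defined as the depth where the signal drops to 50% of the peak on the distal side; the falloff level used in practice varies by convention).

```python
import numpy as np

def estimate_range(depth, light, level=0.5):
    """Depth at which the signal falls to `level` of the Bragg-peak
    maximum on the distal side, with linear interpolation."""
    i_peak = int(np.argmax(light))
    thresh = level * light[i_peak]
    for i in range(i_peak, len(light) - 1):
        if light[i] >= thresh > light[i + 1]:
            frac = (light[i] - thresh) / (light[i] - light[i + 1])
            return depth[i] + frac * (depth[i + 1] - depth[i])
    return depth[-1]

# toy profile: slow rise to a Bragg peak at 150 mm, sharp distal falloff
depth = np.linspace(0.0, 200.0, 401)
light = np.where(depth <= 150, 0.2 + 0.8 * (depth / 150) ** 4,
                 np.clip(1 - (depth - 150) / 5.0, 0.0, None))
r = estimate_range(depth, light)
```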
Shen, F.; Verhoef, W.; Zhou, Y.; Salama, M.S.; Liu, X.
2010-01-01
The Changjiang (Yangtze) estuarine and coastal waters are characterized by suspended sediments over a wide range of concentrations from 20 to 2,500 mg l-1. Suspended sediment plays important roles in the estuarine and coastal system and environment. Previous algorithms for satellite estimates of sus
Estimating Rigid Transformation Between Two Range Maps Using Expectation Maximization Algorithm
Zeng, Shuqing
2012-01-01
We address the problem of estimating a rigid transformation between two point sets, which is a key module in target tracking systems using Light Detection and Ranging (LiDAR). A fast implementation of the Expectation-Maximization (EM) algorithm is presented whose complexity is O(N), with N the number of scan points.
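Once correspondences are fixed (softly in the E-step of such an EM scheme), the optimal rigid transform has a closed form, the Kabsch/SVD solution, which would serve as the M-step. A self-contained 2D sketch; the point data are synthetic.

```python
import numpy as np

def kabsch(p, q):
    """Least-squares rigid transform (R, t) with R @ p_i + t ~= q_i,
    via SVD of the cross-covariance of the centred point sets."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                 # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    s = np.eye(p.shape[1])
    s[-1, -1] = d
    r = vt.T @ s @ u.T
    t = qc - r @ pc
    return r, t

# synthetic scan: rotate by 0.3 rad and translate
theta = 0.3
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
p = np.random.default_rng(1).random((20, 2))
q = p @ r_true.T + t_true
r_est, t_est = kabsch(p, q)
```

The EM wrapper in the paper would re-weight correspondences between such M-steps; the per-iteration cost stays linear in the number of scan points.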
An extended set-value observer for position estimation using single range measurements
Marcal, Jose; Jouffroy, Jerome; Fossen, Thor I.
of transponders. The knowledge of the bearing of the vehicle and the range measurements from a single location can provide a solution which is sensitive to the trajectory that the vehicle is following, since there is no complete constraint on the position estimate with a single beacon. In this paper...
Generic methodology for driving range estimation of electric vehicle with on-road charging
Shekhar, A.; Prasanth, V.; Bauer, P.; Bolech, M.
2015-01-01
An analytical estimation of the driving range of electric vehicles (EVs) with a contactless on-road charging system is presented in this paper. Inductive power transfer (IPT) systems with different configurations (static, dynamic), power levels and road coverage have different (and non-linear) impact on t
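The basic energy-balance reasoning behind such a range estimate can be sketched in a few lines. This is a back-of-envelope model under assumptions of my own (constant speed, constant consumption, average power pickup proportional to charged road coverage), not the paper's analytical model.

```python
def driving_range_km(batt_kwh, cons_kwh_per_km, charge_kw, coverage, speed_kmh):
    """Driving range with on-road inductive charging: the net energy
    drain per km is consumption minus the average pickup from charged
    road sections (coverage = charged fraction of the road)."""
    pickup_per_km = charge_kw * coverage / speed_kmh   # kWh gained per km
    net = cons_kwh_per_km - pickup_per_km
    if net <= 0:
        return float("inf")                            # range is unlimited
    return batt_kwh / net

base = driving_range_km(30.0, 0.15, 0.0, 0.0, 100.0)      # no charging
boosted = driving_range_km(30.0, 0.15, 50.0, 0.2, 100.0)  # 20% coverage
```

With 20% coverage at 50 kW the net drain drops from 0.15 to 0.05 kWh/km, tripling the range, a simple illustration of the non-linear impact of coverage the abstract mentions.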
A three-step vehicle detection framework for range estimation using a single camera
Kanjee, R
2015-12-01
Full Text Available Symposium Series on Computational Intelligence 2015, Cape Town, 8-10 December 2015. A Three-Step Vehicle Detection Framework for Range Estimation Using a Single Camera. Ritesh Kanjee, Optronic Sensor Systems, Defence, Peace, Safety and Security Council...
Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation
Bourennane Salah
2007-01-01
Full Text Available A new method for simultaneous range and bearing estimation for buried objects in the presence of an unknown Gaussian noise is proposed. This method uses the MUSIC algorithm with noise subspace estimated by using the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at the removal of the additive unknown Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed including the acoustic scattering model at each sensor. Range and bearing of the objects at each sensor are expressed as a function of those at the first sensor. This leads to the improvement of object localization anywhere, in the near-field or in the far-field zone of the sensor array. Finally, the performances of the proposed method are validated on data recorded during experiments in a water tank.
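The MUSIC step at the heart of this method can be illustrated for bearing estimation on a uniform linear array. Note one deliberate simplification: the paper builds the noise subspace from a slice of the fourth-order cumulant matrix to suppress Gaussian noise, whereas this sketch uses the ordinary sample covariance for brevity; the scenario values are invented.

```python
import numpy as np

def music_spectrum(x, n_src, angles_deg, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a ULA with element
    spacing d (in wavelengths), from snapshot matrix x (m x n)."""
    m = x.shape[0]
    r = x @ x.conj().T / x.shape[1]             # sample covariance
    _, vecs = np.linalg.eigh(r)                 # ascending eigenvalues
    en = vecs[:, : m - n_src]                   # noise subspace
    k = np.arange(m)
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(th))   # steering vector
        p.append(1.0 / np.real(a.conj() @ en @ en.conj().T @ a))
    return np.array(p)

rng = np.random.default_rng(2)
m, n, true_deg = 8, 400, 20.0
k = np.arange(m)
a = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_deg)))
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = np.outer(a, s) + 0.1 * (rng.standard_normal((m, n))
                            + 1j * rng.standard_normal((m, n)))
grid = np.arange(-90.0, 90.0, 0.5)
est = grid[np.argmax(music_spectrum(x, 1, grid))]
```

Peaks of the pseudo-spectrum mark the bearings; the paper extends the steering vector with a scattering model so that range enters the search as well.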
Anderson, Katherine H.; Bartlein, Patrick J.; Strickland, Laura E.; Pelltier, Richard T.; Thompson, Robert S.; Shafer, Sarah L.
2012-01-01
the MCRun technique provides reliable and unbiased estimates of the ranges of possible climatic conditions that can reasonably be associated with these assemblages. The application of MCRwt and MAT approaches can further constrain these estimates and may provide a systematic way to assess uncertainty. The data sets required for MCR analyses in North America are provided in a parallel publication.
MacDonell, Christopher William; Ivanova, Tanya Dimitrova; Garland, S Jayne
2007-05-15
The reliability of the afterhyperpolarization (AHP) time course, as estimated by the interval death rate (IDR) analysis was evaluated both within and between investigators. The IDR analysis uses the firing history of a single motor unit train at low tonic firing rates to calculate an estimate of the AHP time course [Matthews PB. Relationship of firing intervals of human motor units to the trajectory of post-spike after-hyperpolarization and synaptic noise. J Physiol 1996;492:597-628]. Single motor unit trains were collected from the tibialis anterior (TA) to determine intra-rater reliability (within investigator). Data from the first dorsal interosseus (FDI), collected in a previous investigation [Gossen ER, Ivanova TD, Garland SJ. The time course of the motoneurone afterhyperpolarization is related to motor unit twitch speed in human skeletal muscle. J Physiol 2003;552:657-64], were used to examine the inter-rater reliability (between investigators). The lead author was blinded to the original time constants and file identities for the re-analysis. The intra-rater reliability of the AHP time constant in the TA data was high (r(2)=0.88; pFDI data was also strong (r(2)=0.92; pFDI. It is concluded that the interval death rate analysis is a reliable tool for estimating the AHP time course with experienced investigators.
Robustness of Estimators of Long-Range Dependence and Self-Similarity under non-Gaussianity
Franzke, Christian L E; Watkins, Nicholas W; Gramacy, Robert B; Hughes, Cecilia
2011-01-01
Long-range dependence and non-Gaussianity are ubiquitous in many natural systems like ecosystems, biological systems and climate. However, it is not always appreciated that both phenomena usually occur together in natural systems and that the superposition of both phenomena constitute the self-similarity of a system. These features, which are common in complex systems, impact the attribution of trends and the occurrence and clustering of extremes. The risk assessment of systems with these properties will lead to different outcomes (e.g. return periods) than the more common assumption of independence of extremes. Two paradigmatic models are discussed which can simultaneously account for long-range dependence and non-Gaussianity: Autoregressive Fractional Integrated Moving Average (ARFIMA) and Linear Fractional Stable Motion (LFSM). Statistical properties of estimators for long-range dependence and self-similarity are critically assessed. It is found that the most popular estimators are not robust. In particula...
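One of the estimator families assessed in such studies is detrended fluctuation analysis (DFA), whose log-log slope estimates the Hurst exponent. A minimal sketch under stated assumptions (linear detrending, non-overlapping windows, white-noise test signal for which H should be near 0.5).

```python
import numpy as np

def dfa_hurst(x, scales):
    """Detrended fluctuation analysis: slope of log F(s) vs log s
    estimates the Hurst exponent of the series x."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    fs = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)        # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fs.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fs), 1)
    return slope

rng = np.random.default_rng(3)
white = rng.standard_normal(4000)
h = dfa_hurst(white, [16, 32, 64, 128, 256])
```

The paper's point is precisely that estimators like this one can be biased when heavy-tailed (non-Gaussian) increments are mistaken for, or mixed with, long-range dependence.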
Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo
2012-01-01
Area of Occupancy (AOO) is a measure of species geographical ranges commonly used for species red listing. In most cases, AOO is estimated using reported localities of species distributions at coarse grain resolution, providing measures subjected to uncertainties of data quality and spatial resolution. To illustrate the ability of fine-resolution species distribution models for obtaining new measures of species ranges and their impact in conservation planning, we estimate the potential AOO of an endangered species in alpine environments. We use field occurrences of relict Empetrum nigrum... Area (MPA). As defined here, the potential AOO provides spatially-explicit measures of species ranges which are permanent in time and scarcely affected by sampling bias. The overestimation of these measures may be reduced using higher thresholds of habitat suitability, but standard rules as the MPA...
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
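The estimator itself is simple: the sample range divided by d(n) estimates σ, and a(n) corrects bias in the mean, giving a CV estimate. A sketch under stated assumptions: a normal (not skewed) population is used so that the tabulated value d(10) ≈ 3.078 applies, and a(n) is taken as 1 for the symmetric case.

```python
import numpy as np

def cv_high_low(sample, d_n, a_n=1.0):
    """Range estimator of the coefficient of variation: sigma is
    estimated by range/d_n, the mean bias-adjusted by a_n (d_n and
    a_n are tabulated per distribution and sample size)."""
    r = np.max(sample) - np.min(sample)
    return (r / d_n) / (a_n * np.mean(sample))

# normal case, n = 10: standardized mean range d_n is about 3.078
rng = np.random.default_rng(4)
cvs = [cv_high_low(rng.normal(50.0, 5.0, 10), d_n=3.078) for _ in range(3000)]
cv_est = float(np.mean(cvs))    # true CV is 5/50 = 0.10
```

Rhiel's contribution is supplying d(n) and a(n) for skewed distributions; the code above would be reused unchanged with those tabulated constants.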
Nussbaumer Silvio
2010-08-01
Full Text Available Abstract Background The aims of this study were to evaluate the construct validity (known group), concurrent validity (criterion based) and test-retest (intra-rater) reliability of manual goniometers to measure passive hip range of motion (ROM) in femoroacetabular impingement patients and healthy controls. Methods Passive hip flexion, abduction, adduction, internal and external rotation ROMs were simultaneously measured with a conventional goniometer and an electromagnetic tracking system (ETS) in two different testing sessions. A total of 15 patients and 15 sex- and age-matched healthy controls participated in the study. Results The goniometer provided greater hip ROM values compared to the ETS (range 2.0-18.9 degrees). Conclusions The present study suggests that goniometer-based assessments considerably overestimate hip joint ROM by measuring intersegmental angles (e.g., thigh flexion on trunk for hip flexion) rather than true hip ROM. It is likely that uncontrolled pelvic rotation and tilt, due to difficulties in placing the goniometer properly and in performing the anatomically correct ROM, contribute to the overestimation of the arc of these motions. Nevertheless, conventional manual goniometers can be used with confidence for longitudinal assessments in the clinic.
Huang, Liping; Crino, Michelle; Wu, Jason Hy
2016-01-01
BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean p...
Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P
2017-03-01
In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capturing system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method.
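The geometric sphere fit at the core of the GSF method reduces to fitting a sphere to marker trajectories, which has a closed-form algebraic least-squares solution. A sketch with synthetic marker data (the coordinates are invented; a real pipeline would fit each thigh marker's path about the pelvis-fixed frame).

```python
import numpy as np

def sphere_fit(points):
    """Algebraic least-squares sphere fit: from |p - c|^2 = r^2 one
    gets the linear system 2 p . c + (r^2 - |c|^2) = |p|^2."""
    p = np.asarray(points, float)
    a = np.hstack([2 * p, np.ones((len(p), 1))])
    b = np.sum(p ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    centre, k = sol[:3], sol[3]
    radius = np.sqrt(k + centre @ centre)
    return centre, radius

# synthetic marker path on a sphere of radius 0.25 m about (0.1, -0.05, 0.9)
rng = np.random.default_rng(5)
u, v = rng.uniform(0, np.pi / 2, 200), rng.uniform(0, np.pi / 2, 200)
pts = np.c_[np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)] * 0.25
pts += np.array([0.1, -0.05, 0.9])
c, r = sphere_fit(pts)
```

Because only a patch of the sphere is swept during a functional trial, the fit is sensitive to how the trial is performed, consistent with the study's conclusion that trial performance must be standardized.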
Annegret Grimm
Full Text Available Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
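The baseline the paper modifies is the Lincoln-Petersen mark-recapture estimator, shown here in Chapman's bias-corrected form. The capture counts are invented for illustration.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of closed
    population size: n1 animals marked, n2 captured on the second
    occasion, m2 of those being recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# e.g. 60 geckos marked, 50 caught later, 14 of them already marked
n_hat = chapman_estimate(n1=60, n2=50, m2=14)
```

The classical form n1*n2/m2 is unbiased only for large samples and undefined when m2 = 0; Chapman's +1 correction addresses both, which matters for the sparse-data setting the abstract emphasizes.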
An Algorithm for Joint Estimating Range, DOA and Frequency of Near-Field Sources
CHENJianfeng; ZHANGXianda; WUYuntao
2004-01-01
Most existing eigendecomposition-based localization methods make the plane-wave assumption in order to estimate the signal parameters of multiple sources. This paper proposes a novel joint algorithm for the ranges, DOAs and frequencies of multiple narrowband sources in the near field. The algorithm uses the estimating signal parameters via rotational invariance technique (ESPRIT) based on cumulant-domain signal subspaces. The new algorithm does not require searching for spectral peaks or pairing among parameters, and is robust to additive Gaussian noise owing to its use of fourth-order cumulants. The performance of the new algorithm is confirmed by computer simulations.
Field Oriented Control for Rotor Position Estimation of IPM Drives over a Wide Speed Range
Ekhlas Kadhum
2013-01-01
Full Text Available A field oriented control strategy for interior permanent magnet (IPM) synchronous motor drives over a wide speed range is presented. Rotor position estimation using the model reference adaptive system method for an IPM drive without a mechanical sensor is illustrated, considering the effects of cross-saturation between the d and q axes. The cross-saturation between the d and q axes has been calculated by finite-element analysis. The inductance measurement accounts for the cross-saturation, which is used to obtain the suitable id characteristics in the base and flux-weakening regions. The simulation results show that the rotor position estimation accuracy was improved. Various dynamic conditions have been investigated.
A New Algorithm for Joint Range-DOA-Frequency Estimation of Near-Field Sources
Jian-Feng Chen
2004-03-01
Full Text Available This paper studies the joint estimation problem of ranges, DOAs, and frequencies of near-field narrowband sources and proposes a new computationally efficient algorithm, which employs a symmetric uniform linear array, uses eigenvalues together with the corresponding eigenvectors of two properly designed matrices to estimate signal parameters, and does not require searching for spectral peak or pairing among parameters. In addition, the proposed algorithm can be applied in arbitrary Gaussian noise environment since it is based on the fourth-order cumulants, which is verified by extensive computer simulations.
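The search-free, pairing-free property claimed here comes from ESPRIT-style rotational invariance. As a simplified illustration, below is least-squares ESPRIT for far-field DOAs on a uniform linear array; the paper's near-field range/frequency machinery and its fourth-order-cumulant matrices are omitted, and the scenario values are invented.

```python
import numpy as np

def esprit_doa(x, n_src, d=0.5):
    """LS-ESPRIT for a ULA (spacing d in wavelengths): the rotation
    between the two overlapping subarrays encodes the DOAs, so no
    spectral search is needed."""
    r = x @ x.conj().T / x.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(r)
    es = vecs[:, -n_src:]                    # signal subspace
    # solve es[:-1] @ phi = es[1:] (subarray rotational invariance)
    phi = np.linalg.lstsq(es[:-1], es[1:], rcond=None)[0]
    w = np.angle(np.linalg.eigvals(phi))     # = 2*pi*d*sin(theta)
    return np.rad2deg(np.arcsin(w / (2 * np.pi * d)))

rng = np.random.default_rng(6)
m, n = 8, 500
k = np.arange(m)
angles = [-10.0, 25.0]
a = np.exp(2j * np.pi * 0.5 * np.outer(k, np.sin(np.deg2rad(angles))))
s = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
x = a @ s + 0.05 * (rng.standard_normal((m, n))
                    + 1j * rng.standard_normal((m, n)))
doas = np.sort(esprit_doa(x, 2))
```

The eigenvalues of the rotation matrix deliver all sources at once, which is the sense in which such algorithms avoid both spectral peak searching and parameter pairing.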
Rhiel, G Steven
2010-02-01
In 2007, Rhiel presented a technique to estimate the coefficient of variation from the range when sampling from skewed distributions. To provide an unbiased estimate, a correction factor (a(n)) for the mean was included. Numerical correction factors for a number of skewed distributions were provided. In a follow-up paper, he provided a proof he claimed showed the correction factor was independent of the mean and standard deviation, making the factors useful as these parameters vary; however, that proof did not establish independence. Herein is a proof which establishes the independence.
Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey
2015-01-01
Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…
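For context, coefficient alpha, the estimator the abstract argues is often inappropriate, can be computed in a few lines; this is the generic textbook formula, not the authors' hierarchical composite-reliability estimator:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```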
Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C
2014-09-01
In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, by non-experts, and by the Demonstrator of a newly developed software system on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and for the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered a slightly higher MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of child pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted in images.
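The accuracy metric used above, the mean absolute error between estimated and true ages, is straightforward to compute; a minimal sketch with made-up numbers:

```python
def mean_absolute_error(estimated, actual):
    """MAE between estimated and true values (e.g., ages in years)."""
    return sum(abs(e, ) if False else abs(e - a) for e, a in zip(estimated, actual)) / len(actual)
```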
Tobias K. Kohoutek
2013-02-01
Full Text Available We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between the GIS and the Time-of-Flight (ToF) range camera, further complicated by a markerless configuration. We propose to estimate the camera’s pose solely based on matching of GIS objects and their detected location in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance).
Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging
Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben
2013-01-01
When extracting energy from the wind using horizontal axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner mounted light detection and ranging (LIDAR) device was developed and tested. In this study, the simulation parameter space is extended to include higher levels of turbulence intensity. Furthermore, the method is applied to experimental data and compared with met-mast data corrected for a calibration error that was not discovered during previous work. Finally, the shortcomings of using a spinner mounted LIDAR for yaw error estimation are discussed. The extended simulation study shows that with the applied method, the yaw error can be estimated with a precision of a few degrees, even in highly turbulent flows. Applying the method to experimental data reveals an average …
System Estimation of Panel Data Models under Long-Range Dependence
Ergemen, Yunus Emre
A general dynamic panel data model is considered that incorporates individual and interactive fixed effects, allowing for contemporaneous correlation in model innovations. The model accommodates general stationary or nonstationary long-range dependence through interactive fixed effects and innovations, removing the necessity to perform a priori unit-root or stationarity testing. Moreover, persistence in innovations and interactive fixed effects allows for cointegration; innovations can also have vector-autoregressive dynamics; deterministic trends can be featured. Estimations are performed …
Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution
2010-09-01
… $= \sum_{k=1}^{K}\left[-\frac{(d_k - A\,p_k(R) - B)^2}{2\sigma^2} + \ln\frac{1}{\sqrt{2\pi}\,\sigma}\right]$ (4.4). Because the range and amplitude are both unknown parameters, the estimation … Setting it equal to zero results in $\sum_{k=1}^{K}\left[\frac{2(d_k - A\,p_k(R) - B)}{2\sigma^2}\right]p_k(R) = 0$ (4.6), where the term that does not depend on $A$ has been dropped.
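Assuming the Gaussian model $d_k = A\,p_k(R) + B + \text{noise}$ implied by Eq. (4.4), solving Eq. (4.6) for the amplitude gives a closed form, $A = \sum_k (d_k - B)p_k / \sum_k p_k^2$; the function and variable names below are illustrative, not the thesis's code:

```python
import numpy as np

def amplitude_mle(d, p, B=0.0):
    """Closed-form ML amplitude for d_k = A*p_k(R) + B + Gaussian noise,
    from setting the derivative in Eq. (4.6) to zero:
    A = sum((d_k - B) * p_k) / sum(p_k**2)."""
    d = np.asarray(d, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.dot(d - B, p) / np.dot(p, p)
```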
Uncertainty-based Estimation of the Secure Range for ISO New England Dynamic Interchange Adjustment
Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di; Hou, Zhangshuan; Sun, Yannan; Maslennikov, S.; Luo, Xiaochuan; Zheng, T.; George, S.; Knowland, T.; Litvinov, E.; Weaver, S.; Sanchez, E.
2014-04-14
The paper proposes an approach to estimate the secure range for dynamic interchange adjustment, which assists system operators in scheduling the interchange with neighboring control areas. Uncertainties associated with various sources are incorporated. The proposed method is implemented in the dynamic interchange adjustment (DINA) tool developed by Pacific Northwest National Laboratory (PNNL) for ISO New England. Simulation results are used to validate the effectiveness of the proposed method.
A Maximum a Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis.
Li, Yuelong; Lee, Chul; Monga, Vishal
2017-03-01
High dynamic range (HDR) image synthesis from multiple low dynamic range exposures continues to be actively researched. The extension to HDR video synthesis is a topic of significant current interest due to potential cost benefits. For HDR video, a stiff practical challenge presents itself in the form of accurate correspondence estimation of objects between video frames. In particular, loss of data resulting from poor exposures and varying intensity makes conventional optical flow methods highly inaccurate. We avoid exact correspondence estimation by proposing a statistical approach via maximum a posteriori estimation, and under appropriate statistical assumptions and choice of priors and models, we reduce it to an optimization problem of solving for the foreground and background of the target frame. We obtain the background through rank minimization and estimate the foreground via a novel multiscale adaptive kernel regression technique, which implicitly captures local structure and temporal motion by solving an unconstrained optimization problem. Extensive experimental results on both real and synthetic data sets demonstrate that our algorithm is more capable of delivering high-quality HDR videos than current state-of-the-art methods, under both subjective and objective assessments. Furthermore, a thorough complexity analysis reveals that our algorithm achieves better complexity-performance tradeoff than conventional methods.
Forde, David R.; Baron, Stephen W.; Scher, Christine D.; Stein, Murray B.
2012-01-01
This study examines the psychometric properties of the Childhood Trauma Questionnaire short form (CTQ-SF) with street youth who have run away or been expelled from their homes (N = 397). Internal reliability coefficients for the five clinical scales ranged from 0.65 to 0.95. Confirmatory Factor Analysis (CFA) was used to test the five-factor…
Robustness of Estimators of Long-Range Dependence and Self-Similarity under non-Gaussianity
Franzke, C.; Watkins, N. W.; Graves, T.; Gramacy, R.; Hughes, C.
2011-12-01
Long-range dependence and non-Gaussianity are ubiquitous in many natural systems like ecosystems, biological systems and climate. However, it is not always appreciated that both phenomena may occur together in natural systems and that self-similarity in a system can be a superposition of both phenomena. These features, which are common in complex systems, impact the attribution of trends and the occurrence and clustering of extremes. The risk assessment of systems with these properties will lead to different outcomes (e.g. return periods) than the more common assumption of independence of extremes. Two paradigmatic models are discussed which can simultaneously account for long-range dependence and non-Gaussianity: Autoregressive Fractional Integrated Moving Average (ARFIMA) and Linear Fractional Stable Motion (LFSM). Statistical properties of estimators for long-range dependence and self-similarity are critically assessed. It is found that the most popular estimators can be biased in the presence of important features of many natural systems like trends and multiplicative noise. Also the long-range dependence and non-Gaussianity of two typical natural time series are discussed.
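One of the popular estimators such studies critique is the classical rescaled-range (R/S) statistic; a minimal sketch of an R/S Hurst-exponent estimate follows (the dyadic window sizes and simple fitting are simplified choices, not the authors' protocol):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent:
    fit log(mean R/S) against log(window size) over dyadic windows."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            z = np.cumsum(w - w.mean())   # cumulative deviations from mean
            s = w.std(ddof=0)             # window-scale standard deviation
            if s > 0:
                rs.append((z.max() - z.min()) / s)
        if rs:
            sizes.append(size)
            rs_means.append(np.mean(rs))
        size *= 2
    slope, _intercept = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```

Note that, as the abstract warns, this estimator is biased by trends and short samples; for white noise it should hover near 0.5.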
A simplified Excel® algorithm for estimating the least limiting water range of soils
Leão Tairone Paiva
2004-01-01
Full Text Available The least limiting water range (LLWR) of soils has been employed as a methodological approach for evaluation of soil physical quality in different agricultural systems, including forestry, grasslands and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered the popularization of its use among researchers and soil managers. Taking this into account, this work proposes and describes a simplified algorithm developed in Excel® software for quantification of the LLWR, including the calculation of the critical bulk density, at which the LLWR becomes zero. Despite the simplicity of the procedures and numerical optimization techniques used, the nonlinear regression produced reliable results when compared to those found in the literature.
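The LLWR logic reduces to intersecting upper and lower water-content limits; a hedged sketch in Python rather than Excel®, with hypothetical parameter names and without the paper's nonlinear regression step:

```python
def llwr(theta_fc, theta_wp, theta_afp, theta_pr):
    """Least limiting water range (cm3/cm3), a minimal sketch.

    Upper limit: the drier of field capacity (theta_fc) and the water
    content at the limiting air-filled porosity (theta_afp).
    Lower limit: the wetter of wilting point (theta_wp) and the water
    content at limiting penetration resistance (theta_pr).
    The LLWR collapses to zero when the limits cross, which defines
    the critical bulk density in the paper's terminology."""
    upper = min(theta_fc, theta_afp)
    lower = max(theta_wp, theta_pr)
    return max(0.0, upper - lower)
```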
Su, G; Guldbrandtsen, B; Gregersen, V R
2010-01-01
… were available. In the analysis, all SNP were fitted simultaneously as random effects in a Bayesian variable selection model, which allows heterogeneous variances for different SNP markers. The response variables were the official EBV. Direct GEBV were calculated as the sum of individual SNP effects for all 18 index traits. Reliability of GEBV was assessed by the squared correlation between GEBV and conventional EBV (r²GEBV,EBV), and expected reliability was obtained from the prediction error variance using a 5-fold cross validation. Squared correlations between GEBV and published EBV (without any … that genomic selection can greatly improve the accuracy of preselection for young bulls compared with traditional selection based on parent average information.
Traveling-wave tube reliability estimates, life tests, and space flight experience
Lalli, V. R.; Speck, C. E.
1977-01-01
The infant-mortality, useful-life, and wearout phases of traveling-wave tube (TWT) life are considered. The performance of existing developmental tubes, flight experience, and sequential hardware testing are evaluated. The reliability history of TWTs in space applications is documented by considering: (1) the generic parts of the tube in light of the manner in which their design and operation affect the ultimate reliability of the device, (2) the flight experience of medium-power tubes, and (3) the available life test data for existing space-qualified TWTs in addition to those of high-power devices.
Hughes, C; Adlam, A; Happé, F; Jackson, J; Taylor, A; Caspi, A
2000-05-01
Although tests of young children's understanding of mind have had a remarkable impact upon developmental and clinical psychological research over the past 20 years, very little is known about their reliability. Indeed, the only existing study of test-retest reliability suggests unacceptably poor results for first-order false-belief tasks (Mayes, Klin, Tercyak, Cicchetti, & Cohen, 1996), although this may in part reflect the nonstandard (video-based) procedures adopted by these authors. The present study had four major aims. The first was to re-examine the reliability of false-belief tasks, using more standard (puppet and storybook) procedures. The second was to assess whether the test-retest reliability of false-belief task performance is equivalent for children of contrasting ability levels. The third aim was to explore whether adopting an aggregate approach improves the reliability with which children's early mental-state awareness can be measured. The fourth aim was to examine for the first time the test-retest reliability of children's performances on more advanced theory-of-mind tasks. Our results suggest that most standard and advanced false-belief tasks do in fact show good test-retest reliability and internal consistency, with very strong test-retest correlations between aggregate scores for children of all levels of ability.
Ren, Yihui; Eubank, Stephen; Nath, Madhurima
2016-10-01
Network reliability is the probability that a dynamical system composed of discrete elements interacting on a network will be found in a configuration that satisfies a particular property. We introduce a reliability property, Ising feasibility, for which the network reliability is the Ising model's partition function. As shown by Moore and Shannon, the network reliability can be separated into two factors: structural, solely determined by the network topology, and dynamical, determined by the underlying dynamics. In this case, the structural factor is known as the joint density of states. Using methods developed to approximate the structural factor for other reliability properties, we simulate the joint density of states, yielding an approximation for the partition function. Based on a detailed examination of why naïve Monte Carlo sampling gives a poor approximation, we introduce a parallel scheme for estimating the joint density of states using a Markov-chain Monte Carlo method with a spin-exchange random walk. This parallel scheme makes simulating the Ising model in the presence of an external field practical on small computer clusters for networks with arbitrary topology with ~10^6 energy levels and more than 10^308 microstates.
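Once a density of states g(E) has been estimated, the partition function follows by direct summation, Z = Σ_E g(E) e^(-βE); a minimal log-space sketch (the density-of-states values in the test are a toy two-level system, not simulation output):

```python
import math

def log_partition_function(dos, beta):
    """ln Z = ln sum_E g(E) * exp(-beta * E), computed from a
    (possibly estimated) density of states {E: g(E)} via log-sum-exp
    for numerical stability at large g(E) or low temperature."""
    terms = [math.log(g) - beta * e for e, g in dos.items() if g > 0]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))
```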
Zeeshan Ali Siddiqui
2016-01-01
Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. As hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it anew, and it is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, characterized by the usage frequency of the component. The usage frequency decides the weight of each component, and according to their weights, components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps find the most suitable alternative (software component) from a set of feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and its dimensionless measure ranks the components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
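The ratio-analysis core of MOORA (without the fuzzy extension used in the paper) can be sketched as follows; the vector normalization and beneficial/cost signs follow the standard crisp MOORA formulation:

```python
import numpy as np

def moora_rank(matrix, beneficial):
    """Rank alternatives by the MOORA ratio system.

    matrix     : (alternatives x criteria) performance scores
    beneficial : per-criterion flags (True = larger is better)
    Returns alternative indices, best first."""
    x = np.asarray(matrix, dtype=float)
    norm = x / np.sqrt((x ** 2).sum(axis=0))   # vector normalization
    sign = np.where(beneficial, 1.0, -1.0)
    scores = (norm * sign).sum(axis=1)         # beneficial minus cost sums
    return list(np.argsort(-scores))
```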
The home-range concept: are traditional estimators still relevant with modern telemetry technology?
Kie, John G; Matthiopoulos, Jason; Fieberg, John; Powell, Roger A; Cagnacci, Francesca; Mitchell, Michael S; Gaillard, Jean-Michel; Moorcroft, Paul R
2010-07-27
Recent advances in animal tracking and telemetry technology have allowed the collection of location data at an ever-increasing rate and accuracy, and these advances have been accompanied by the development of new methods of data analysis for portraying space use, home ranges and utilization distributions. New statistical approaches include data-intensive techniques such as kriging and nonlinear generalized regression models for habitat use. In addition, mechanistic home-range models, derived from models of animal movement behaviour, promise to offer new insights into how home ranges emerge as the result of specific patterns of movements by individuals in response to their environment. Traditional methods such as kernel density estimators are likely to remain popular because of their ease of use. Large datasets make it possible to apply these methods over relatively short periods of time such as weeks or months, and these estimates may be analysed using mixed effects models, offering another approach to studying temporal variation in space-use patterns. Although new technologies open new avenues in ecological research, our knowledge of why animals use space in the ways we observe will only advance by researchers using these new technologies and asking new and innovative questions about the empirical patterns they observe.
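As a toy illustration of the utilization-distribution idea behind kernel home-range estimators, here is a coarse histogram stand-in; a real analysis would use kernel density estimation with proper smoothing-parameter selection, and all names and defaults below are illustrative:

```python
import numpy as np

def home_range_area(xs, ys, cell=1.0, isopleth=0.95):
    """Crude utilization-distribution home range: grid the location
    fixes, then sum the areas of the densest cells that together hold
    `isopleth` of all locations (a histogram stand-in for a kernel
    density estimator)."""
    xi = np.floor(np.asarray(xs) / cell).astype(int)
    yi = np.floor(np.asarray(ys) / cell).astype(int)
    counts = {}
    for c in zip(xi.tolist(), yi.tolist()):
        counts[c] = counts.get(c, 0) + 1
    total = len(xs)
    used, n_cells = 0, 0
    for c in sorted(counts.values(), reverse=True):  # densest cells first
        used += c
        n_cells += 1
        if used >= isopleth * total:
            break
    return n_cells * cell * cell
```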
Kori Blankenship
2015-04-01
Full Text Available Reference ecological conditions offer important context for land managers as they assess the condition of their landscapes and provide benchmarks for desired future conditions. State-and-transition simulation models (STSMs are commonly used to estimate reference conditions that can be used to evaluate current ecosystem conditions and to guide land management decisions and activities. The LANDFIRE program created more than 1,000 STSMs and used them to assess departure from a mean reference value for ecosystems in the United States. While the mean provides a useful benchmark, land managers and researchers are often interested in the range of variability around the mean. This range, frequently referred to as the historical range of variability (HRV, offers model users improved understanding of ecosystem function, more information with which to evaluate ecosystem change and potentially greater flexibility in management options. We developed a method for using LANDFIRE STSMs to estimate the HRV around the mean reference condition for each model state in ecosystems by varying the fire probabilities. The approach is flexible and can be adapted for use in a variety of ecosystems. HRV analysis can be combined with other information to help guide complex land management decisions.
R&D program benefits estimation: DOE Office of Electricity Delivery and Energy Reliability
None, None
2006-12-04
The overall mission of the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE) is to lead national efforts to modernize the electric grid, enhance the security and reliability of the energy infrastructure, and facilitate recovery from disruptions to the energy supply. In support of this mission, OE conducts a portfolio of research and development (R&D) activities to advance technologies to enhance electric power delivery. Multiple benefits are anticipated to result from the deployment of these technologies, including higher quality and more reliable power, energy savings, and lower cost electricity. In addition, OE engages State and local government decision-makers and the private sector to address issues related to the reliability and security of the grid, including responding to national emergencies that affect energy delivery. The OE R&D activities comprise four R&D lines: High Temperature Superconductivity (HTS), Visualization and Controls (V&C), Energy Storage and Power Electronics (ES&PE), and Distributed Systems Integration (DSI).
Wilson, Celia M.
2010-01-01
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
Ong, M M; Kihara, R; Zentler, J M; Kreitzer, B R; DeHope, W J
2007-06-27
At Lawrence Livermore National Laboratory (LLNL), our flash X-ray accelerator (FXR) is used on multi-million dollar hydrodynamic experiments. Because of the importance of the radiographs, FXR must be ultra-reliable. Flash linear accelerators that can generate a 3 kA beam at 18 MeV are very complex. They have thousands, if not millions, of critical components that could prevent the machine from performing correctly. For the last five years, we have quantified and are tracking component failures. From this data, we have determined that the reliability of the high-voltage gas-switches that initiate the pulses, which drive the accelerator cells, dominates the statistics. The failure mode is a single-switch pre-fire that reduces the energy of the beam and degrades the X-ray spot-size. The unfortunate result is a lower resolution radiograph. FXR is a production machine that allows only a modest number of pulses for testing. Therefore, reliability switch testing that requires thousands of shots is performed on our test stand. Study of representative switches has produced pre-fire statistical information and probability distribution curves. This information is applied to FXR to develop test procedures and determine individual switch reliability using a minimal number of accelerator pulses.
Clarifying the Blurred Image: Estimating the Inter-Rater Reliability of Performance Assessments.
Moore, Alan D.; Young, Suzanne
As schools move toward performance assessment, there is increasing discussion of using these assessments for accountability purposes. When used for making decisions, performance assessments must meet high standards of validity and reliability. One major source of unreliability in performance assessments is interrater disagreement. In this paper,…
Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo
2012-01-01
… and maximum entropy modeling to assess whether different sampling (expert versus systematic surveys) may affect AOO estimates based on habitat suitability maps, and the differences between such measurements and traditional coarse-grid methods. Fine-scale models performed robustly and were not influenced by survey protocols, providing similar habitat suitability outputs with high spatial agreement. Model-based estimates of potential AOO were significantly smaller than AOO measures obtained from coarse-scale grids, even if the first were obtained from conservative thresholds based on the Minimal Predicted Area (MPA). As defined here, the potential AOO provides spatially explicit measures of species ranges which are permanent in time and scarcely affected by sampling bias. The overestimation of these measures may be reduced using higher thresholds of habitat suitability, but standard rules such as the MPA …
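A traditional coarse-grid AOO measure of the kind the abstract compares against can be sketched in a few lines; the 2 × 2 km default follows common IUCN practice and is an assumption here, not the paper's grid:

```python
def aoo_km2(occurrences, cell_km=2.0):
    """Coarse-grid area of occupancy: count the grid cells
    (2 x 2 km by default) containing at least one occurrence
    record, then multiply by the cell area."""
    cells = {(int(x // cell_km), int(y // cell_km)) for x, y in occurrences}
    return len(cells) * cell_km * cell_km
```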
CT scan range estimation using multiple body parts detection: let PACS learn the CT image content.
Wang, Chunliang; Lundström, Claes
2016-02-01
The aim of this study was to develop an efficient CT scan range estimation method that is based on the analysis of image data itself instead of metadata analysis. This makes it possible to quantitatively compare the scan range of two studies. In our study, 3D stacks are first projected to 2D coronal images via a ray casting-like process. Trained 2D body part classifiers are then used to recognize different body parts in the projected image. The detected candidate regions go into a structure grouping process to eliminate false-positive detections. Finally, the scale and position of the patient relative to the projected figure are estimated based on the detected body parts via a structural voting. The start and end lines of the CT scan are projected to a standard human figure. The position readout is normalized so that the bottom of the feet represents 0.0, and the top of the head is 1.0. Classifiers for 18 body parts were trained using 184 CT scans. The final application was tested on 136 randomly selected heterogeneous CT scans. Ground truth was generated by asking two human observers to mark the start and end positions of each scan on the standard human figure. When compared with the human observers, the mean absolute error of the proposed method is 1.2% (max: 3.5%) and 1.6% (max: 5.4%) for the start and end positions, respectively. We proposed a scan range estimation method using multiple body parts detection and relative structure position analysis. In our preliminary tests, the proposed method delivered promising results.
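The first step described above, collapsing the 3D stack to a 2D coronal image, can be approximated with a simple average-intensity projection; this is a stand-in for the paper's ray-casting-like process, not its actual implementation:

```python
import numpy as np

def coronal_projection(volume):
    """Project a CT volume indexed (z, y, x) to a 2D coronal image
    by averaging attenuation along the anterior-posterior (y) axis."""
    vol = np.asarray(volume, dtype=float)
    return vol.mean(axis=1)   # result has shape (z, x)
```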
Haroldson, Mark A.; Schwartz, Charles C.; , Daniel D. Bjornlie; , Daniel J. Thompson; , Kerry A. Gunther; , Steven L. Cain; , Daniel B. Tyers; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.
Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F
2012-07-07
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
Sun, Haitao
2015-07-09
The thermally activated delayed fluorescence (TADF) mechanism has recently attracted much interest in the field of organic light-emitting diodes (OLEDs). TADF relies on the presence of a very small energy gap between the lowest singlet and triplet excited states. Here, we demonstrate that time-dependent density functional theory (TD-DFT) in the Tamm-Dancoff Approximation can be very successful in the calculations of the lowest singlet and triplet excitation energies and the corresponding singlet-triplet gap when using nonempirically tuned range-separated functionals. Such functionals provide very good estimates in a series of 17 molecules used in TADF-based OLED devices, with mean absolute deviations of 0.15 eV for the vertical singlet excitation energies and 0.09 eV [0.07 eV] for the adiabatic [vertical] singlet-triplet energy gaps as well as low relative errors and high correlation coefficients compared to the corresponding experimental values. They significantly outperform conventional functionals, a feature which is rationalized on the basis of the amount of exact-exchange included and the delocalization error. The present work provides a reliable theoretical tool for the prediction and development of novel TADF-based materials with low singlet-triplet energetic splittings.
Equation reliability of soil ingestion estimates in mass-balance soil ingestion studies.
Stanek III, Edward J; Xu, Bo; Calabrese, Edward J
2012-03-01
Exposure to chemicals from ingestion of contaminated soil may be an important pathway with potential health consequences for children. A key parameter used in assessing this exposure is the quantity of soil ingested, with estimates based on four short longitudinal mass-balance soil ingestion studies among children. The estimates use trace elements in the soil with low bioavailability that are minimally present in food. Soil ingestion corresponds to the excess trace element amounts excreted, after subtracting trace element amounts ingested from food and medications, expressed as an equivalent quantity of soil. The short duration of mass-balance studies, different concentrations of trace elements in food and soil, and potential for trace elements to be ingested from other nonsoil, nonfood sources contribute to variability and bias in the estimates. We develop a stochastic model for a soil ingestion estimator based on a trace element that accounts for critical features of the mass-balance equation. Using results from four mass-balance soil ingestion studies, we estimate the accuracy of soil ingestion estimators for different trace elements, and identify subjects where the difference between Al and Si estimates is larger (>3 RMSE) than expected. Such large differences occur in fewer than 12% of subjects in each of the four studies. We recommend the use of such criteria to flag and exclude subjects from soil ingestion analyses. © 2011 Society for Risk Analysis.
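The core mass-balance calculation is a subtraction and a unit conversion; a hedged sketch with illustrative element amounts (the function name and units are assumptions, not the authors' model, which also handles medications and error sources):

```python
def soil_ingestion_mg(excreted_ug, food_ug, soil_conc_ug_per_g):
    """Mass-balance soil ingestion for one trace element: the excess
    excreted amount over food intake, expressed as an equivalent soil
    mass. Amounts in micrograms; soil concentration in ug per gram."""
    excess_ug = excreted_ug - food_ug
    grams = excess_ug / soil_conc_ug_per_g
    return max(0.0, grams * 1000.0)   # grams -> milligrams of soil
```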
Williamson, Laura D; Brookes, Kate L; Scott, Beth E; Graham, Isla M; Bradbury, Gareth; Hammond, Philip S; Thompson, Paul M; McPherson, Jana
2016-01-01
...‐based visual surveys. Surveys of cetaceans using acoustic loggers or digital cameras provide alternative methods to estimate relative density that have the potential to reduce cost and provide a verifiable record of all detections...
Aghamousa, Amir
2014-01-01
The observable time delays between the multiple images of strong lensing systems with time-variable sources can provide valuable information for probing the expansion history of the Universe. Estimating these time delays can be very challenging owing to complexities of the observed data, which contains seasonal gaps, various noise sources, and systematics such as unknown microlensing effects. In this paper we introduce a novel approach to estimate the time delays for strong lensing systems, implementing various statistical methods of data analysis, including smoothing and cross-correlation. The method introduced in this paper was recently used in the TDC0 and TDC1 Strong Lens Time Delay Challenges and has shown its power in reliable and precise estimation of time delays for data with different complexities.
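The cross-correlation idea can be sketched minimally: shift one light curve over a grid of trial delays and pick the delay maximizing the correlation with the other. This is an illustration of the general technique, not the authors' exact smoothing pipeline; the sinusoidal light curves and the 5-day delay are synthetic assumptions.

```python
import numpy as np

def estimate_delay(t, a, b, trial_delays):
    """Return the trial delay that best aligns light curve b with light curve a."""
    best_d, best_r = trial_delays[0], -np.inf
    for d in trial_delays:
        b_shifted = np.interp(t, t - d, b)       # b advanced by d, i.e. b(t + d)
        r = np.corrcoef(a, b_shifted)[0, 1]      # Pearson correlation with a
        if r > best_r:
            best_d, best_r = d, r
    return best_d

t = np.linspace(0.0, 100.0, 2001)                # observation epochs [days]
true_delay = 5.0
a = np.sin(0.3 * t)                              # image A light curve (synthetic)
b = np.sin(0.3 * (t - true_delay))               # image B: A delayed by 5 days
d_hat = estimate_delay(t, a, b, np.arange(0.0, 10.01, 0.1))
```

With evenly sampled noise-free data the recovered delay falls on the grid point nearest the true value; seasonal gaps and microlensing make the real problem far harder, which is what motivates the paper's method.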
Thie, Johnson; Sriram, Prema; Klistorner, Alexander; Graham, Stuart L
2012-01-01
This paper describes a method to reliably estimate the latency of the multifocal visual evoked potential (mfVEP) and a classifier to automatically separate reliable mfVEP traces from noisy traces. We also investigated which mfVEP peaks have reproducible latency across recording sessions. The proposed method performs cross-correlation between mfVEP traces and second-order Gaussian wavelet kernels and measures the timing of the resulting peaks. These peak times, offset by the wavelet kernel's peak time, represent the mfVEP latency. The classifier algorithm performs an exhaustive series of leave-one-out classifications to find the champion mfVEP features that are most frequently selected to distinguish reliable traces from noisy traces. Monopolar mfVEP recording was performed on 10 subjects using the Accumap1™ system. A pattern-reversal protocol was used with 24 sectors and eccentricity up to 33°. A bipolar channel was recorded at the midline with electrodes placed above and below the inion. The largest mfVEP peak and the immediately preceding peak had the smallest latency variability across recording sessions, about ±2 ms. The optimal classifier selected three champion features, namely, the signal-to-noise ratio, the signal's peak magnitude response from 5 to 15 Hz, and the peak-to-peak amplitude of the trace between 70 and 250 ms. The classifier algorithm can separate reliable and noisy traces with a high success rate, typically 93%. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
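The latency measure can be sketched as a matched-filter step: cross-correlate a trace with a second-order Gaussian (Ricker) kernel and take the time of the resulting peak. The synthetic trace, sampling rate, and kernel width below are illustrative assumptions, not the paper's recording parameters.

```python
import numpy as np

fs = 1000.0                               # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.5, 1.0 / fs)         # 0-500 ms trace

def ricker(width_s, fs):
    """Second-order Gaussian (Ricker) wavelet kernel."""
    tt = np.arange(-3.0 * width_s, 3.0 * width_s, 1.0 / fs)
    return (1.0 - (tt / width_s) ** 2) * np.exp(-0.5 * (tt / width_s) ** 2)

# Synthetic mfVEP-like trace: a single Ricker-shaped peak at 120 ms plus mild noise.
width = 0.015
peak_t = 0.120
trace = (1.0 - ((t - peak_t) / width) ** 2) * np.exp(-0.5 * ((t - peak_t) / width) ** 2)
trace += 0.05 * np.random.default_rng(0).standard_normal(t.size)

kernel = ricker(width, fs)
xc = np.correlate(trace, kernel, mode="same")   # symmetric kernel: peak stays aligned
latency_est = t[np.argmax(xc)]                  # close to 0.120 s
```

Because the kernel is symmetric, the cross-correlation peak sits at the feature's center, so no separate kernel-peak offset is needed in this simplified sketch.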
韩明
2013-01-01
Previously, the author introduced a new parameter estimation method, the E-Bayesian estimation method, to estimate the reliability of the binomial distribution: the definition of the E-Bayesian estimate of the reliability was given, together with formulas for the E-Bayesian and hierarchical Bayesian estimates, but the properties of the E-Bayesian estimate were not provided. In this paper, properties of the E-Bayesian estimate of the reliability of the binomial distribution are given.
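A sketch of the quantities involved, using the common E-Bayesian formulation for a binomial reliability with a conjugate Beta(a, b) prior; the exact hyperprior π(a, b) and its domain D used in the paper are not reproduced here. With X successes observed in n trials, the Bayes estimate under squared-error loss and its expectation over the hyperprior are

```latex
\hat{R}_{B}(a,b) = \frac{X + a}{n + a + b}, \qquad
\hat{R}_{EB} = \iint_{D} \hat{R}_{B}(a,b)\,\pi(a,b)\,\mathrm{d}a\,\mathrm{d}b .
```

The paper's contribution is establishing properties (such as ordering and asymptotic behavior) of this averaged estimate.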
A new lifetime estimation model for a quicker LED reliability prediction
Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.
2014-09-01
LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs were aged for a reliability analysis. The LEDs were grouped into nine current-temperature stress conditions, with stress driving currents between 350 mA and 1 A and ambient temperatures between 85°C and 120°C. Using integrating-sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics was carried out. The results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage-resistance depreciation were built from electrical and optical measurements taken during the aging tests. Combining these models enables a new method for quicker LED lifetime prediction, and the two models have been used for lifetime predictions of these LEDs.
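A minimal sketch of how a lumen-depreciation model yields a lifetime prediction, assuming the common exponential form L(t) = B·exp(-αt) and the L70 criterion (time to 70% lumen maintenance); the data points and decay rate below are synthetic, not the paper's measurements.

```python
import numpy as np

hours = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
lumen = np.exp(-2e-5 * hours)             # synthetic lumen-maintenance data

# Fit ln(L) = ln(B) - alpha * t by linear least squares.
slope, intercept = np.polyfit(hours, np.log(lumen), 1)
alpha, B = -slope, np.exp(intercept)

L70 = np.log(B / 0.70) / alpha            # hours until lumen output falls to 70%
```

Solving B·exp(-α·t) = 0.70 for t gives the extrapolated lifetime; combining this with a leakage-resistance model, as the paper does, lets abnormal fast-depreciation parts be predicted separately.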
Diabatic heating rate estimates from European Centre for Medium-Range Weather Forecasts analyses
Christy, John R.
1991-01-01
Vertically integrated diabatic heating rate estimates (H) calculated from 32 months of European Centre for Medium-Range Weather Forecasts daily analyses (May 1985-December 1987) are determined as residuals of the thermodynamic equation in pressure coordinates. Values for global, hemispheric, zonal, and grid point H are given as they vary over the time period examined. The distribution of H is compared with previous results and with outgoing longwave radiation (OLR) measurements. The most significant negative correlations between H and OLR occur for (1) tropical and Northern-Hemisphere mid-latitude oceanic areas and (2) zonal and hemispheric mean values for periods less than 90 days. Largest positive correlations are seen in periods greater than 90 days for the Northern Hemispheric mean and continental areas of North Africa, North America, northern Asia, and Antarctica. The physical basis for these relationships is discussed. An interyear comparison between 1986 and 1987 reveals the ENSO signal.
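The residual calculation described above takes the standard form in pressure coordinates (a sketch with conventional notation; the paper's discretization and averaging details may differ):

```latex
Q \approx c_p\!\left(\frac{\partial T}{\partial t} + \mathbf{V}\cdot\nabla_p T + \omega\,\frac{\partial T}{\partial p}\right) - \frac{RT}{p}\,\omega,
\qquad
H = \frac{1}{g}\int_{p_t}^{p_s} Q\,\mathrm{d}p ,
```

where ω is the pressure vertical velocity and the vertical integral runs from the top pressure p_t to the surface pressure p_s. Every term on the right is available from the analyses, so the diabatic heating Q is obtained as the imbalance of the adiabatic terms.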
Estimation of the RF Characteristics of Absorbing Materials in Broad RF Frequency Ranges
Fandos, R
2008-01-01
Absorbing materials are very often used in RF applications. Their electromagnetic characteristics (relative permittivity εr, loss tangent tan δ, and conductivity σ) are needed in order to obtain a high-quality design of the absorbing pieces in the frequency range of interest. Unfortunately, suppliers often do not provide these quantities. A simple technique to determine them, based on the RF measurement of the disturbance created by the insertion of a piece of absorber in a waveguide, is presented in this note. Results for samples of two different materials, silicon carbide and aluminum nitride, are presented. While the former has a negligible conductivity at the working frequencies, the conductivity of the latter has to be taken into account in order to obtain a meaningful estimation of εr and tan δ. The Kramers-Kronig equations have been applied to the data as a cross-check, confirming the results.
Do tests devised to detect recent HIV-1 infection provide reliable estimates of incidence in Africa?
Sakarovitch, Charlotte; Rouet, Francois; Murphy, Gary; Minga, Albert K; Alioum, Ahmadou; Dabis, Francois; Costagliola, Dominique; Salamon, Roger; Parry, John V; Barin, Francis
2007-05-01
The objective of this study was to assess the performance of 4 biologic tests designed to detect recent HIV-1 infections in estimating incidence in West Africa (BED, Vironostika, Avidity, and IDE-V3). These tests were assessed on a panel of 135 samples from 79 HIV-1-positive regular blood donors from Abidjan, Côte d'Ivoire, whose date of seroconversion was known (Agence Nationale de Recherches sur le SIDA et les Hépatites Virales 1220 cohort). The 135 samples included 26 from recently infected patients (<180 days), and 15 from patients with clinical AIDS. The performance of each assay in estimating HIV incidence was assessed through simulations. The modified commercial assays gave the best results for sensitivity (100% for both), and the IDE-V3 technique gave the best result for specificity (96.3%). In a context like Abidjan, with a 10% HIV-1 prevalence associated with a 1% annual incidence, the estimated test-specific annual incidence rates would be 1.2% (IDE-V3), 5.5% (Vironostika), 6.2% (BED), and 11.2% (Avidity). Most of the specimens falsely classified as incident cases were from patients infected for >180 days but <1 year. The authors conclude that none of the 4 methods could currently be used to estimate HIV-1 incidence routinely in Côte d'Ivoire but that further adaptations might enhance their accuracy.
Limits to the reliability of size-based fishing status estimation for data-poor stocks
Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders
2015-01-01
in more than 60% of the cases, and almost always correctly assess whether a stock is subject to overfishing. Adding information about age, i.e., assuming that growth rate and asymptotic size are known, does not improve the estimation. Only knowledge of the ratio between mortality and growth led...
Reliability-based weighting of visual and vestibular cues in displacement estimation
Horst, A.C. ter; Koppen, M.G.M.; Selen, L.P.J.; Medendorp, W.P.
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimat
Lauritsen, Jakob; Gundgaard, Maria G; Mortensen, Mette S
2014-01-01
Estimates of glomerular filtration rate (eGFR) are widely used when administering nephrotoxic chemotherapy. No studies performed in oncology patients have shown whether eGFR can safely substitute a measured GFR (mGFR) based on a marker method. We aimed to assess the validity of four major formula...
Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).
Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre
2007-05-01
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.
Recent advances of VADASE to enhance reliability and accuracy of real-time displacements estimation
Savastano, Giorgio; Fratarcangeli, Francesca; Chiara D'Achille, Maria; Mazzoni, Augusto; Crespi, Mattia
2017-04-01
VADASE (Variometric Approach for Displacements Analysis Stand-alone Engine) is a relatively new (2011) processing approach able to estimate velocities and displacements in real time in a global reference frame (ITRF), using high-rate (1 Hz or more) carrier-phase observations and broadcast products (orbits, clocks) collected by a stand-alone GNSS receiver, achieving an accuracy within 1-2 centimetres (usually better) over intervals of up to a few minutes. VADASE was originally developed within GNSS seismology, but it has also been applied to structural monitoring. It has been known from the very beginning that VADASE displacements may be impacted by two different effects: spurious spikes in the velocities due to outliers (in which case the displacements, obtained through integration of the velocities, are severely corrupted) and trends in the displacements (mainly due to broadcast orbit and clock errors). Moreover, for applications to earthquakes (seismic inversion), it is quite useful to estimate the so-called coseismic displacement in real time. This displacement could in theory also be estimated in post-processing mode, using GNSS data collected over suitably long intervals before and after the earthquake; however, in the case of strong earthquakes (for which VADASE can give significant contributions even quite close to the epicentre, since GNSS does not clip), a significant number of strong aftershocks usually follows the main shock in a short time, so that it may be very difficult to select such long data intervals. These three issues (outliers in velocity, trends in displacements, and real-time coseismic displacements) are addressed in recent advances of VADASE. Two strategies were introduced, based respectively on leave-one-out cross-validation (VADASE-LOO) for receiver-autonomous outlier detection and on a network augmentation strategy to filter the common trend out (A-VADASE); they can be combined (first VADASE-LOO, then A-VADASE) for a complete solution.
Estimated Wind River Range (Wyoming, USA) Glacier Melt Water Contributions to Agriculture
Larry Pochop
2009-10-01
In 2008, Wyoming was ranked 8th in barley production and 20th in hay production in the United States, and these crops support Wyoming's $800 million cattle industry. However, with a mean elevation of 2,040 meters, much of Wyoming has a limited crop growing season (as little as 60 days) and relies on late-summer and early-fall streamflow for agricultural water supply. Wyoming is host to over 80 glaciers, with the majority located in the Wind River Range. These "frozen reservoirs" provide a stable source of streamflow (glacier meltwater) during this critical late-summer and early-fall growing season. Given the potential impacts of climate change (increased temperatures resulting in glacier recession), quantification of glacier meltwater during the late-summer and early-fall growing seasons is needed. Glacier area changes in the Wind River Range were estimated for 42 glaciers using Landsat data from 1985 to 2005. The total surface area of the 42 glaciers was calculated to be 41.2 ± 11.7 km2 in 1985 and 30.8 ± 8.2 km2 in 2005, an average decrease of 25% over the 21-year period. Small glaciers experienced noticeably more area reduction than large glaciers. Of the 42 glaciers analyzed, 17 had an area greater than 0.5 km2 in 1985, while 25 were smaller than 0.5 km2 in 1985. The glaciers with a surface area less than 0.5 km2 experienced an average surface area loss (as a fraction of 1985 surface area) of 43%, while the larger glaciers (greater than 0.5 km2) experienced an average surface area loss of 22%. Applying area-volume scaling relationships for glaciers, the volume loss was estimated to be 409 × 10^6 m3 over the 21-year period, which results in an estimated 4% to 10% contribution to warm-season (July-October) streamflow.
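Area-volume scaling can be sketched as V = c·A^γ per glacier. The coefficients below (c = 0.0365 km^(3-2γ), γ = 1.375) are common literature values for glaciers, not necessarily those used in the study; note the relation must be applied glacier by glacier, which is why a per-glacier sum differs from scaling the aggregate area and why this sketch does not reproduce the paper's 409 × 10^6 m3 figure.

```python
def volume_km3(area_km2, c=0.0365, gamma=1.375):
    """Glacier volume from area via power-law scaling (assumed coefficients)."""
    return c * area_km2 ** gamma

# Example for a single hypothetical glacier shrinking from 1.0 to 0.6 km^2:
dv_km3 = volume_km3(1.0) - volume_km3(0.6)
dv_m3 = dv_km3 * 1e9                      # km^3 -> m^3
```

Summing such per-glacier volume changes over all 42 glaciers yields the basin-wide meltwater volume that can then be compared with warm-season streamflow.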
Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 2
1977-02-01
Fragmentary excerpt: reliability estimating procedures are developed as a function of the number of operating cycles, with special cases (such as engine valves, injectors, and thrust chambers) where design approaches were varied; surviving index entries list failure modes for the external heaters of the HET thruster and for the injector (including trim orifice), namely plugging, fracture, and injector seal leak; a University of Southern California investigation of the feasibility of the Delphi technique for estimating risk analysis parameters is also referenced.
Hertel, Dirk
2009-01-01
In the emerging field of automotive vision, video capture is the critical front-end of driver assistance and active safety systems. Previous photospace measurements have shown that light levels in natural traffic scenes may contain an extremely wide intra-scene intensity range. This requires the camera to have a wide dynamic range (WDR) for it to adapt quickly to changing lighting conditions and to reliably capture all scene detail. Multiple-slope CMOS technology offers a cost-effective way of adaptively extending dynamic range by partially resetting (recharging) the CMOS pixel once or more often within each frame time. This avoids saturation and leads to a response curve with piecewise linear slopes of progressively increasing compression. It was observed that the image quality from multiple-slope image capture is strongly dependent on the control (height and time) of each reset barrier. As compression and thus dynamic range increase there is a trade-off against contrast and detail loss. Incremental signal-to-noise ratio (iSNR) is proposed in ISO 15739 for determining dynamic range. Measurements and computer simulations revealed that the observed trade-off between WDR extension and the loss of local detail could be explained by a drop in iSNR at each reset point. If a reset barrier is not optimally placed then iSNR may drop below the detection limit so that an 'iSNR hole' appears in the dynamic range. Thus ISO 15739 iSNR has gained extended utility: it not only measures the dynamic range limits but also defines dynamic range as the intensity range where detail detection is reliable. It has become a critical criterion when designing adaptive barrier control algorithms that maximize dynamic range while maintaining the minimum necessary level of detection reliability.
Sathyachandran, S. K.; Roy, D. P.; Boschetti, L.
2014-12-01
The Fire Radiative Power (FRP) [MW] is a measure of the rate of biomass combustion and can be retrieved from ground-based and satellite observations using middle infra-red measurements. The temporal integral of FRP is the Fire Radiative Energy (FRE) [MJ], which is related linearly to the total biomass consumption and thus to pyrogenic emissions. Satellite-derived biomass consumption and emissions estimates have conventionally been derived by computing the summed total FRP, or the average FRP (arithmetic average of FRP retrievals), over spatial geographic grids for fixed time periods. These two methods are prone to estimation bias, especially under irregular sampling conditions such as provided by polar-orbiting satellites, because the FRP can vary rapidly in space and time as a function of the fire behavior. Linear temporal integration of FRP, taking into account when the FRP values were observed and using the trapezoidal rule for numerical integration, has been suggested as an alternative FRE estimation method. In this study FRP data measured rapidly with a dual-band radiometer over eight prescribed fires are used to compute eight FRE values using the sum, mean and trapezoidal estimation approaches under a variety of simulated irregular sampling conditions. The estimated values are compared to biomass consumed measurements for each of the eight fires to provide insights into which method provides more accurate and precise biomass consumption estimates. The three methods are also applied to continental MODIS FRP data to study their differences using polar-orbiting satellite data. The research findings indicate that trapezoidal FRP numerical integration provides the most reliable estimator.
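The three FRE estimators compared above can be sketched on an irregularly sampled FRP series; the observation times and FRP values below are illustrative, not measurements from the prescribed fires.

```python
import numpy as np

t_s = np.array([0.0, 600.0, 900.0, 2100.0, 3600.0])    # observation times [s]
frp_mw = np.array([50.0, 120.0, 90.0, 30.0, 5.0])      # FRP retrievals [MW]

fre_sum = float(frp_mw.sum())                          # "summed FRP" proxy (units depend on grid/period)
fre_mean = float(frp_mw.mean()) * (t_s[-1] - t_s[0])   # mean FRP x duration [MJ]
fre_trap = float(np.sum(0.5 * (frp_mw[1:] + frp_mw[:-1]) * np.diff(t_s)))  # trapezoidal integral [MJ]
```

Because the trapezoidal estimator weights each FRP value by the time it actually represents, it is far less sensitive to clustered or gappy sampling than the sum or the mean-times-duration estimators, which is the effect the study quantifies.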
Alessandro Barbiero
2014-01-01
In many statistical applications it is often necessary to obtain an interval estimate for an unknown proportion or probability or, more generally, for a parameter whose natural space is the unit interval. The customary approximate two-sided confidence interval for such a parameter, based on some version of the central limit theorem, is known to be unsatisfactory when its true value is close to zero or one or when the sample size is small. A possible way to tackle this issue is to transform the data through a proper function that makes the approximation to the normal distribution less coarse. In this paper, we study the application of several such transformations to the estimation of the reliability parameter for stress-strength models, with a special focus on the Poisson distribution. From this work, some practical hints emerge on which transformations may more efficiently improve standard confidence intervals, and in which scenarios.
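The transformation idea can be sketched with the familiar logit example: build the Wald interval on the transformed scale, then back-transform, which keeps the interval inside (0, 1). This illustrates the general technique rather than the paper's specific stress-strength transformations.

```python
import math

def logit_ci(successes, n, z=1.96):
    """Approximate two-sided CI for a proportion via the logit transformation."""
    p = successes / n
    eta = math.log(p / (1.0 - p))                 # logit transform of the estimate
    se = math.sqrt(1.0 / (n * p * (1.0 - p)))     # delta-method standard error
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))    # inverse logit (back-transform)
    return inv(eta - z * se), inv(eta + z * se)

lo, hi = logit_ci(9, 10)   # stays strictly inside (0, 1) even near the boundary
```

The untransformed Wald interval for 9/10 would exceed 1; the back-transformed interval cannot, which is precisely the kind of improvement the paper studies for reliability parameters.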
O'Rourke, Justin J.F.; Adams, William H.; Duff, Kevin; Byars, Joanne; Nopoulos, Peg; Paulsen, Jane S.; Beglinger, Leigh J.
2011-01-01
The estimation of premorbid abilities is an essential part of a neuropsychological evaluation, especially in neurodegenerative conditions. Although word pronunciation tests are one standard method for estimating the premorbid level, research suggests that these tests may not be valid in neurodegenerative diseases. Therefore, the current study sought to examine two estimates of premorbid intellect, the Wide Range Achievement Test (WRAT) Reading subtest and the Barona formula, in 93 patients with mild to moderate Huntington's disease (HD) to determine their utility and to investigate how these measures relate to signs and symptoms of disease progression. In 89% of participants, WRAT estimates were below the Barona estimates. WRAT estimates were related to worsening memory and motor functioning, whereas the Barona estimates had weaker relationships. Neither estimate was related to depression or functional capacity. Irregular word reading tests appear to decline with HD progression, whereas estimation methods based on demographic factors may be more robust but overestimate premorbid functioning. PMID:21147861
Wouter D Weeda
The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate the single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise, as it models data with a parsimonious set of basis functions. Finally, the method is very fast, since it is based on an iterative linear least-squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
ErpICASSO: a tool for reliability estimates of independent components in EEG event-related analysis.
Artoni, Fiorenzo; Gemignani, Angelo; Sebastiani, Laura; Bedini, Remo; Landi, Alberto; Menicucci, Danilo
2012-01-01
Independent component analysis and blind source separation methods are steadily gaining popularity for separating individual brain and non-brain source signals mixed by volume conduction in electroencephalographic data. Despite the advancements on these techniques, determining the number of embedded sources and their reliability are still open issues. In particular to date no method takes into account trial-to-trial variability in order to provide a reliability measure of independent components extracted in Event Related Potentials (ERPs) studies. In this work we present ErpICASSO, a new method which modifies a data-driven approach named ICASSO for the analysis of trials (epochs). In addition to ICASSO the method enables the user to estimate the number of embedded sources, and provides a quality index of each extracted ERP component by combining trial-to-trial bootstrapping and CCA projection. We applied ErpICASSO on ERPs recorded from 14 subjects presented with unpleasant and neutral pictures. We separated potentials putatively related to different systems and identified the four primary ERP independent sources. Standing on the confidence interval estimated by ErpICASSO, we were able to compare the components between neutral and unpleasant conditions. ErpICASSO yielded encouraging results, thus providing the scientific community with a useful tool for ICA signal processing whenever dealing with trials recorded in different conditions.
Mohammed, Rezwana Begum; Kalyan, V Siva; Tircouveluri, Saritha; Vegesna, Goutham Chakravarthy; Chirla, Anil; Varma, D Maruthi
2014-07-01
Determining the age of a person in the absence of documentary evidence of birth is essential for legal and medico-legal purposes. The Fishman method of skeletal maturation is widely used for this purpose; however, the reliability of this method for people of all geographic locations is not well established. In this study, we assessed various stages of carpal and metacarpal bone maturation and tested the reliability of the Fishman method of skeletal maturation to estimate age in a South Indian population. We also evaluated the correlation between chronological age (CA) and the predicted age based on the Fishman method. Digital right hand-wrist radiographs of 330 individuals aged 9-20 years were obtained, and the skeletal maturity stage for each subject was determined using the Fishman method. The skeletal maturation indicator scores were obtained and analyzed with reference to CA and sex. Data were analyzed using the SPSS software package (version 12, SPSS Inc., Chicago, IL, USA). The study subjects had a tendency toward late maturation, with the mean skeletal age (SA) estimated being significantly lower (P maturity stages. Nevertheless, significant correlation was observed between SA and CA for males (r = 0.82) and females (r = 0.85). Interestingly, female subjects were observed to be advanced in SA compared with males. The Fishman method of skeletal maturation can be used as an alternative tool for the assessment of the mean age of an individual of unknown CA in South Indian children.
Kyeremateng, Samuel O; Pudlas, Marieke; Woehrle, Gerd H
2014-09-01
A novel empirical analytical approach for estimating the solubility of crystalline drugs in polymers has been developed. The approach utilizes a combination of differential scanning calorimetry measurements and a reliable mathematical algorithm to construct the complete solubility curve of a drug in a polymer. Compared with existing methods, this novel approach reduces the required experimentation time and amount of material by approximately 80%. The predictive power and relevance of such solubility curves in the development of amorphous solid dispersion (ASD) formulations are shown by applications to a number of hot-melt extrudate formulations of ibuprofen and naproxen in Soluplus. On the basis of the temperature-drug load diagrams using the solubility curves and the glass transition temperatures, the physical stability of the extrudate formulations was predicted and checked by placing the formulations on real-time stability studies. An analysis of the stability samples with microscopy, thermal, and imaging techniques confirmed the predicted physical stability of the formulations. In conclusion, this study presents a fast and reliable approach for estimating the solubility of crystalline drugs in polymer matrices. This powerful approach can be applied by formulation scientists as an early and convenient tool in designing ASD formulations for maximum drug load and physical stability.
Bustos, Cesar; Sandeen, Ben; Chennakesavalu, Shriram; Littenberg, Tyson; Farr, Ben; Kalogera, Vassiliki
2016-01-01
Gravitational Waves (GWs) were predicted by Einstein's Theory of General Relativity as ripples in space-time that propagate outward from a source. Strong GW sources consist of compact binary systems such as Binary Neutron Stars (BNS) or Binary Black Holes (BBHs) that experience orbital shrinkage (inspiral) and eventual merger. Indirect evidence for the existence of GWs has been obtained through radio pulsar studies in BNS systems. The study of BBHs and other compact objects has limitations in the electromagnetic spectrum; therefore, direct detections of GWs will open a new window into their nature. The effort targeting direct GW detection is anchored on the development of a detector known as Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory). Although detecting GW sources represents an anticipated breakthrough in physics, making GW astrophysics a reality critically relies on our ability to determine and measure the physical parameters associated with GW sources. We use Markov Chain Monte Carlo (MCMC) simulations on high-performance computing clusters for parameter estimation on high-dimensional spaces (GW sources - 15 parameters). The quality of GW parameter estimation greatly depends on having the best possible knowledge of the expected waveform. Unfortunately, BBH GW production is very complex and our best waveforms are not valid across the full parameter space. With large-scale simulations we examine quantitatively the limitations of these waveforms in terms of extracting the astrophysical properties of BBH GW sources. We find that current waveforms are inadequate for BBHs of unequal masses and demonstrate that improved waveforms are critically needed.
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine
2013-01-01
The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree–gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations. PMID:24124449
Online Reliable Peak Charge/Discharge Power Estimation of Series-Connected Lithium-Ion Battery Packs
Bo Jiang
2017-03-01
The accurate peak power estimation of a battery pack is essential to the power-train control of electric vehicles (EVs). It helps to evaluate the maximum charge and discharge capability of the battery system, and thus to optimally control the power-train system to meet the requirements of acceleration, gradient climbing and regenerative braking while achieving a high energy efficiency. A novel online peak power estimation method for series-connected lithium-ion battery packs is proposed, which considers the influence of cell difference on the peak power of the battery packs. A new parameter identification algorithm based on adaptive ratio vectors is designed to identify online the parameters of each individual cell in a series-connected battery pack. The ratio vectors reflecting cell difference are deduced strictly based on the analysis of battery characteristics. Based on the online parameter identification, peak power estimation considering cell difference is further developed. Validation experiments in different battery aging conditions and with different current profiles have been implemented to verify the proposed method. The results indicate that the ratio-vector-based identification algorithm can achieve the same accuracy as repetitive RLS (recursive least squares) based identification while evidently reducing the computation cost, and that the proposed peak power estimation method is more effective and reliable for series-connected battery packs due to the consideration of cell difference.
How reliable is estimation of glomerular filtration rate at diagnosis of type 2 diabetes?
Chudleigh, Richard A; Dunseath, Gareth; Evans, William; Harvey, John N; Evans, Philip; Ollerton, Richard; Owens, David R
2007-02-01
The Cockcroft-Gault (CG) and Modification of Diet in Renal Disease (MDRD) equations have previously been recommended to estimate glomerular filtration rate (GFR). We compared both estimates with true GFR, measured by the isotopic ⁵¹Cr-EDTA method, in newly diagnosed, treatment-naïve subjects with type 2 diabetes. A total of 292 mainly normoalbuminuric (241 of 292) subjects were recruited. Subjects were classified as having mild renal impairment (group 1, GFR <90 ml/min per 1.73 m²) or group 2 (GFR ≥90 ml/min per 1.73 m²). Estimated GFR (eGFR) was calculated by the CG and MDRD equations. Blood samples drawn at 44, 120, 180, and 240 min after administration of 1 MBq of ⁵¹Cr-EDTA were used to measure isotopic GFR (iGFR). For subjects in group 1, mean (±SD) iGFR was 83.8 ± 4.3 ml/min per 1.73 m². eGFR was 78.0 ± 16.5 or 73.7 ± 12.0 ml/min per 1.73 m² using the CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -11.1 to -0.6 using CG and -14.4 to -7.0 using MDRD. Ninety-five percent limits of agreement (mean bias ± 2 SD) were -37.2 to 25.6 and -33.1 to 11.7, respectively. In group 2, iGFR was 119.4 ± 20.3 ml/min per 1.73 m². eGFR was 104.4 ± 26.3 or 92.3 ± 18.7 ml/min per 1.73 m² using the CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -17.4 to -12.5 using CG and -29.1 to -25.1 using MDRD. Ninety-five percent limits of agreement were -54.4 to 24.4 and -59.5 to 5.3, respectively. In newly diagnosed type 2 diabetic patients, particularly those with a GFR ≥90 ml/min per 1.73 m², both the CG and MDRD equations significantly underestimate iGFR. This highlights a limitation in the use of eGFR in the majority of diabetic subjects outside the setting of chronic kidney disease.
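For reference, the Cockcroft-Gault equation used above, and the "mean bias ± 2 SD" limits of agreement the abstract reports, can be sketched as follows (a minimal illustration; the example numbers in the tests are invented, not from the study):

```python
import numpy as np

def cockcroft_gault(age, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate (ml/min)."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def limits_of_agreement(estimated, measured):
    """Mean bias and 95% limits of agreement (mean bias +/- 2 SD),
    as used in the abstract above."""
    diff = np.asarray(estimated, float) - np.asarray(measured, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 2 * sd, bias + 2 * sd
```

Note the unit caveat: CG estimates creatinine clearance in ml/min and must be normalized to body surface area before comparison with iGFR in ml/min per 1.73 m².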
Kun Qian
2016-10-01
Full Text Available When service robots work in human environments, unexpected and unknown moving people may deteriorate the convergence of robot localization or even cause localization failure if the environment is crowded. In this article, a multisensor observation localizability estimation method is proposed and implemented to support reliable robot localization in unstructured environments with low-cost sensors. The contribution of the approach is a strategy that combines noisy laser range-finder data and RGB-D data to estimate the dynamic localizability matrix in a probabilistic framework. By aligning the two sensor frames, the unreliable part of the laser readings that hits unexpected moving people is quickly extracted according to the output of an RGB-D-based human detector, so that the influence of unexpected moving people on laser observations can be explicitly factored out. The method is easy to implement and highly desirable for ensuring robustness and real-time performance during long-term operation in populated environments. Comparative experiments confirm the effectiveness and reliability of the proposed method in improving localization accuracy and reliability in dynamic environments.
CO₂ recycling by plants: how reliable is the carbon isotope estimation?
Siegwolf, R.T.W.; Saurer, M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Koerner, C. [Basel Univ., Basel (Switzerland)
1997-06-01
In the study of plant carbon relations, the amount of respiratory losses from the soil was estimated by determining the gradient of the stable isotope ¹³C with increasing plant canopy height. According to the literature, 8-26% of the CO₂ released in forests by soil and plant respiratory processes is reassimilated (recycled) by photosynthesis during the day. Our own measurements, however, which we conducted in grassland, showed diverging results, ranging from no indication of carbon recycling to a considerable δ¹³C gradient suggesting a high carbon recycling rate. The roles of other factors that also influence the δ¹³C in a canopy, such as air humidity and irradiation, are discussed. (author) 3 figs., 4 refs.
Nakayachi, Kazuya; Watabe, Motoki
2005-08-01
This research examined the effects of providing a monitoring and self-sanctioning system, called "hostage posting" in economics, on the improvement of trustworthiness. We conducted two questionnaire-type experiments to compare the trust-improving effects among three conditions: (a) voluntary provision of a monitoring and self-sanctioning system by the manager, (b) imposed provision, and (c) achievement of satisfactory management without any such provision. A total of 561 undergraduate students participated in the experiments. Results revealed that perceived integrity and competence were improved to almost the same level in conditions (a) and (c), whereas they were not improved in condition (b). Consistent with our previous research, these results showed that voluntary hostage posting improved trustworthiness as much as good performance did. The estimated necessity of the system, however, did not differ across conditions. Implications for management practice and directions for future research are discussed.
Are there reliable methods to estimate the nuclear orientation of Seyfert galaxies?
Marin, F
2016-01-01
Orientation, together with accretion and evolution, is one of the three main drivers in the Grand Unification of Active Galactic Nuclei (AGN). Because these powerful sources are unresolved, determining their true inclination is always difficult and indirect, yet it remains a vital clue for apprehending the numerous, panchromatic and complex spectroscopic features we detect. Only about a hundred inclinations have been derived so far; in this context, can we be sure that we measure the true orientation of AGN? To answer this question, four methods for estimating the nuclear inclination of AGN are investigated and compared to inclination-dependent observables (hydrogen column density, Balmer linewidth, optical polarization, and flux ratios within the IR and relative to X-rays). Among these orientation indicators, the method developed by Fisher, Crenshaw, Kraemer et al., which maps and models the radial velocities of the [O iii] emission region in AGN, is the most successful. The [O iii]-mapping technique shows highly statistically...
On the reliability of direct Rayleigh-wave estimation from multicomponent cross-correlations
Xu, Zongbo; Mikesell, T. Dylan
2017-09-01
Seismic interferometry is routinely used to image and characterize underground geology. The vertical component cross-correlations (CZZ) are often analysed in this process; although one can also use radial component and multicomponent cross-correlations (CRR and CZR, respectively), which have been shown to provide a more accurate Rayleigh-wave Green's function than CZZ when sources are unevenly distributed. In this letter, we identify the relationship between the multicomponent cross-correlations (CZR and CRR) and the Rayleigh-wave Green's functions to show why CZR and CRR are less sensitive than CZZ to non-stationary phase source energy. We demonstrate the robustness of CRR with a synthetic seismic noise data example. These results provide a compelling reason as to why CRR should be used to estimate the dispersive characteristics of the direct Rayleigh wave with seismic interferometry when the signal-to-noise ratio is high.
Klarskov, Carina Kirstine; Klarskov, Mikkel Buster; Hasseldam, Henrik
2016-01-01
Background: Stroke is the second most common cause of death worldwide. Only one treatment for acute ischemic stroke is currently available, thrombolysis with rt-PA, but its use is limited. Many efforts have been invested in finding additive treatments, without success. A multitude of reasons for the translational problems from mouse experimental stroke to clinical trials probably exists, including infarct size estimations around the peak time of edema formation. Furthermore, edema is a more prominent feature of stroke in mice than in humans, because of the tendency to produce larger infarcts with more substantial edema. Purpose: This paper will give an overview of previous studies of experimental mouse stroke and correlate survival time to peak time of edema formation. Furthermore, it investigates whether the included studies corrected the infarct measurements for edema...
A Framework for Estimating Piping Reliability Subject to Corrosion Under Insulation
Mokhtar Ainul Akmar
2014-07-01
Full Text Available Corrosion under insulation (CUI) is one of the serious damage mechanisms experienced by insulated piping systems. Optimizing the inspection schedule for insulated piping systems is a major challenge faced by inspection and corrosion engineers, since CUI takes place beneath the insulation, which makes detection and prediction of the damage mechanism harder. In recent years, the risk-based inspection (RBI) approach has been adopted to optimize CUI inspection plans. The RBI approach is based on risk, the product of the likelihood of a failure and the consequence of that failure. The likelihood analysis usually follows either qualitative or semi-qualitative methods, precluding its use for quantitative risk assessment. This paper presents a framework for quantitatively estimating the likelihood of failure due to CUI based on the type of data available.
Lim, Jin-Yong; Kim, Tae-Ho; Lee, Jung-Seok
2015-10-01
[Purpose] The purpose of this study was to compare the reliability of passive range of motion (PROM) measurements of shoulder horizontal adduction (SHA), made using a smartphone for the assessment of posterior shoulder tightness (PST), between the side-lying and supine test positions. [Subjects and Methods] Forty-seven subjects (mean age, 24.9 ± 3.5 years) without shoulder pathology were included in this study. Intra-rater and inter-rater reliabilities were determined using intraclass correlation coefficients. The SHA PROM of each subject's dominant shoulder was measured using a smartphone by two investigators in two positions: the standard supine position, and a side-lying position on the tested side. [Results] The intra-rater reliability of the supine measurements was fair to good (ICC3,1 = 0.72-0.89), whereas that of the side-lying measurements was excellent (ICC3,1 = 0.95-0.97). The inter-rater reliability of the supine measurements was fair (ICC2,2 = 0.79), whereas that of the side-lying measurements was excellent (ICC2,2 = 0.94). [Conclusion] These results suggest that for healthy subjects, SHA measurements made with a smartphone in the side-lying position have superior intra-rater and inter-rater reliability compared to the standard supine position.
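Intraclass correlation coefficients such as the ICC(3,1) quoted above come from a two-way ANOVA decomposition of a subjects-by-raters table. A minimal sketch of the Shrout and Fleiss consistency form (the example ratings in the tests are invented):

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, single measures, consistency
    (Shrout & Fleiss), from an (n subjects x k raters) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because this is the consistency form, a constant offset between raters does not lower the coefficient; the ICC(2,2) also cited above additionally penalizes such rater bias and averages over k raters.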
ESTIMATION OF VERTICAL DEFLECTIONS IN CONCRETE BEAMS THROUGH DIGITAL CLOSE RANGE PHOTOGRAMMETRY
I. Detchev
2012-09-01
Full Text Available Deformation monitoring, and in general structural health monitoring, of civil infrastructure systems is important in terms of both safety and serviceability. Traditionally, large structures have been monitored using surveying techniques, while fine-scale monitoring of structural components has been done with geotechnical instrumentation. This paper reviews the advantages and disadvantages of using remote sensing methods, such as terrestrial laser scanning and digital close range photogrammetry, for the purposes of precise 3D reconstruction and the estimation of deflections in structural materials. It is also shown how a low-cost setup of multiple digital cameras and projectors can be used to monitor concrete beams subjected to different loading conditions by a hydraulic actuator. The photogrammetric system used does not require any physical targets other than those needed to establish the relative orientation between the involved cameras. The setup was tested in two experiments, and the beam deflections resulting from the photogrammetric system were compared to those from a set of one-dimensional laser transducers and a terrestrial laser scanner. The experiments proved that it was possible to detect sub-millimetre level deformations given the equipment used and the geometry of the setup.
Phan, An; Ng, Brian; Tran, Hai-Tan
2016-07-01
In automatic target recognition systems based on the use of inverse synthetic aperture radar (ISAR) images, it is essential to obtain unbiased and accurate scaled two-dimensional target images in the range-cross range domain. To accomplish this, the modulus of the target effective rotation vector, which is generally unknown for noncooperative targets, must be estimated. This letter proposes an efficient method for estimating the cross-range scaling factor and significantly improving cross-range resolution based on the second-order local polynomial Fourier transform. The estimation requires solving a series of one-dimensional optimizations of a kurtosis objective. Simulations show the proposed approach to be effective and able to accurately estimate the scaling factor in the presence of noise.
Reliable estimation of adsorption isotherm parameters using adequate pore size distribution
Husseinzadeh, Danial; Shahsavand, Akbar [Ferdowsi University of Mashhad, Mashhad (Iran, Islamic Republic of)
2015-05-15
The equilibrium adsorption isotherm has a crucial effect on various characteristics of the solid adsorbent (e.g., pore volume, bulk density, surface area, pore geometry). A historical paradox exists in the conventional estimation of adsorption isotherm parameters. Traditionally, the total amount of adsorbed material (the total adsorption isotherm) has been considered equivalent to the local adsorption isotherm. This assumption is only valid when the corresponding pore size or energy distribution (PSD or ED) of the porous adsorbent can be successfully represented by a Dirac delta function. In practice, the actual PSD (or ED) is far from this assumption, and the traditional method for predicting local adsorption isotherm parameters leads to serious errors. Up to now, the powerful combination of inverse theory and the linear regularization technique has failed drastically when used to extract the PSD from real adsorption data. For this reason, all previous research used synthetic data, being unable to extract a proper PSD from the measured total adsorption isotherm with unrealistic local adsorption isotherm parameters. We propose a novel approach that can successfully provide the correct values of local adsorption isotherm parameters without any a priori or unrealistic assumptions. Two distinct methods are suggested, and several illustrative (synthetic and real experimental) examples are presented to clearly demonstrate the effectiveness of the newly proposed methods in computing the correct values of local adsorption isotherm parameters. The impressive performance of the so-called Iterative and Optima methods in extracting the correct PSD is validated using several experimental data sets.
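The paradox described above stems from the adsorption integral equation: the measured total isotherm is the local isotherm averaged over the pore size or energy distribution, not the local isotherm itself. A toy numerical illustration with a Langmuir local isotherm and a hypothetical Gaussian affinity distribution (none of these numbers come from the paper):

```python
import numpy as np

def langmuir(P, b):
    """Local Langmuir isotherm (fractional coverage) at pressure P, affinity b."""
    return b * P / (1.0 + b * P)

# Hypothetical Gaussian distribution of local affinity constants b,
# standing in for the pore size / energy distribution (PSD or ED).
b_grid = np.linspace(0.1, 2.0, 400)
db = b_grid[1] - b_grid[0]
f = np.exp(-((b_grid - 1.0) ** 2) / 0.1)
f /= f.sum() * db                                # normalize to unit area

def total_isotherm(P):
    """Total isotherm = integral of local isotherm weighted by the distribution."""
    return (langmuir(P, b_grid) * f).sum() * db
```

Here `total_isotherm(1.0)` falls below the Langmuir value at the mean affinity, which is exactly why fitting the local isotherm directly to total-isotherm data (the Dirac-delta assumption) biases the recovered parameters.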
Estimation of AM fungal colonization - Comparability and reliability of classical methods.
Füzy, Anna; Biró, Ibolya; Kovács, Ramóna; Takács, Tünde
2015-12-01
The characterization of mycorrhizal status in hosts can be a good indicator of symbiotic associations in inoculation experiments or in ecological research. The most common microscopy-based observation methods, namely (i) the gridline intersect method, (ii) the magnified intersections method and (iii) the five-class system of Trouvelot, were tested to find the simplest, most easily executable, effective and objective ones, and their appropriate parameters, for characterizing mycorrhizal status. In a pot experiment, the host plant white clover (Trifolium repens L.) was inoculated with an AM fungus (BEG144; syn. Rhizophagus intraradices) in pumice substrate to monitor AMF colonization properties during host growth. Eleven (seven classical and four new) colonization parameters were estimated by three researchers at twelve sampling times during plant growth. Variations among methods, observers, parallels and individual plants were determined and analysed to select the most appropriate parameters and sampling times for monitoring. The comparability of the parameters of the three methods was also tested. As a result of the experiment, classical parameters were selected for hyphal colonization: colonization frequency in the early stage or colonization density in the later period, and arbuscular richness of the roots. A new parameter was recommended to determine the vesicle and spore content of colonized roots at later stages of symbiosis.
Bayesian Regression and Neuro-Fuzzy Methods Reliability Assessment for Estimating Streamflow
Yaseen A. Hamaamin
2016-07-01
Full Text Available Accurate and efficient estimation of streamflow in a watershed's tributaries is a prerequisite for viable water resources management. This study couples process-driven and data-driven methods of streamflow forecasting as a more efficient and cost-effective approach to water resources planning and management. Two data-driven methods, Bayesian regression and the adaptive neuro-fuzzy inference system (ANFIS), were tested separately as faster alternatives to a calibrated and validated Soil and Water Assessment Tool (SWAT) model for predicting streamflow in the Saginaw River Watershed of Michigan. For the data-driven modeling process, four structures were assumed and tested: general, temporal, spatial, and spatiotemporal. Results showed that both Bayesian regression and ANFIS can replicate global (watershed) and local (subbasin) results similar to a calibrated SWAT model. At the global level, Bayesian regression and ANFIS model performance were satisfactory based on Nash-Sutcliffe efficiencies of 0.99 and 0.97, respectively. At the subbasin level, the Bayesian regression and ANFIS models were satisfactory for 155 and 151 out of 155 subbasins, respectively. Overall, the most accurate method was a spatiotemporal Bayesian regression model that outperformed the other models at global and local scales. However, all ANFIS models also performed satisfactorily at both scales.
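The Nash-Sutcliffe efficiency used above to judge the surrogate models is a simple skill score; a minimal sketch (the test data are invented):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; 0.0 means no better than predicting the mean."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

An NSE of 0.99, as reported for the Bayesian regression model, therefore means the surrogate explains essentially all of the variance the SWAT model produces at the watershed outlet.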
Almeida, Mariana R; Fidelis, Carlos H V; Barata, Lauro E S; Poppi, Ronei J
2013-12-15
The Amazon tree Aniba rosaeodora Ducke (rosewood) provides an essential oil valuable to the perfume industry, but after decades of predatory extraction it is at risk of extinction. Extraction of the essential oil from wood requires cutting the tree, so the study of oil extracted from the leaves is important as a sustainable alternative. The goal of this study was to test the applicability of Raman spectroscopy and Partial Least Squares Discriminant Analysis (PLS-DA) as a means to classify the essential oil extracted from different parts (wood, leaves and branches) of the Brazilian tree A. rosaeodora. For the development of the classification models, the Raman spectra were split into two sets: training and test. The value of the limit that separates the classes was calculated from the distribution of the training samples, in such a way that the classes are divided with the lowest probability of incorrect classification for future estimates. The best model presented sensitivity and specificity of 100%, and predictive accuracy and efficiency of 100%. These results give an overall vision of the model's behavior but no information about individual samples; therefore, a confidence interval for the classification of each sample was also calculated using the bootstrap resampling technique. The methodology developed has the potential to be an alternative to standard procedures for oil analysis, and it can be employed as a screening method, since it is fast, non-destructive and robust.
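The per-sample confidence intervals mentioned above rely on bootstrap resampling. A minimal percentile-bootstrap sketch (generic, not the authors' PLS-DA pipeline; the replicate data are invented):

```python
import numpy as np

def bootstrap_ci(values, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(values, float)
    stats = np.array([stat(rng.choice(vals, size=vals.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical replicate measurements of one quantity
lo, hi = bootstrap_ci([4.9, 5.1, 5.0, 4.8, 5.2, 5.0])
```

In the classification setting, the resampled statistic would be the sample's discriminant score rather than a mean, giving an interval on which side of the class boundary each sample falls.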
Reliable estimation of performance of explosives without considering their heat contents.
Keshavarz, Mohammad Hossein
2007-08-25
In this paper, a new approach is introduced to calculate the detonation pressure of a large class of explosives from elemental composition and specific structural groups rather than from their heats of formation. It is shown how the loading density, atomic composition and some structural parameters can be integrated into an empirical formula for predicting the detonation pressure of pure explosives and explosive formulations over a wide range of loading densities. The results show good agreement with experimental values, with deviations roughly within experimental error. The calculated values of the new method are also compared with results computed by a complex computer code using the BKWR and BKWS equations of state. The root-mean-square (rms) deviations of the predicted detonation pressures are 6.5, 11.7 and 7.4 kbar for the new method and the BKWR and BKWS equations of state, respectively.
Pearson, E.; Smith, M. W.; Klaar, M. J.; Brown, L. E.
2017-09-01
High resolution topographic surveys such as those provided by Structure-from-Motion (SfM) contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-metre scale topographic variability (or 'surface roughness') to sediment grain size by deriving empirical relationships between the two. In fluvial applications, such relationships permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing improved data to drive three-dimensional hydraulic models, allowing rapid geomorphic monitoring of sub-reach river restoration projects, and enabling more robust characterisation of riverbed habitats. However, comparison of previously published roughness-grain-size relationships shows substantial variability between field sites. Using a combination of over 300 laboratory and field-based SfM surveys, we demonstrate the influence of inherent survey error, irregularity of natural gravels, particle shape, grain packing structure, sorting, and form roughness on roughness-grain-size relationships. Roughness analysis from SfM datasets can accurately predict the diameter of smooth hemispheres, though natural, irregular gravels result in a higher roughness value for a given diameter and different grain shapes yield different relationships. A suite of empirical relationships is presented as a decision tree which improves predictions of grain size. By accounting for differences in patch facies, large improvements in D50 prediction are possible. SfM is capable of providing accurate grain size estimates, although further refinement is needed for poorly sorted gravel patches, for which c-axis percentiles are better predicted than b-axis percentiles.
Evaluation of Four Encryption Algorithms for Viability, Reliability and Performance Estimation
J. B. Awotunde
2016-12-01
Full Text Available Data and information in storage, in transit or during processing are found in various computers and computing devices with a wide range of hardware specifications. Cryptography is the practice of using codes to encrypt and decrypt data. It enables one to store sensitive information, or transmit it across computer networks in a more secure way, so that it cannot be read by anyone except the intended receiver; it also allows secure storage of sensitive data on any computer. As an approach to computer security, cryptography comes at a cost in terms of resource utilization, such as time, memory and CPU time, which in some cases may not be abundant enough to achieve the objective of protecting data. This work examined the memory usage, key sizes, CPU utilization time and encryption speed of the four algorithms, to determine the amount of computer resources each expends and how long each takes to complete its task. Results show that the key length of a cryptographic algorithm is proportional to its resource utilization in most cases, as observed for the key lengths of the Blowfish, AES, 3DES and DES algorithms, respectively. Further research can be carried out to determine the power utilization of each of these algorithms.
Jakobsen, Thomas Linding; Christensen, Malene; Christensen, Stine Sommer
2010-01-01
BACKGROUND AND PURPOSE: Two of the most utilized outcome measures for assessing knee joint range of motion (ROM) and intra-articular effusion are goniometry and circumference, respectively. Neither goniometry nor circumference of the knee joint has been examined for both intra-tester and inter-tester...
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
2017-07-15
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
Kapil Yadav
2015-01-01
Full Text Available Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, based on 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and inter-assay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt was 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Inter-operator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate, in the field setting and in different types of salt.
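The intra- and inter-assay imprecision figures quoted above are coefficients of variation over replicate measurements; a minimal sketch (the replicate values in the tests are invented):

```python
import numpy as np

def cv_percent(measurements):
    """Coefficient of variation (%), the usual imprecision metric:
    100 * sample SD / mean of replicate measurements."""
    x = np.asarray(measurements, float)
    return 100.0 * x.std(ddof=1) / x.mean()
```

A CV of about 2-6%, as reported for the iCheck Iodine device, means replicate readings scatter by only a few percent around their mean.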
Levillain, Joseph; Thongo M'Bou, Armel; Deleporte, Philippe; Saint-André, Laurent; Jourdan, Christophe
2011-07-01
Despite their importance for plant production, estimations of below-ground biomass and its distribution in the soil are still difficult and time consuming, and no single reliable methodology is available for different root types. To identify the best method for root biomass estimation, four methods with different labour requirements were tested at the same location. The four methods, applied in a 6-year-old Eucalyptus plantation in Congo, were based on different soil sampling volumes: auger (8 cm in diameter), monolith (25 × 25 cm quadrate), half Voronoi trench (1.5 m(3)) and a full Voronoi trench (3 m(3)), chosen as the reference method. With the reference method (0-1 m deep), fine-root biomass (FRB, diameter <2 mm) was estimated at 1.8 t ha(-1), medium-root biomass (MRB, diameter 2-10 mm) at 2.0 t ha(-1), coarse-root biomass (CRB, diameter >10 mm) at 5.6 t ha(-1) and stump biomass at 6.8 t ha(-1). Total below-ground biomass was estimated at 16.2 t ha(-1) (root:shoot ratio equal to 0.23) for this 800 trees ha(-1) eucalypt plantation density. The density of FRB was very high (0.56 t ha(-1)) in the top soil horizon (0-3 cm layer) and decreased greatly (0.3 t ha(-1)) with depth (50-100 cm). Without labour requirement considerations, no significant differences were found between the four methods for FRB and MRB; however, CRB was better estimated by the half and full Voronoi trenches. When labour requirements were considered, the most effective method was auger coring for FRB, whereas the half and full Voronoi trenches were the most appropriate methods for MRB and CRB, respectively. As CRB combined with stumps amounted to 78% of total below-ground biomass, a full Voronoi trench is strongly recommended when estimating total standing root biomass. Conversely, for FRB estimation, auger coring is recommended with a design pattern accounting for the spatial variability of fine-root distribution.
Thorson, James T.; Scheuerell, Mark D.; Shelton, Andrew O.
2015-01-01
1. Predicting and explaining the distribution and density of species is one of the oldest concerns in ecology. Species distributions can be estimated using geostatistical methods, which estimate a latent spatial variable explaining observed variation in densities, but geostatistical methods may be imprecise for species with low densities or few observations. Additionally, simple geostatistical methods fail to account for correlations in distribution among species and generally estimate such cross-correlations as a post hoc exercise. 2. We therefore present spatial factor analysis (SFA), a spatial...
On asymptotically optimal wavelet estimation of trend functions under long-range dependence
Beran, Jan; 10.3150/10-BEJ332
2012-01-01
We consider data-adaptive wavelet estimation of a trend function in a time series model with strongly dependent Gaussian residuals. Asymptotic expressions for the optimal mean integrated squared error and corresponding optimal smoothing and resolution parameters are derived. Due to adaptation to the properties of the underlying trend function, the approach shows very good performance for smooth trend functions while remaining competitive with minimax wavelet estimation for functions with discontinuities. Simulations illustrate the asymptotic results and finite-sample behavior.
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT (DECT)-based approach holds promise for reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was then determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, so the following results apply to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant contributor to uncertainty in the DECT approach was imaging uncertainty, followed by DECT modeling uncertainty. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
da Cruz, A C S; Couto, B C; Nascimento, I A; Pereira, S A; Leite, M B N L; Bertoletti, E; Zagatto, P
2007-05-01
In spite of the consideration that toxicity testing is a reduced approach to measure the effects of pollutants on ecosystems, early-life-stage (ELS) tests have evident ecological relevance because they reflect possible reproductive impairment of natural populations. The procedure and validation of the Crassostrea rhizophorae embryonic development test have shown that it meets the same precision as other U.S. EPA tests, where EC₅₀ is generally used as a toxicological endpoint. However, the recognition that EC₅₀ is not the best endpoint to assess contaminant effects led the U.S. EPA to recently suggest EC₂₅ as an alternative to estimate xenobiotic effects for pollution prevention. To provide reliability to the toxicological test results on C. rhizophorae embryos, the present work aimed to establish the critical effect level for this test organism, based on its reaction to reference toxicants, by using the statistical method proposed by Norberg-King (Inhibition Concentration, version 2.0). Oyster embryos were exposed to graded series of reference toxicants (ZnSO₄·7H₂O, AgNO₃, KCl, CdCl₂·H₂O, phenol, 4-chlorophenol and dodecyl sodium sulphate). Based on the obtained results, the critical value for the C. rhizophorae embryonic development test was estimated as EC₁₅. The present research enhances the emerging consensus that ELS test data would be adequate for estimating the chronic safe concentrations of pollutants in receiving waters. Based on recommended criteria and on the results of the present research, zinc sulphate and 4-chlorophenol have been pointed out, among the inorganic and organic compounds tested, as the best reference toxicants for the C. rhizophorae ELS test.
Dietz, Kelly R. [University of Minnesota, Department of Radiology, Minneapolis, MN (United States); Zhang, Lei [University of Minnesota, Biostatistical Design and Analysis Center, Minneapolis, MN (United States); Seidel, Frank G. [Lucile Packard Children's Hospital, Department of Radiology, Stanford, CA (United States)
2015-08-15
Prior to digital radiography it was possible for a radiologist to easily estimate the size of a patient on an analog film. Because variable magnification may be applied at the time of processing an image, it is now more difficult to visually estimate an infant's size on the monitor. Since gestational age and weight significantly impact the differential diagnosis of neonatal diseases and determine the expected size of kidneys or appearance of the brain by MRI or US, this information is useful to a pediatric radiologist. Although this information may be present in the electronic medical record, it is frequently not readily available to the pediatric radiologist at the time of image interpretation. To determine if there was a correlation between gestational age and weight of a premature infant with their transverse chest diameter (rib to rib) on admission chest radiographs. This retrospective study was approved by the institutional review board, which waived informed consent. The maximum transverse chest diameter outer rib to outer rib was measured on admission portable chest radiographs of 464 patients admitted to the neonatal intensive care unit (NICU) during the 2010 calendar year. Regression analysis was used to investigate the association between chest diameter and gestational age/birth weight. Quadratic term of chest diameter was used in the regression model. Chest diameter was statistically significantly associated with both gestational age (P < 0.0001) and birth weight (P < 0.0001). An infant's gestational age and birth weight can be reliably estimated by comparing a simple measurement of the transverse chest diameter on digital chest radiograph with the tables and graphs in our study. (orig.)
Lawes, R.
2016-12-01
The Australian grain-growing region is vast, producing some 25 million tonnes of wheat from latitudes -27 to -42 across soils, crops and climates that vary considerably. Predicting the area of individual crops is time consuming and currently done by survey, while yield estimates are derived from these areas and from information about grain receivables, with little pre-harvest information available to industry. The existing approach fails to provide reliable, timely, small-scale information about production. Similarly, previous attempts to predict yield using satellite-derived information rely on data collected through the existing systems to calibrate models. We have developed a crop productivity and yield model, called C-Store Crop, that uses remotely sensed vegetation indices along with site-based rainfall, radiation and temperature information. Model calibration using 3000 points derived from farmer-supplied yield maps for wheat, barley, canola and chickpea showed strong relationships (>70%) between modelled plant mass and observed crop yield at the paddock scale. C-Store Crop is being applied at 250 m and 25 m grid resolution. Farmer-supplied yield data were also used to train a combination of Radar and Landsat images collected while the crop is growing to discriminate between crop types. Landsat information alone was unable to discriminate legume and cereal crops, and problems such as cloud prevented accessing appropriate scenes. Inclusion of Radar information reduced errors of commission and omission. By combining the C-Store Crop model with remote estimates of crop type, we anticipate predicting crop type and crop yield with uncertainty estimates across the Australian continent.
L. Altarejos-García
2012-07-01
Full Text Available This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches to obtain estimates of the mean and variance of the uncertain flood parameters water depth and velocity. These parameters define the flood severity, a concept used for decision-making in the context of flood risk assessment. The method proposed is particularly useful when the complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as water depth and velocity is demonstrated for a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing for probability estimates of flood severity. An application of the method to the same river reach using a 2-D Shallow Water Equations (SWE) model is then performed. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and uncertainty in the extent of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it is the preferred approach.
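Rosenblueth's two-point idea is compact enough to sketch. The snippet below is a minimal Python illustration, not the paper's implementation: for n independent inputs, the performance function is evaluated at the 2^n corners μᵢ ± σᵢ with equal weights, yielding estimates of its first two moments. In the paper, the role of the toy function here is played by the HEC-RAS or SWE hydraulic model.

```python
from itertools import product

def rosenblueth_pem(g, means, stds):
    """Rosenblueth's two-point estimate: evaluate g at the 2^n corners
    mu_i +/- sigma_i (independent inputs, equal weights) and return the
    estimated mean and variance of g(X)."""
    n = len(means)
    vals = []
    for signs in product((-1.0, 1.0), repeat=n):
        x = [m + sign * sd for m, sign, sd in zip(means, signs, stds)]
        vals.append(g(x))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var

# Toy performance function: for a linear g, PEM reproduces the exact
# moments (mean 5.0, variance 2.0 here).
mean, var = rosenblueth_pem(lambda x: 2 * x[0] + x[1], [1.0, 3.0], [0.5, 1.0])
```

For nonlinear performance functions (the realistic case), the 2^n evaluations give only approximate moments, which is exactly the trade-off against Monte Carlo that the paper exploits.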
Li, Ming; Zhang, Peidong; Leng, Jianxing
2016-03-01
This article presents an improved autocorrelation function (ACF) regression method for estimating the Hurst parameter of a time series with long-range dependence (LRD) by using golden section search (GSS). We show that the present method is substantially more efficient than the conventional ACF regression method of H estimation. Our research uses fractional Gaussian noise as a data case, but the method introduced is applicable to time series with LRD in general.
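The GSS step itself is standard and easy to sketch. Below is a minimal Python golden-section search; the objective that the ACF regression method would minimize (the regression error as a function of the candidate H) is replaced here by a toy quadratic, so the H estimator itself is not reproduced.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search,
    the bracketing refinement an improved estimator can use instead of a
    dense grid over candidate H values in (0.5, 1)."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy objective standing in for the ACF regression error, minimized at H = 0.8.
h_hat = golden_section_search(lambda h: (h - 0.8) ** 2, 0.5, 1.0)
```

Each iteration shrinks the bracket by a factor of about 0.618 while reusing one interior point, which is where the efficiency gain over naive grid search comes from.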
Basu, Asit P; Basu, Sujit K
1998-01-01
This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul…
Ege, Kerem; Laulagnet, Bernard; Guyader, Jean-Louis
2012-01-01
In the context of lightweight structures for aeronautics, automotive or construction applications, the level of performance of proposed solutions in terms of damping and isolation is fundamental. Hence multilayered plates appear as an interesting answer if damping performance is properly optimized. In this paper, a novel modal analysis method (Ege et al, JSV 325 (4-5), 2009) is used to identify the viscoelastic properties (loss factors, Young's modulus) of "polyethylene thermoplastic / aluminum" bilayer plates. The thermoplastic is chosen for its high loss factors and relatively low mass. The experimental method consists of a high-resolution technique (ESPRIT algorithm) which allows precise estimation of the viscoelastic properties even in frequency domains with high modal overlap (high damping or modal density). Experimental loss factors estimated from impact hammer excitations on the free-free plates correspond closely with two theoretical estimations. In the first model (Guyader & Lesueur, JSV 58(1), 1978) the...
Rapid estimation of frequency response functions by close-range photogrammetry
Tripp, J. S.
1985-01-01
The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.
On-Board State-of-Health Estimation at a Wide Ambient Temperature Range in Lithium-Ion Batteries
Tiansi Wang
2015-08-01
Full Text Available A state-of-health (SOH) estimation method for electric vehicles (EVs) is presented with three main advantages. (1) It provides joint estimation of the cell's aging states in terms of power and energy (i.e., SOHP and SOHE): because the determination of SOHP and SOHE can be reduced to the estimation of the ohmic resistance increase and capacity loss, respectively, the ohmic resistance at nominal temperature is taken as a health indicator, and the capacity loss is estimated with a mechanistic model developed to describe the correlation between resistance increase and capacity loss. (2) It is applicable to a wide range of ambient temperatures: to eliminate the effects of temperature on the resistance, another mechanistic model of resistance against temperature is presented, which normalizes the resistance at various temperatures to its standard value at the nominal temperature. (3) It needs low computational effort for on-board application: based on a linear equation of the cell's dynamic behavior, the recursive least-squares (RLS) algorithm is used for the resistance estimation. The coefficients of the models are determined from designed performance experiments, and the accuracy of the proposed method is verified with validation experiments. The results at different aging states and temperatures show good accuracy and reliability.
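The RLS resistance update can be sketched generically. The snippet below is a plain recursive least-squares step for a linear-in-parameters model; the specific battery equation, regressors, and variable names of the paper are not reproduced, so treat the scalar toy usage (recovering a 0.05 Ω resistance from noiseless V = R·I samples) as an assumed illustration.

```python
def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam for a
    linear model y = x . theta + noise. theta: parameter list, P: covariance
    matrix (list of lists), x: regressor list, y: measurement."""
    n = len(x)
    # Gain k = P x / (lam + x' P x); P is kept symmetric by the update.
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]
    err = y - sum(x[i] * theta[i] for i in range(n))
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Toy usage (assumed setup): estimate a 0.05-ohm resistance from
# noiseless voltage-drop samples V = R * I.
theta, P = [0.0], [[1000.0]]
for i in range(50):
    current = 1.0 + (i % 5)
    theta, P = rls_update(theta, P, [current], 0.05 * current, lam=1.0)
```

With a forgetting factor below 1, the same update tracks a slowly drifting resistance, which is what makes it attractive for on-board aging estimation.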
Donk, Roland D; Fehlings, Michael G; Verhagen, Wim I M; Arnts, Hisse; Groenewoud, Hans; Verbeek, André L M; Bartels, Ronald H M A
2017-05-01
OBJECTIVE Although there is increasing recognition of the importance of cervical spinal sagittal balance, there is a lack of consensus as to the optimal method to accurately assess cervical sagittal alignment. Cervical alignment is important for surgical decision making. Sagittal balance of the cervical spine is generally assessed using one of two methods: measuring the angle between C-2 and C-7, or drawing a line between C-2 and C-7. Here, the best method to assess sagittal alignment of the cervical spine is investigated. METHODS Data from 138 patients enrolled in a randomized controlled trial (Procon) were analyzed. Two investigators independently measured the angle between C-2 and C-7 using Harrison's posterior tangent method, and also estimated the shape of the sagittal curve using a modified Toyama method. The mean angles of each quantitative assessment of the sagittal alignment were calculated and the results were compared. The interrater reliability for both methods was estimated using Cronbach's alpha. RESULTS For both methods the interrater reliability was high: 0.907 for the posterior tangent method and 0.984 for the modified Toyama technique. For a lordotic cervical spine, defined by the modified Toyama method, the mean angle (defined by Harrison's posterior tangent method) was 23.4° ± 9.9° (range 0.4°-52.4°); for a kyphotic cervical spine it was -2.2° ± 9.2° (range -16.1° to 16.9°); and for a straight cervical spine it was 10.5° ± 8.2° (range -11° to 36°). CONCLUSIONS An absolute measurement of the angle between C-2 and C-7 does not unequivocally define the sagittal cervical alignment. As can be seen from the minimum and maximum values, even a positive angle between C-2 and C-7 could be present in a kyphotic spine. For this purpose, the modified Toyama method (drawing a line from the posterior inferior part of the vertebral body of C-2 to the posterior upper part of the vertebral body of C-7 without any
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.
Song, Yu; Nuske, Stephen; Scherer, Sebastian
2016-12-22
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must deal with aggressive 6-DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System), even GPS-denied, situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail, and a long-range stereo visual odometry proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
Gregory P. Asner; Michael Palace; Michael Keller; Rodrigo Pereira Jr.; Jose N. M. Silva; Johan C. Zweede
2002-01-01
Canopy structural data can be used for biomass estimation and studies of carbon cycling, disturbance, energy balance, and hydrological processes in tropical forest ecosystems. Scarce information on canopy dimensions reflects the difficulties associated with measuring crown height, width, depth, and area in tall, humid tropical forests. New field and spaceborne...
Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities have significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
Yang, Ming; Zhu, X Ronald; Park, Peter C; Titt, Uwe; Mohan, Radhe; Virshup, Gary; Clayton, James E; Dong, Lei
2012-07-07
The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0-3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction.
Tarjan, Lily M; Tinker, M. Tim
2016-01-01
Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
Janssen, FMFC; Landry, G.; Cambraia Lopes, P.; Dedes, G.; Smeets, J.; Schaart, D. R.; Parodi, K.; Verhaegen, F.
2014-08-01
In-vivo imaging is a strategy to monitor the range of protons inside the patient during radiation treatment. A possible method of in-vivo imaging is detection of secondary ‘prompt’ gamma (PG) photons outside the body, which are produced by inelastic proton-nuclear interactions inside the patient. In this paper, important parameters influencing the relationship between the PG profile and percentage depth dose (PDD) in a uniform cylindrical phantom are explored. Monte Carlo simulations are performed with the new Geant4 based code TOPAS for mono-energetic proton pencil beams (range: 100-250 MeV) and an idealized PG detector. PG depth profiles are evaluated using the inflection point on a sigmoid fit in the fall-off region of the profile. A strong correlation between the inflection point and the proton range determined from the PDD is found for all conditions. Variations between 1.5 mm and 2.7 mm in the distance between the proton range and the inflection point are found when either the mass density, phantom diameter, or detector acceptance angle is changed. A change in cut-off energy of the detector could induce a range difference of maximum 4 mm. Applying time-of-flight discrimination during detection, changing the primary energy of the beam or changing the elemental composition of the tissue affects the accuracy of the range prediction by less than 1 mm. The results indicate that the PG signal is rather robust to many parameter variations, but millimetre accurate range monitoring requires all medium and detector properties to be carefully taken into account.
Diniz, Ana; Barreiros, João; Crato, Nuno
2010-01-01
Repetitive movements lead to isochronous serial interval production that exhibits inherent variability. The Wing-Kristofferson model offers a decomposition of the interresponse intervals in tapping tasks based on a cognitive component and a motor component. We suggest a new theoretical and fully parametric approach to this model in which the cognitive component is modeled as a long-memory process and the motor component is treated as a white noise process, the two being mutually independent. Under these assumptions, we obtained the autocorrelation function and the spectral density function. Furthermore, we propose an estimator based on the maximization of the frequency-domain representation of the likelihood function. Finally, we conducted a simulation study to assess the properties of this estimator and performed an experimental study involving tapping tasks with two target frequencies (1.250 Hz and 0.625 Hz).
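For context, the classic Wing-Kristofferson decomposition (the white-noise-clock special case, not the long-memory extension proposed here) admits a simple moment estimator: the motor variance is minus the lag-1 autocovariance of the intervals, and the clock variance is the lag-0 autocovariance plus twice the lag-1 value. A sketch in Python:

```python
def wing_kristofferson(intervals):
    """Moment estimator for the classic Wing-Kristofferson model,
    I_k = C_k + M_(k+1) - M_k with white clock noise C and motor noise M:
    motor variance = -autocov(lag 1), clock variance = autocov(0) + 2*autocov(1)."""
    n = len(intervals)
    mean = sum(intervals) / n
    def autocov(lag):
        return sum((intervals[i] - mean) * (intervals[i + lag] - mean)
                   for i in range(n - lag)) / n
    g0, g1 = autocov(0), autocov(1)
    motor_var = -g1
    clock_var = g0 + 2 * g1
    return clock_var, motor_var
```

Under the long-memory clock of this paper the lag-1 moment estimator is biased, which is the motivation for the frequency-domain likelihood estimator the authors propose instead.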
Li Chenlei
2014-10-01
Full Text Available Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), and one that is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to estimate it correctly using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, which is based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on a fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each of the separated signals in order to estimate their respective parameters. As verified through computer simulation with the Cartwheel, Pendulum and Helix constellations, this proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets.
Estimation of intra-operative brain shift using a tracked laser range scanner.
Ding, Siyi; Miga, Michael I; Thompson, Reid C; Dumpuri, Prashanth; Cao, Aize; Dawant, Benoit M
2007-01-01
Intra-operative brain shift limits the usefulness of image-guided neurosurgery systems (IGNS), which are based on pre-operative images. Methods that are being developed to address this problem need intra-operative measurements as input. In this work, we present an intra-operative surface shift measurement technique that relies on a tracked 3D laser range scanner. This scanner acquires both 3D range data and 2D images, which are co-registered. We compare two methods to derive displacements at every point in the field of view. The first one relies on the registration of the 2D images; the second relies on the direct 3D registration of the 3D range data. Our results, based on five data sets, show that the 2D method is preferable.
Xiaopeng Song
2013-07-01
Full Text Available The DOA (direction of arrival) estimation of seismic signals from a moving target on the ground bears great significance for unattended ground systems. Traditional DOA estimation of seismic signals is achieved with a sensor array and its corresponding algorithms. A MEMS (Micro-Electro-Mechanical Systems) vector vibration sensor, however, captures vector information about the propagation of seismic signals and can therefore produce a DOA estimate within a certain range from a single sensor. This paper proposes a new method to extend the orientation range through rotation of the MEMS vector vibration axis. Experiments show that this method offers a simple system structure, high sensitivity, and an average error of less than 5 degrees, giving it wide application prospects.
Pifer, Alburt E.; Hiscox, William L.; Cummins, Kenneth L.; Neumann, William T.
1991-01-01
Gated, wideband, magnetic direction finders (DFs) were originally designed to measure the bearing of cloud-to-ground lightning relative to the sensor. A recent addition to this device uses proprietary waveform discrimination logic to select return stroke signatures and certain range-dependent features in the waveform to provide an estimate of the range of flashes within 50 km. The enhanced ranging techniques, designed and developed for use in a single-station thunderstorm warning sensor, are discussed. Included are the results of ongoing evaluations being conducted under a variety of meteorological and geographic conditions.
IMPACT RANGE ESTIMATION OF POLLUTED SOIL AREA FOR RADIATION MONITORING BY «IN SITU» METHOD
A. Zhukouski
2014-01-01
Full Text Available The intensity of «direct» gamma-quanta absorbed in a cylindrical detector has been determined for a detector placed over soil uniformly contaminated in depth. The dependence of the impact-range radius on the depth of cesium contamination is established for a NaI(Tl) detector, using soil from the Tohoku region, Japan.
Estimating and Forecasting Asset Volatility and Its Volatility: A Markov-Switching Range Model
Piplack, J.
2009-01-01
This paper proposes a new model for estimating and forecasting the volatility of asset markets. We suggest using the log range, defined as the natural logarithm of the difference between the maximum and the minimum price observed for an asset within a certain period of time, i.e. one trading week. There is
Rainfall erosivity estimation based on rainfall data collected over a range of temporal resolutions
S. Yin
2015-05-01
Full Text Available Rainfall erosivity is the power of rainfall to cause soil erosion by water. The rainfall erosivity index for a rainfall event, EI30, is calculated from the total kinetic energy and maximum 30 min intensity of individual events. However, these data are often unavailable in many areas of the world. The purpose of this study was to develop models that relate more commonly available rainfall data resolutions, such as daily or monthly totals, to rainfall erosivity. Eleven stations with one-minute temporal resolution rainfall data collected from 1961 through 2000 in the eastern water-erosion areas of China were used to develop and calibrate 21 models. Seven independent stations, also with one-minute data, were utilized to validate those models, together with 20 previously published equations. Results showed that the models in this study performed better than or similarly to models from previous research in estimating rainfall erosivity for these data. Prediction capabilities, as determined using symmetric mean absolute percentage errors and Nash–Sutcliffe model efficiency coefficients, were demonstrated for the 41 models including those for estimating erosivity at event, daily, monthly, yearly, average monthly and average annual time scales. Prediction capabilities were generally better using higher resolution rainfall data as inputs. For example, models with rainfall amount and maximum 60 min rainfall amount as inputs performed better than models with rainfall amount and maximum daily rainfall amount, which performed better than those with only rainfall amount. Recommendations are made for choosing the appropriate estimation equation, which depend on objectives and data availability.
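As a concrete illustration of the event index, the sketch below computes EI30 from breakpoint rainfall increments. The unit kinetic energy equation used, e = 0.29(1 - 0.72 exp(-0.05 i)) in MJ ha⁻¹ mm⁻¹, is the RUSLE form and is an assumption of this sketch; it is not necessarily the exact equation used by the paper.

```python
import math

def event_ei30(increments, i30):
    """Event erosivity EI30 (MJ mm ha^-1 h^-1) from a list of
    (depth_mm, intensity_mm_per_h) rainfall increments and the event's
    maximum 30-min intensity i30 (mm/h). Uses the RUSLE unit kinetic
    energy equation e = 0.29*(1 - 0.72*exp(-0.05*i)) [MJ ha^-1 mm^-1],
    an assumed choice for this illustration."""
    energy = sum(0.29 * (1 - 0.72 * math.exp(-0.05 * intensity)) * depth
                 for depth, intensity in increments)
    return energy * i30

# Single 10 mm increment falling at 50 mm/h, with I30 = 50 mm/h.
ei30 = event_ei30([(10.0, 50.0)], 50.0)
```

The regression models in the paper relate coarser inputs (daily or monthly totals) to sums of this event-level quantity over a period.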
Haupt, Lois J; Kazmi, Faraz; Ogilvie, Brian W; Buckley, David B; Smith, Brian D; Leatherman, Sarah; Paris, Brandy; Parkinson, Oliver; Parkinson, Andrew
2015-11-01
In the present study, we conducted a retrospective analysis of 343 in vitro experiments to ascertain whether observed (experimentally determined) values of Ki for reversible cytochrome P450 (P450) inhibition could be reliably predicted by dividing the corresponding IC₅₀ values by two, based on the relationship (for competitive inhibition) in which Ki = IC₅₀/2 when [S] (substrate concentration) = Km (Michaelis-Menten constant). Values of Ki and IC₅₀ were determined under the following conditions: 1) the concentration of P450 marker substrate, [S], was equal to Km (for IC₅₀ determinations) and spanned Km (for Ki determinations); 2) the substrate incubation time was short (5 minutes) to minimize metabolism-dependent inhibition and inhibitor depletion; and 3) the concentration of human liver microsomes was low (0.1 mg/ml or less) to maximize the unbound fraction of inhibitor. Under these conditions, predicted Ki values, based on IC₅₀/2, correlated strongly with experimentally observed Ki determinations [r = 0.940; average fold error (AFE) = 1.10]. Of the 343 predicted Ki values, 316 (92%) were within a factor of 2 of the experimentally determined Ki values, and only one value fell outside a 3-fold range. In the case of noncompetitive inhibitors, Ki values predicted from IC₅₀/2 values were overestimated by a factor of nearly 2 (AFE = 1.85; n = 13), which is to be expected because, for noncompetitive inhibition, Ki = IC₅₀ (not IC₅₀/2). The results suggest that, under appropriate experimental conditions with the substrate concentration equal to Km, values of Ki for direct, reversible inhibition can be reliably estimated from values of IC₅₀/2.
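The conversion the study tests is the competitive-inhibition case of the Cheng-Prusoff relation, which reduces to Ki = IC₅₀/2 exactly when [S] = Km. A minimal sketch (function name and argument names are illustrative, not from the paper):

```python
def ki_from_ic50(ic50, s, km, mechanism="competitive"):
    """Cheng-Prusoff conversion of IC50 to Ki.
    Competitive inhibition: Ki = IC50 / (1 + [S]/Km), so Ki = IC50/2 when [S] = Km.
    Noncompetitive inhibition: Ki = IC50, independent of [S]."""
    if mechanism == "competitive":
        return ic50 / (1 + s / km)
    if mechanism == "noncompetitive":
        return ic50
    raise ValueError("unsupported mechanism")

# At [S] = Km the competitive conversion is exactly IC50/2:
ki = ki_from_ic50(10.0, s=5.0, km=5.0)  # -> 5.0
```

The noncompetitive branch makes the study's observation explicit: dividing IC₅₀ by two for a noncompetitive inhibitor underestimates Ki by a factor of two, matching the reported AFE of about 1.85.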
Huang, Liping; Crino, Michelle; Wu, Jason HY; Woodward, Mark; Land, Mary-Anne; McLean, Rachael; Webster, Jacqui; Enkhtungalag, Batsaikhan; Nowson, Caryl A; Elliott, Paul; Cogswell, Mary; Toft, Ulla; Mill, Jose G.; Furlanetto, Tania W.; Ilich, Jasminka Z.
2016-01-01
Background Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. Objective The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effect...
Harnessing Big-Data for Estimating the Energy Consumption and Driving Range of Electric Vehicles
Fetene, Gebeyehu Manie; Prato, Carlo Giacomo; Kaplan, Sigal
This study analyses the driving range and investigates the factors affecting the energy consumption rate of fully-battery electric vehicles under real-world driving patterns, accounting for weather conditions, drivers’ characteristics, and road characteristics. Four data sources are used: (i) up… The …-effects econometrics model used in this paper predicts that the energy-saving speed of driving is between 45 and 56 km/h. In addition to contributing to the literature on the energy efficiency of electric vehicles, the findings from this study enlighten consumers to choose appropriate cars that suit their travel…
Estimating the generation interval of influenza A (H1N1) in a range of social settings.
te Beest, Dennis E; Wallinga, Jacco; Donker, Tjibbe; van Boven, Michiel
2013-03-01
A proper understanding of the infection dynamics of influenza A viruses hinges on the availability of reliable estimates of key epidemiologic parameters such as the reproduction number, intrinsic growth rate, and generation interval. Often the generation interval is assumed to be similar in different settings although there is little evidence justifying this. Here we estimate the generation interval for stratifications based on age, cluster size, and social setting (camp, school, workplace, household) using data from 16 clusters of Novel Influenza A (H1N1) in the Netherlands. Our analyses are based on a Bayesian inferential framework, enabling flexible handling of both missing infection links and missing times of symptoms onset. The analysis indicates that a stratification that allows the generation interval to differ by social setting fits the data best. Specifically, the estimated generation interval was shorter in households (2.1 days [95% credible interval = 1.6-2.9]) and camps (2.3 days [1.4-3.4]) than in workplaces (2.7 days [1.9-3.7]) and schools (3.4 days [2.5-4.5]). Our findings could be the result of differences in the number of contacts between settings, differences in prophylactic use of antivirals between settings, and differences in underreporting.
Wiens, J. David; Kolar, Patrick S.; Fuller, Mark R.; Hunt, W. Grainger; Hunt, Teresa
2015-01-01
We used a multistate occupancy sampling design to estimate occupancy, breeding success, and abundance of territorial pairs of golden eagles (Aquila chrysaetos) in the Diablo Range, California, in 2014. This method uses the spatial pattern of detections and non-detections over repeated visits to survey sites to estimate probabilities of occupancy and successful reproduction while accounting for imperfect detection of golden eagles and their young during surveys. The estimated probability of detecting territorial pairs of golden eagles and their young was less than 1 and varied with time of the breeding season, as did the probability of correctly classifying a pair’s breeding status. Imperfect detection and breeding classification led to a sizeable difference between the uncorrected, naïve estimate of the proportion of occupied sites where successful reproduction was observed (0.20) and the model-based estimate (0.30). The analysis further indicated a relatively high overall probability of landscape occupancy by pairs of golden eagles (0.67, standard error = 0.06), but that areas with the greatest occupancy and reproductive potential were patchily distributed. We documented a total of 138 territorial pairs of golden eagles during surveys completed in the 2014 breeding season, which represented about one-half of the 280 pairs we estimated to occur in the broader 5,169-square kilometer region sampled. The study results emphasize the importance of accounting for imperfect detection and spatial heterogeneity in studies of site occupancy, breeding success, and abundance of golden eagles.
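The gap between the naive estimate (0.20) and the model-based estimate (0.30) above arises because detection probability is below 1. A toy illustration of the correction, assuming a constant per-visit detection probability rather than the paper's full multistate occupancy model:

```python
def corrected_occupancy(naive, p_detect, n_visits):
    """Correct a naive occupancy estimate for imperfect detection,
    assuming a constant per-visit detection probability p_detect.

    Pr(detected at least once | occupied) = 1 - (1 - p)^K,
    so psi is approximately naive / (1 - (1 - p)^K).
    """
    p_star = 1.0 - (1.0 - p_detect) ** n_visits
    return naive / p_star
```

The real model additionally lets detection and state-classification probabilities vary through the breeding season, which this constant-p sketch deliberately omits.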
Spatio-temporal-based joint range and angle estimation for wideband signals
Villemin, Guilhem; Fossati, Caroline; Bourennane, Salah
2013-12-01
Object localization using an active sensor network, which exploits the scattering of the waves emitted by a transmitter, has drawn considerable research interest in recent years. For most applications, the environment leads to the arrival of multiple signals corresponding to the emitted signal, signals scattered by the objects, and noise. In practical systems, the signals impinging on an array are frequently correlated, and the number of objects rapidly exceeds the number of sensors, making most high-resolution methods used in array processing unsuitable. We propose a solution to overcome these two experimental constraints. First, frequential smoothing is used to decorrelate the scattered signals, enabling the estimation of their time delays of arrival (TDOA) using subspace-based methods. Second, an efficient algorithm for source localization using the TDOA is proposed. The advantage of the developed method is its efficiency even when the number of sources is larger than the number of sensors and the signals are correlated. The performance of the proposed method is assessed on simulated signals. Results on real-world data are also presented and analyzed.
Li, Yan; Chen, Jianjun; Liu, Jipeng; Zhang, Lei; Wang, Weiguo; Zhang, Shaofeng
2013-09-01
The reliability of all-ceramic crowns is of concern to both patients and doctors. This study introduces a new methodology for quantifying the reliability of all-ceramic crowns based on the stress-strength interference theory and finite element models. The variables selected for the reliability analysis include the magnitude of the occlusal contact area, the occlusal load and the residual thermal stress. The calculated reliabilities of crowns under different loading conditions showed that overly small occlusal contact areas, or too great a difference between the thermal coefficients of the veneer and core layers, led to high failure probabilities. These results were consistent with many previous reports. The methodology is therefore shown to be a valuable method for analyzing the reliability of restorations in the complicated oral environment.
Estimating indices of range shifts in birds using dynamic models when detection is imperfect
Clement, Matthew J.; Hines, James E.; Nichols, James D.; Pardieck, Keith L.; Ziolkowski, David J.
2016-01-01
There is intense interest in basic and applied ecology about the effect of global change on current and future species distributions. Projections based on widely used static modeling methods implicitly assume that species are in equilibrium with the environment and that detection during surveys is perfect. We used multiseason correlated detection occupancy models, which avoid these assumptions, to relate climate data to distributional shifts of Louisiana Waterthrush in the North American Breeding Bird Survey (BBS) data. We summarized these shifts with indices of range size and position and compared them to the same indices obtained using more basic modeling approaches. Detection rates during point counts in BBS surveys were low, and models that ignored imperfect detection severely underestimated the proportion of area occupied and slightly overestimated mean latitude. Static models indicated Louisiana Waterthrush distribution was most closely associated with moderate temperatures, while dynamic occupancy models indicated that initial occupancy was associated with diurnal temperature ranges and colonization of sites was associated with moderate precipitation. Overall, the proportion of area occupied and mean latitude changed little during the 1997–2013 study period. Near-term forecasts of species distribution generated by dynamic models were more similar to subsequently observed distributions than forecasts from static models. Occupancy models incorporating a finite mixture model on detection – a new extension to correlated detection occupancy models – were better supported and may reduce bias associated with detection heterogeneity. We argue that replacing phenomenological static models with more mechanistic dynamic models can improve projections of future species distributions. In turn, better projections can improve biodiversity forecasts, management decisions, and understanding of global change biology.
The Influence of Study Species Selection on Estimates of Pesticide Exposure in Free-Ranging Birds
Borges, Shannon L.; Vyas, Nimish B.; Christman, Mary C.
2014-02-01
Field studies of pesticide effects on birds often utilize indicator species with the purpose of extrapolating to other avian taxa. Little guidance exists for choosing indicator species to monitor the presence and/or effects of contaminants that are labile in the environment or body, but are acutely toxic, such as anticholinesterase (anti-ChE) insecticides. Use of an indicator species that does not represent maximum exposure and/or effects could lead to inaccurate risk estimates. Our objective was to test the relevance of a priori selection of indicator species for a study on pesticide exposure to birds inhabiting fruit orchards. We used total plasma ChE activity and ChE reactivation to describe the variability in anti-ChE pesticide exposure among avian species in two conventionally managed fruit orchards. Of seven species included in statistical analyses, the less common species, chipping sparrow (Spizella passerina), showed the greatest percentage of exposed individuals and the greatest ChE depression, whereas the two most common species, American robins (Turdus migratorius) and gray catbirds (Dumatella carolinensis), did not show significant exposure. Due to their lower abundance, chipping sparrows would have been an unlikely choice for study. Our results show that selection of indicator species using traditionally accepted criteria such as abundance and ease of collection may not identify species that are at greatest risk. Our efforts also demonstrate the usefulness of conducting multiple-species pilot studies prior to initiating detailed studies on pesticide effects. A study such as ours can help focus research and resources on study species that are most appropriate.
Perez Sanchez-Canete, Enrique; Scott, Russell L.; Barron-Gafford, Greg; van Haren, Joost
2016-04-01
Soil CO2 fluxes represent a major source of CO2 emissions, and small changes in their estimation provoke large changes in the quantification of the global carbon cycle. Recently, the gradient method, which employs soil CO2 probes at multiple depths, has been offered as a way to inexpensively and continuously measure soil CO2 flux. However, the gradient method can yield inappropriate flux estimates due to uncertainties mainly associated with inappropriate determination of the soil diffusion coefficient. Therefore, in-situ methods to determine the diffusion coefficient are necessary to obtain accurate CO2 fluxes. Here, data obtained during one year with two automatic soil CO2 chambers, along with CO2 molar fraction data from four probes at 10 cm depth, were used to determine a model of the soil diffusion coefficient (Ds), which was later applied to obtain soil CO2 fluxes by the gradient method. Another Ds model was obtained by injection and sampling of SF6 during several campaigns with different soil water content levels. Both Ds models obtained in situ were compared with another 13 published Ds models. We addressed three questions: 1) Can we use a previously published model, or do we need to determine Ds in situ? 2) How accurate are the CO2 flux estimates obtained by the gradient method for different Ds models, compared with chamber-measured CO2 fluxes? 3) Can we take a limited number of chamber measurements to obtain a good Ds model, or do we need longer calibration periods? Comparing the cumulative soil respiration for the different diffusion models, we found that the model with empirical calibration to the soil chambers had the best agreement with the chamber fluxes (the SF6 model underestimated chamber fluxes by 23%, and the published models ranged from an underestimate of 78% to an overestimate of 14%). Most importantly, we found that a few days of measurements with a soil respiration chamber (with widely varying soil water content) are enough to build
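The gradient method itself is Fick's first law applied to the measured CO2 profile. A minimal sketch, assuming an ideal-gas conversion from molar fraction to concentration, with one commonly published Ds model (Millington-Quirk) of the kind the authors compare against; names and defaults are illustrative:

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def gradient_flux(co2_ppm_upper, co2_ppm_lower, dz_m,
                  ds_m2_s, temp_k=298.15, pressure_pa=101325.0):
    """Soil CO2 flux (mol m^-2 s^-1) by the gradient method, F = Ds * dC/dz.

    CO2 molar fractions (ppm) at two depths are converted to molar
    concentration via the ideal-gas law, C = x * P / (R * T); z increases
    downward, so a positive result is an upward efflux toward the surface.
    """
    to_conc = lambda ppm: ppm * 1e-6 * pressure_pa / (R * temp_k)
    dc_dz = (to_conc(co2_ppm_lower) - to_conc(co2_ppm_upper)) / dz_m
    return ds_m2_s * dc_dz

def millington_quirk_ds(d0, porosity, air_filled):
    """One published Ds model (Millington-Quirk): Ds = D0 * e_a^(10/3) / phi^2."""
    return d0 * air_filled ** (10.0 / 3.0) / porosity ** 2
```

The paper's central point is that the choice of the Ds model (second function) dominates the error in the flux (first function), which is why the in-situ, chamber-calibrated Ds performed best.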
Díaz Gómez, Juan Manuel
2011-01-01
Establishing the ancestral ranges of distribution of a monophyletic clade, called the ancestral area, is one of the central objectives of historical biogeography. In this study, I used three common methodologies to establish the ancestral area of an important clade of Neotropical lizards, the family Liolaemidae. The methods used were: Fitch optimization, Weighted Ancestral Area Analysis and Dispersal-Vicariance Analysis (DIVA). A main difference from previous studies is that the areas used in the analysis are defined based on actual distributions of the species of Liolaemidae, instead of areas defined arbitrarily or based on other taxa. The ancestral area of Liolaemidae found by Fitch optimization is the Prepuna of Argentina, Central Chile and Coastal Peru. Weighted Ancestral Area Analysis found Central Chile, Coquimbo, Payunia, Austral Patagonia and Coastal Peru. Dispersal-Vicariance Analysis found an ancestral area that includes almost all the areas occupied by Liolaemidae, except Atacama, Coquimbo and Austral Patagonia. The results can be summarized as two opposing hypotheses: a restricted ancestral area for the ancestor of Liolaemidae in Central Chile and Patagonia, or a widespread ancestor distributed along the Andes. Some limitations of the methods were identified, for example the excessive importance of plesiomorphic areas in the cladograms. PMID:22028873
Pourahmadi, Mohammad Reza; Jannati, Elham; Mohseni-Bandpei, Mohammad Ali; Ebrahimi Takamjani, Ismail; Rajabzadeh, Fatemeh
2016-01-01
Background Measurement of lumbar spine range of motion (ROM) is often considered to be an essential component of lumbar spine physiotherapy and orthopedic assessment. The measurement can be carried out with various instruments such as inclinometers and goniometers. Recent smartphones have been equipped with accelerometers and magnetometers, which, through specific software applications (apps), can be used for inclinometric functions. Purpose The main purpose was to investigate the reliability and validity of an iPhone® app (TiltMeter© - advanced level and inclinometer) for measuring standing lumbar spine flexion–extension ROM in asymptomatic subjects. Design A cross-sectional study was carried out. Setting This study was conducted in a physiotherapy clinic located at the School of Rehabilitation Sciences, Iran University of Medical Science and Health Services, Tehran, Iran. Subjects A convenience sample of 30 asymptomatic adults (15 males; 15 females; age range = 18–55 years) was recruited between August 2015 and December 2015. Methods Following a 2-minute warm-up, the subjects were asked to stand in a relaxed position and their skin was marked at the T12–L1 and S1–S2 spinal levels. From this position, they were asked to perform maximum lumbar flexion followed by maximum lumbar extension with their knees straight. Two blinded raters each used an inclinometer and the iPhone® app to measure lumbar spine flexion–extension ROM. A third rater read the measured angles. To calculate total lumbar spine flexion–extension ROM, the measurement from S1–S2 was subtracted from T12–L1. The second (2 hours later) and third (48 hours later) sessions were carried out in the same manner as the first session. All of the measurements were conducted 3 times and the mean value of 3 repetitions for each measurement was used for analysis. Intraclass correlation coefficient (ICC) models (3, k) and (2, k) were used to determine the intra-rater and inter
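The reliability statistic used here, ICC(3,k), can be computed from a subjects-by-raters table via a two-way ANOVA decomposition. A self-contained sketch (consistency form, average of k raters; not the authors' own analysis code):

```python
def icc_3k(ratings):
    """ICC(3,k): two-way mixed model, consistency, average of k raters.

    ratings: list of rows, one per subject, each holding k rater scores.
    ICC(3,k) = (BMS - EMS) / BMS, where BMS is the between-subjects mean
    square and EMS the residual mean square of a two-way ANOVA.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in row_means)
    ss_rater = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    bms = ss_subj / (n - 1)
    ems = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))
    return (bms - ems) / bms
```

On the classic Shrout-Fleiss example table (6 subjects, 4 raters), this returns approximately 0.91, matching the published ICC(3,k) value.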
Prechtel, Austin R.; Coulter, Alison A.; Etchison, Luke; Jackson, P. Ryan; Goforth, Reuben R.
2017-01-01
Unregulated rivers provide unobstructed corridors for the dispersal of both native and invasive species. We sought to evaluate range size and habitat use of an invasive species (Silver Carp, Hypophthalmichthys molitrix) in an unimpounded river reach (Wabash River, IN), to provide insights into the dispersal of invasive species and their potential overlap with native species. We hypothesized that range size would increase with fish length, be similar among sexes, and vary annually while habitats used would be deeper, warmer, lower velocity, and of finer substrate. Silver Carp habitat use supported our hypotheses but range size did not vary with sex or length. 75% home range varied annually, suggesting that core areas occupied by individuals may change relative to climate-based factors (e.g., water levels), whereas broader estimates of range size remained constant across years. Ranges were often centered on landscape features such as tributaries and backwaters. Results of this study indicate habitat and landscape features as potential areas where Silver Carp impacts on native ecosystems may be the greatest. Observed distribution of range sizes indicates the presence of sedentary and mobile individuals within the population. Mobile individuals may be of particular importance as they drive the spread of the invasive species into new habitats.
Quasi-additive estimates on the Hamiltonian for the one-dimensional long range Ising model
Littin, Jorge; Picco, Pierre
2017-07-01
In this work, we study the problem of getting quasi-additive bounds for the Hamiltonian of the long range Ising model, when the two-body interaction term decays proportionally to 1/d^(2−α), α ∈ (0,1). We revisit the paper by Cassandro et al. [J. Math. Phys. 46, 053305 (2005)], where they extend to the case α ∈ [0, ln3/ln2 − 1) the result of the existence of a phase transition by using a Peierls argument given by Fröhlich and Spencer [Commun. Math. Phys. 84, 87-101 (1982)] for α = 0. The main arguments of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)] are based on a quasi-additive decomposition of the Hamiltonian in terms of hierarchical structures called triangles and contours, which are related to the original definition of contours introduced by Fröhlich and Spencer [Commun. Math. Phys. 84, 87-101 (1982)]. In this work, we study the existence of a quasi-additive decomposition of the Hamiltonian in terms of the contours defined in the work of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)]. The most relevant result obtained is Theorem 4.3, where we show that there is a quasi-additive decomposition for the Hamiltonian in terms of contours when α ∈ [0,1), but not in terms of triangles. The fact that there cannot be a quasi-additive bound in terms of triangles leads to a very interesting maximization problem whose maximizer is related to a discrete Cantor set. As a consequence of the quasi-additive bounds, we prove that we can generalise the result of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)], that is, a Peierls argument, to the whole interval α ∈ [0,1). We also state here the result of Cassandro et al. [Commun. Math. Phys. 327, 951-991 (2014)] about cluster expansions, which implies that Theorem 2.4, concerning interfaces, and Theorem 2.5, concerning n-point truncated correlation functions in Cassandro et al. [Commun. Math. Phys. 327, 951-991 (2014)], are valid for all α ∈ [0,1) instead of only α ∈ [0, ln3/ln2 − 1).
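For orientation, the Hamiltonian the abstract refers to and the meaning of "quasi-additive" can be written schematically as follows; normalization and boundary conventions vary, and the contour energies E(Γ) are defined in the cited papers, not here:

```latex
% 1D long-range Ising Hamiltonian with 1/d^{2-\alpha} couplings:
H(\sigma) \;=\; -\sum_{i<j}\, \frac{\sigma_i\,\sigma_j}{|i-j|^{\,2-\alpha}},
\qquad \sigma_i \in \{-1,+1\},\quad \alpha \in [0,1).

% Quasi-additive bound (schematic): the excess energy of a configuration
% with contour family \Gamma_1,\dots,\Gamma_n is bounded below, uniformly
% in n, by a fixed fraction of the sum of individual contour energies,
H(\Gamma_1,\dots,\Gamma_n) \;\ge\; \kappa \sum_{k=1}^{n} E(\Gamma_k),
\qquad \kappa > 0,
% which is exactly what a Peierls-type argument requires.
```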
Paul Treitz
2013-10-01
Full Text Available Leaf Area Index (LAI) is an important input variable for forest ecosystem modeling as it is a factor in predicting productivity and biomass, two key aspects of forest health. Current in situ methods of determining LAI are sometimes destructive and generally very time consuming. Other LAI derivation methods, mainly satellite-based in nature, do not provide sufficient spatial resolution or the precision required by forest managers for tactical planning. This paper focuses on estimating LAI from: (i) height and density metrics derived from Light Detection and Ranging (LiDAR); (ii) spectral vegetation indices (SVIs), in particular the Normalized Difference Vegetation Index (NDVI); and (iii) a combination of these methods. For the Hearst Forest of Northern Ontario, in situ measurements of LAI were derived from digital hemispherical photographs (DHPs), while remote sensing variables were derived from low density LiDAR (i.e., 1 m−2) and high spatial resolution WorldView-2 data (2 m). Multiple Linear Regression (MLR) models were generated using these variables. Results from these analyses demonstrate: (i) moderate explanatory power (i.e., R2 = 0.53) for LiDAR height and density metrics that have proven to be related to canopy structure; (ii) no relationship when using SVIs; and (iii) no significant improvement of LiDAR models when combining them with SVI variables. The results suggest that LiDAR models in boreal forest environments provide satisfactory estimations of LAI, even with narrow ranges of LAI for model calibration. Models derived from low point density LiDAR in a mixedwood boreal environment seem to offer a reliable method of estimating LAI at high spatial resolution for decision makers in the forestry community. This method can be easily incorporated into simultaneous modeling efforts for forest inventory variables using LiDAR.
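The MLR step can be sketched as an ordinary least-squares fit of in-situ LAI on per-plot LiDAR metrics, reporting R² as the paper does; the data layout and predictor choice are illustrative, not the authors':

```python
import numpy as np

def fit_lai_model(metrics, lai):
    """Ordinary least-squares fit of LAI on LiDAR predictor metrics.

    metrics: (n, p) array of per-plot predictors (e.g., height percentiles,
    canopy density); lai: (n,) in-situ LAI from hemispherical photographs.
    Returns (coefficients including intercept, R^2).
    """
    X = np.column_stack([np.ones(len(lai)), metrics])
    coef, *_ = np.linalg.lstsq(X, lai, rcond=None)
    pred = X @ coef
    ss_res = np.sum((lai - pred) ** 2)
    ss_tot = np.sum((lai - np.mean(lai)) ** 2)
    return coef, 1.0 - ss_res / ss_tot
```

An R² around 0.53, as reported for the LiDAR metrics, would come straight out of the second return value once the model is fit to plot-level data.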
Xiong, Wanting; Faes, Luca; Ivanov, Plamen Ch.
2017-06-01
Entropy measures are widely applied to quantify the complexity of dynamical systems in diverse fields. However, the practical application of entropy methods is challenging, due to the variety of entropy measures and estimators and the complexity of real-world time series, including nonstationarities and long-range correlations (LRC). We conduct a systematic study on the performance, bias, and limitations of three basic measures (entropy, conditional entropy, information storage) and three traditionally used estimators (linear, kernel, nearest neighbor). We investigate the dependence of entropy measures on estimator- and process-specific parameters, and we show the effects of three types of nonstationarities due to artifacts (trends, spikes, local variance change) in simulations of stochastic autoregressive processes. We also analyze the impact of LRC on the theoretical and estimated values of entropy measures. Finally, we apply entropy methods on heart rate variability data from subjects in different physiological states and clinical conditions. We find that entropy measures can only differentiate changes of specific types in cardiac dynamics and that appropriate preprocessing is vital for correct estimation and interpretation. Demonstrating the limitations of entropy methods and shedding light on how to mitigate bias and provide correct interpretations of results, this work can serve as a comprehensive reference for the application of entropy methods and the evaluation of existing studies.
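As a concrete example of the estimator dependence discussed above, here is a minimal binned (histogram) estimator of entropy and information storage; the bin count is an arbitrary illustrative parameter, and binned estimators of this kind are known to be biased for short series:

```python
import numpy as np

def binned_entropy(x, bins=8):
    """Shannon entropy (nats) of a 1-D series via a histogram estimator."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log(p))

def information_storage(x, bins=8):
    """Binned estimate of S = I(x_t; x_{t-1}), the information storage
    at lag 1, computed as H(x_t) + H(x_{t-1}) - H(x_t, x_{t-1})."""
    joint, _, _ = np.histogram2d(x[1:], x[:-1], bins=bins)
    pj = joint[joint > 0] / joint.sum()
    h_joint = -np.sum(pj * np.log(pj))
    return binned_entropy(x[1:], bins) + binned_entropy(x[:-1], bins) - h_joint
```

White noise should yield near-zero storage while a strongly autocorrelated (long-memory-like) series yields a clearly positive value, illustrating what these measures are sensitive to before any preprocessing choices intervene.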
Appleby, Graham; Rodríguez, José; Altamimi, Zuheir
2016-12-01
Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors either or both in the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion; the location, size, and averaging techniques applied to regions of interest; and the number of video frames used for data processing.
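A typical VPPG pipeline reduces each frame's skin region to a mean green-channel value and reads the heart rate off the dominant spectral peak. A minimal sketch of that final step (not either of the two specific algorithms compared in the study; the band limits are illustrative physiological bounds):

```python
import numpy as np

def estimate_hr_bpm(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate from a per-frame mean green-channel trace.

    Subtracts the mean, then takes the frequency of the strongest FFT
    power within a physiological band (lo-hi Hz, i.e., 42-180 bpm)
    as the pulse rate, returned in beats per minute.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

The study's sensitivity findings map directly onto this sketch: motion and region-of-interest choices corrupt `green_means`, while the number of frames sets the frequency resolution of the spectral peak.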
Urbanová, Petra; Ross, Ann H; Jurda, Mikoláš; Nogueira, Maria-Ines
2014-09-01
In the framework of forensic anthropology, osteometric techniques are generally preferred over visual examinations due to a higher level of reproducibility and repeatability; qualities that are crucial within a legal context. The use of osteometric methods has been further reinforced by incorporating statistically-based algorithms and large reference samples in a variety of user-friendly software applications. However, the continued increase in admixture of human populations has made the use of osteometric methods for estimation of ancestry much more complex, which confounds one of the major requirements of ancestry assessment: intra-population homogeneity. The present paper tests the accuracy of ancestry and sex assessment using four identification software tools, specifically FORDISC 2.0, FORDISC 3.1.293, COLIPR 1.5.2 and 3D-ID 1.0. Software accuracy was tested in a sample of 174 documented human crania of Brazilian origin composed of different ancestral groups (i.e., European Brazilians, Afro-Brazilians, and Japanese Brazilians, and of admixed ancestry). The results show that regardless of the software algorithm employed and the composition of the reference database, all methods were able to allocate approximately 50% of Brazilian specimens to an appropriate major reference group. Of the three ancestral groups, Afro-Brazilians were especially prone to misclassification. Japanese Brazilians, by contrast, were shown to be relatively easily recognizable as being of Asian descent but at the same time showed a strong affinity towards Hispanic crania, in particular when the classification based on the FDB was carried out in FORDISC. For crania of admixed origin, all of the algorithms showed a considerably higher rate of inconsistency, with a tendency for misclassification into Asian and American Hispanic groups. Sex assessments revealed an overall modest to poor reliability (60-71% of correctly classified specimens) using the tested software programs with unbalanced individual
Andreas Berner; Tariq Saleem Alharbi; Eric Carlström; Amir Khorram-Manesh
2015-01-01
Objective: To develop a validated and generalized high-reliability-organization collaboration tool in order to conduct common assessments and share information about potential risks during mass gatherings. Methods: The Swedish resource and risk estimation guide was used as the foundation for the development of the generalized collaboration tool by three different expert groups, and was then analyzed. Inter-rater reliability was analyzed through simulated cases, yielding weighted and unweighted κ-statistics. Results: The results revealed a mean unweighted κ-value across the three cases of 0.37 and a mean accuracy of 62% for the tool. Conclusions: The collaboration tool, “STREET”, showed acceptable reliability and validity to be used as a foundation for high-reliability-organization collaboration in a simulated environment. However, the lack of reliability in one of the cases highlights the challenges of creating measurable values from simulated cases. A study of real events could provide higher reliability but would, on the other hand, require an already developed tool.
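The κ-statistics reported above can, in their unweighted form, be computed as a standard Cohen's kappa for two raters; this sketch is not the study's own analysis code:

```python
def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical judgements.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is raw agreement
    and p_exp is the agreement expected by chance from marginal rates.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    cats = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

A mean κ of 0.37, as the study reports, corresponds to only fair-to-moderate agreement on the usual interpretive scales, which is why the authors call the result acceptable rather than strong.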
Camici, Stefania; Ciabatta, Luca; Massari, Christian; Brocca, Luca
2017-04-01
, TMPA 3B42-RT, CMORPH, PERSIANN and a new soil moisture-derived rainfall dataset, obtained by applying the SM2RAIN algorithm (Brocca et al., 2014) to the ASCAT (Advanced SCATterometer) soil moisture product, are used in the analysis. The performances obtained with SRPs are compared with those obtained using ground data during the 6-year period from 2010 to 2015. In addition, the performance obtained by an integration of the above-mentioned SRPs is also investigated, to see whether merged rainfall observations are able to improve flood simulation. Preliminary analyses were also carried out using the IMERG early-run product of the GPM mission. The results highlight that SRPs should be used with caution for rainfall-runoff modelling in the Mediterranean region. Bias correction and model recalibration are necessary steps, even though they are not always sufficient to achieve satisfactory performances. Indeed, some of the products provide unreliable outcomes, mainly in smaller basins (<500 km2) that, however, represent the main target for flood modelling in the Mediterranean area. The best performances are obtained by integrating different SRPs, particularly by merging the TMPA 3B42-RT and SM2RAIN-ASCAT products. The promising results of the integrated product are expected to increase confidence in the use of SRPs in hydrological modelling, even in challenging areas such as the Mediterranean. REFERENCES: Brocca, L., Ciabatta, L., Massari, C., Moramarco, T., Hahn, S., Hasenauer, S., Kidd, R., Dorigo, W., Wagner, W., Levizzani, V. (2014). Soil as a natural rain gauge: estimating global rainfall from satellite soil moisture data. Journal of Geophysical Research, 119(9), 5128-5141, doi:10.1002/2014JD021489. Masseroni, D., Cislaghi, A., Camici, S., Massari, C., Brocca, L. (2017). A reliable rainfall-runoff model for flood forecasting: review and application to a semiurbanized watershed at high flood risk in Italy. Hydrology Research, in press, doi:10.2166/nh.2016.037.
A fast-reliable methodology to estimate the concentration of rutile or anatase phases of TiO2
Zanatta, A. R.
2017-07-01
Titanium-dioxide (TiO2) is a low-cost, chemically inert material that has become the basis of many modern applications ranging from, for example, cosmetics to photovoltaics. TiO2 exists in three different crystal phases - Rutile, Anatase and, less commonly, Brookite - and, in most cases, the presence or relative amount of these phases is essential in deciding the final application of the TiO2 and its related efficiency. Traditionally, X-ray diffraction has been the method of choice to study TiO2, providing both phase identification and the Rutile-to-Anatase ratio. Similar information can be obtained from Raman scattering spectroscopy, which, additionally, is versatile and involves rather simple instrumentation. Motivated by these aspects, this work considered various TiO2 Rutile+Anatase powder mixtures and their corresponding Raman spectra. Essentially, the method described here is based on the fact that the Rutile and Anatase crystal phases have distinctive phonon features, so that the composition of the TiO2 mixtures can be readily assessed from their Raman spectra. The experimental results clearly demonstrate the suitability of Raman spectroscopy for estimating the concentration of Rutile or Anatase in TiO2, and the method is expected to influence the study of TiO2-related thin films, interfaces, systems of reduced dimensions, and devices such as photocatalytic and solar cells.
Sasaki, Shunsuke; Araki, Tetsuya
2014-06-01
This article presents the informal recycling contributions made by scavengers in the area surrounding the Bantar Gebang final disposal site for municipal solid waste generated in Jakarta. Preliminary fieldwork was conducted through daily conversations with scavengers to identify recycling actors at the site, and quantitative field surveys were then conducted twice. The first survey (n = 504 households) covered 33% of all households in the area, and the second survey (n = 69 households) was conducted to quantify transactions of recyclables among scavengers. Mathematical equations were formulated, with assumptions, to estimate the possible range of recycling rates achieved by dump waste pickers. Slightly over 60% of all respondents were involved in informal recycling, and over 80% of heads of households were waste pickers, normally referred to as live-in or live-out waste pickers at the site. The largest percentage of their spouses were family workers, followed by waste pickers and housewives. Over 95% of all respondents' households had at least one waste picker or one small boss with a status coequal to that of a waste picker. The average weight of recyclables collected by waste pickers at the site was estimated to be approximately 100 kg day(-1) per household on a net-weight basis. The recycling rate achieved by all scavengers at the site was estimated to be in the range of 2.8-7.5% of all solid wastes transported to the site.
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
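The DSP2 decision rule, classify only when the posterior probability exceeds 0.95 and otherwise report sex as indeterminate, can be sketched with any off-the-shelf Linear Discriminant Analysis implementation. The measurements, group means, and training sample below are synthetic placeholders, not the DSP reference data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical training data: 4 pelvic measurements per individual, with
# sex-specific means (a synthetic stand-in for the DSP reference sample)
n = 300
X_f = rng.normal(loc=[55.0, 33.0, 75.0, 30.0], scale=3.0, size=(n, 4))
X_m = rng.normal(loc=[50.0, 38.0, 70.0, 35.0], scale=3.0, size=(n, 4))
X = np.vstack([X_f, X_m])
y = np.array(["F"] * n + ["M"] * n)

lda = LinearDiscriminantAnalysis().fit(X, y)

def classify_sex(measurements, threshold=0.95):
    """Return 'F'/'M' only when the posterior exceeds the threshold,
    otherwise 'indeterminate' (mirrors the DSP2 decision rule)."""
    post = lda.predict_proba(np.atleast_2d(measurements))[0]
    if post.max() >= threshold:
        return lda.classes_[post.argmax()]
    return "indeterminate"

print(classify_sex([56, 32, 76, 29]))          # clearly female-like values
print(classify_sex([52.5, 35.5, 72.5, 32.5]))  # near the decision boundary
```

The 0.95 threshold trades a lower classification rate for a much lower error rate, which is the trade-off the abstract describes.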
Robustness of Estimators of Long-range Dependence and Self-Similarity for Non-Gaussian Datasets.
Watkins, N. W.; Franzke, C. L. E.; Graves, T.; Gramacy, R. B.; Hughes, C.
2012-04-01
Evidence for long-range dependence and non-Gaussianity is ubiquitous in many natural systems like ecosystems, biological systems and climate. However, it is not always appreciated that both phenomena frequently occur together in natural systems, and that self-similarity of a system can result from the superposition of both phenomena. These features, which are common in complex systems, impact the attribution of trends and the occurrence and clustering of extremes. The risk assessment of systems possessing these properties will lead to different outcomes (e.g. return periods) than the more common assumption of independence of extremes. We discuss two paradigmatic models which can simultaneously account for long-range dependence and non-Gaussianity: Autoregressive Fractional Integrated Moving Average (ARFIMA) and Linear Fractional Stable Motion (LFSM). The statistical properties of estimators for long-range dependence and self-similarity are critically assessed as applied to these models. It is seen that the most popular estimators are not robust. In particular, they can be biased in the presence of important features of many natural systems like annual cycles, trends and multiplicative noise. [Related paper in press, Phil. Trans. Roy. Soc. A; preprint at arXiv:1101.5018]
Hislop, Jane; Law, James; Rush, Robert; Grainger, Andrew; Bulley, Cathy; Reilly, John J; Mercer, Tom
2014-11-01
The purpose of this study was to determine the number of hours and days of accelerometry data necessary to provide a reliable estimate of habitual physical activity in pre-school children. The impact of a weekend day on reliability estimates was also determined, and standard measurement days were defined for weekend days and weekdays. Accelerometry data were collected from 112 children (60 males, 52 females, mean (SD) age 3.7 (0.7) yr) over 7 d. The Spearman-Brown prophecy formula (S-B prophecy formula) was used to predict the number of days and hours of data required to achieve an intraclass correlation coefficient (ICC) of 0.7. The impact of including a weekend day was evaluated by comparing the reliability coefficient (r) for any 4 d of data with that for 4 d including one weekend day. Our observations indicate that 3 d of accelerometry monitoring, regardless of whether it includes a weekend day, for at least 7 h d(-1), offers sufficient reliability to characterise the total physical activity and sedentary behaviour of pre-school children. These findings offer an approach that addresses the underlying tension in epidemiologic surveillance studies between the need to maintain acceptable measurement rigour and the retention of a representatively meaningful sample size.
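The Spearman-Brown prophecy formula has a simple closed form and can be inverted to give the number of monitoring days needed for a target ICC. A minimal sketch (the single-day reliability value is hypothetical, not from this study):

```python
def spearman_brown(r_single, k):
    """Predicted reliability when a measurement is lengthened k-fold
    (Spearman-Brown prophecy formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

def days_needed(r_single, r_target=0.7):
    """Smallest multiple k (e.g. monitoring days) achieving the target
    reliability; obtained by inverting the formula above."""
    return (r_target * (1 - r_single)) / (r_single * (1 - r_target))

# Illustrative single-day reliability (hypothetical value)
r1 = 0.45
print(round(days_needed(r1), 1))        # days of monitoring required
print(round(spearman_brown(r1, 3), 2))  # reliability of a 3-day average
```

With a single-day reliability of 0.45, about three days of data already push the predicted ICC past the 0.7 criterion, consistent with the study's 3-day recommendation.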
AOYAMA, Masafumi
2001-01-01
Formative periods of rock glaciers distributed in the Yari-Hotaka Mountain Range, northern Japanese Alps, were estimated from weathering rind thickness. The results suggest that the age of rock glaciers in the Minamisawa-Kita cirque and the most headward of the Tenguppara cirque is between the age of Early Yarisawa Stage II moraines and the Late Yarisawa Stage II moraines, and the age of rock glaciers in the northern part of the Tenguppara cirque and Ohkiretto cirque is same or younger than t...
Estimation of reliability based on zero-failure data
韩明
2002-01-01
When the prior density function of the reliability R has the form π(R|α) ∝ R^α with 0 < α < 2, the hierarchical Bayes estimate of the product reliability is derived for the binomial distribution with zero-failure data.
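The zero-failure setup above can be sketched numerically. Under a prior π(R|α) ∝ R^α on (0, 1), the posterior after n failure-free trials is Beta(n+α+1, 1); a hierarchical estimate then averages over a hyper-prior on α. The uniform hyper-prior and point estimate below are illustrative assumptions, not necessarily the paper's exact construction:

```python
import numpy as np

def hierarchical_bayes_reliability(n, alpha_max=2.0, grid=2000):
    """Hierarchical Bayes point estimate of reliability R after n binomial
    trials with zero failures.  Prior: pi(R|a) ∝ R^a, i.e. Beta(a+1, 1);
    hyper-prior (an assumption here): a ~ Uniform(0, alpha_max)."""
    a = np.linspace(1e-6, alpha_max, grid)
    # Marginal likelihood of "n successes, 0 failures" given a:
    #   m(a) = ∫_0^1 R^n (a+1) R^a dR = (a + 1) / (n + a + 1)
    m = (a + 1) / (n + a + 1)
    w = m / m.sum()                      # discrete posterior weight of each a
    # Conditional posterior mean given a: E[R | data, a] = (n+a+1)/(n+a+2)
    cond_mean = (n + a + 1) / (n + a + 2)
    return float((cond_mean * w).sum())

# Example: 20 units tested to the censoring time with no failures
print(round(hierarchical_bayes_reliability(20), 4))
```

The estimate rises toward 1 as the number of failure-free trials grows, as one would expect.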
Scott G. Bauer; Matthew O. Anderson; James R. Hanneman
2005-10-01
The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situational awareness, force protection, and sensor placement. Public-service first responders, who routinely risk personal safety to assess and report a situation for emergency actions, will likely be the first to benefit from these new unmanned technologies. 'Packable' or 'portable' small-class UAVs will be particularly useful to the first responder. They require the least amount of training and no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically, on a small UAV a low-bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high-bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity-gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited in the amount of radio-frequency (RF) energy it can transmit to the users. Therefore, 'packable' and 'portable' UAVs will have limited useful operational ranges for first responders. This paper discusses the limitations of small-UAV wireless communications and presents an approach utilizing a dynamic, ground-based, real-time-tracking, high-gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV-deployed wireless assets.
Smith, Dianna M; Pearce, Jamie R; Harland, Kirk
2011-03-01
Models created to estimate neighbourhood level health outcomes and behaviours can be difficult to validate as prevalence is often unknown at the local level. This paper tests the reliability of a spatial microsimulation model, using a deterministic reweighting method, to predict smoking prevalence in small areas across New Zealand. The difference in the prevalence of smoking between those estimated by the model and those calculated from census data is less than 20% in 1745 out of 1760 areas. The accuracy of these results provides users with greater confidence to utilize similar approaches in countries where local-level smoking prevalence is unknown.
Alghali, R.; Kamaruddin, A. F.; Mokhtar, N.
2016-12-01
Introduction: The application of forensic odontology using teeth and bones has become one of the most commonly used methods to determine the age of unknown individuals. Objective: The aim of this study was to determine the reliability of the Malay formulas of the Demirjian and Cameriere methods in estimating a dental age that closely matches the chronological age of Malay children in the Kepala Batas region. Methodology: This is a retrospective cross-sectional study. 126 good-quality dental panoramic radiographs (DPT) of healthy Malay children aged 8-16 years (49 boys and 77 girls) were selected and measured. All radiographs were taken at the Dental Specialist Clinic, Advanced Medical and Dental Institute, Universiti Sains Malaysia. The measurements were carried out by a calibrated examiner using the new Malay formulas of both the Demirjian and Cameriere methods. Results: The intraclass correlation coefficient (ICC) between chronological age and the Demirjian and Cameriere estimates was calculated. The Demirjian method showed a better ICC (91.4%) than Cameriere (89.2%), both indicating a high association with good reliability; comparing the two, Demirjian has the better reliability. Conclusion: The results suggest that the modified Demirjian method is more reliable than the modified Cameriere method for the population of the Kepala Batas region.
Kostandyan, Erik; Ma, Ke
2012-01-01
This paper investigates the lifetime of high power IGBTs (insulated gate bipolar transistors) used in large wind turbine applications. Since the IGBTs are critical components in a wind turbine power converter, it is of great importance to assess their reliability in the design phase of the turbin...
Zhongwei Deng
2016-06-01
In the field of state-of-charge (SOC) estimation, the Kalman filter has been widely used for many years, although its performance strongly depends on the accuracy of the battery model as well as the noise covariance. The Kalman gain determines the confidence coefficient of the battery model by adjusting the weight of the open-circuit-voltage (OCV) correction, and has a strong correlation with the measurement noise covariance (R). In this paper, an online identification method is applied to acquire the real model parameters under different operating conditions. A criterion based on the OCV error is proposed to evaluate the reliability of the online parameters. Besides, the equivalent-circuit model produces an intrinsic model error that is dependent on the load current, and it can be observed that a high battery current or a large current change induces a large model error. Based on this prior knowledge, a fuzzy model is established to compensate for the model error by updating R. Combining the positive strategy (i.e., online identification) and the negative strategy (i.e., the fuzzy model), a more reliable and robust SOC estimation algorithm is proposed. The experimental results verify the proposed reliability criterion and SOC estimation method under various conditions for LiFePO4 batteries.
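The structure described, a coulomb-counting prediction, an OCV-based correction, and a measurement-noise covariance R inflated when the current is high or changes quickly, can be sketched as a scalar Kalman filter. The linear OCV curve, the constants, and the crude R-inflation rule below are arbitrary stand-ins for the paper's identified model and fuzzy logic:

```python
import numpy as np

def soc_kf(currents, voltages, ocv, soc0, capacity_as, dt=1.0,
           p0=1e-2, q=1e-7, r_base=1e-3):
    """Scalar Kalman filter for SOC: coulomb-counting prediction corrected
    by an OCV 'measurement'.  R is inflated for large or fast-changing
    currents -- a crude stand-in for the paper's fuzzy model."""
    soc, p, prev_i = soc0, p0, 0.0
    est = []
    for i, v in zip(currents, voltages):
        # Predict: coulomb counting (discharge current positive, in amps)
        soc -= i * dt / capacity_as
        p += q
        # Inflate measurement noise when |i| is large or changes quickly
        r = r_base * (1.0 + abs(i) + abs(i - prev_i))
        # Update against the OCV measurement model h(soc) = ocv(soc)
        c = (ocv(soc + 1e-4) - ocv(soc - 1e-4)) / 2e-4   # dOCV/dSOC
        k = p * c / (c * c * p + r)                      # Kalman gain
        soc += k * (v - ocv(soc))
        p *= (1.0 - k * c)
        prev_i = i
        est.append(soc)
    return np.array(est)

# Toy run: linear OCV, constant 1 A discharge, 1 Ah (3600 As) cell,
# and a deliberately wrong initial SOC guess of 0.9 (true SOC is 1.0)
ocv = lambda s: 3.0 + 0.7 * s
steps = 600
true_soc = 1.0 - np.arange(steps) / 3600.0
volts = ocv(true_soc)
est = soc_kf(np.ones(steps), volts, ocv, soc0=0.9, capacity_as=3600.0)
```

Even with the wrong initial SOC, the OCV correction pulls the estimate onto the true trajectory, which is the role the Kalman gain plays in the abstract's description.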
None
1980-12-01
The changes in generating capacity projected for 1980 to 1989 are summarized. Tabulated data provide summaries to the information on projected generating unit construction, retirements, and changes, in several different categories and groupings. The new generating units to be completed by the end of 1989 total 699, representing 259,490 megawatts. This total includes 10 wind power and one fuel cell installations totaling 48.5 MW to be completed by the end of 1989. There are 321 units totaling 13,222 MW to be retired. There are capacity changes due to upratings and deratings. Summary data are presented for: total requirement for electric energy generation for 1985; hydroelectric energy production for 1985; nuclear energy production for 1985; geothermal and other energy production for 1985; approximate non-fossil generation for 1985; range of fossil energy requirements for 1985; actual fossil energy sources 1974 to 1979; estimated range of fossil fuel requirements for 1985; coal capacity available in 1985; and computation of fuel use in 1985. Power plant capacity factors are presented. Extensive data on proposed generating capacity changes by individual units in the 9 Regional Electric Reliability Councils are presented.
Smati, A.; Younsi, K.; Zeraibi, N.; Zemmour, N. [Universite de Boumerdes, Faculte des Hydrocarbures, Dept. Transport et Equipement, Boumerdes (Algeria)
2003-07-01
LNG plants are characterized by their relatively small number worldwide, the diversity of the processes involved, and very high investment and operating costs. The fuel consumption of this type of facility (about 15%) may double in some cases, when the frequency of unplanned and voluntary shutdowns is high. Improving the reliability of the LNG chain as a whole will therefore lead to a substantial decrease in energy costs. For repairable systems, availability is most often used as the reliability indicator. From a reliability point of view, the LNG chain must be treated as a single complex system. However, the modeling of complex systems, from a reliability point of view or otherwise, is always difficult because of the large dimension of the state space. In this paper, a systemic approach is used to reduce the state space. Representing the subsystems by reliability diagrams permits an easier calculation of the probabilities associated with each state. A bottom-up technique then allows the reconstitution of the global reliability model of the chain. In an environment characterized by a scarcity of statistical data, a Bayesian estimation approach is used to determine the failure and repair rates of the various pieces of equipment composing the LNG chain. Some results concerning the Algerian LNG chains Hassi R'mel-Skikda are provided. (authors)
Estabrook, Ryne; Neale, Michael
2013-01-01
Factor score estimation is a controversial topic in psychometrics, and the estimation of factor scores from exploratory factor models has historically received a great deal of attention. However, both confirmatory factor models and the existence of missing data have generally been ignored in this debate. This article presents a simulation study…
Rae, Gordon
2008-11-01
Several authors have suggested that prior to conducting a confirmatory factor analysis it may be useful to group items into a smaller number of item 'parcels' or 'testlets'. The present paper shows mathematically that coefficient alpha based on these parcel scores will exceed alpha based on the entire set of items only if W, the ratio of the average covariance of items between parcels to the average covariance of items within parcels, is greater than unity. If W is less than unity, however, and errors of measurement are uncorrelated, then stratified alpha will be a better lower bound to the reliability of a measure than the other two coefficients. Stratified alpha is also equal to the true reliability of a test when items within parcels are essentially tau-equivalent, assuming that errors of measurement are uncorrelated.
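Coefficient alpha and stratified alpha are both simple functions of item variances and covariances, so the comparison above is easy to reproduce. A sketch on an invented score matrix whose within-parcel covariances exceed its between-parcel covariances (i.e., W < 1):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def stratified_alpha(items, parcels):
    """Stratified alpha; `parcels` is a list of column-index lists."""
    items = np.asarray(items, dtype=float)
    total_var = items.sum(axis=1).var(ddof=1)
    penalty = sum(
        items[:, p].sum(axis=1).var(ddof=1) * (1 - cronbach_alpha(items[:, p]))
        for p in parcels)
    return 1 - penalty / total_var

# Four items, two parcels; within-parcel covariances are deliberately
# larger than between-parcel covariances, so W < 1 here
x = np.array([[1, 1, 0, 1],
              [2, 2, 1, 1],
              [3, 3, 1, 2],
              [4, 4, 2, 2],
              [2, 3, 0, 1]])
print(cronbach_alpha(x), stratified_alpha(x, [[0, 1], [2, 3]]))
```

On this matrix stratified alpha exceeds ordinary alpha, illustrating the W < 1 case discussed in the abstract.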
Le Pape, Olivier; Cognez, Noriane
2016-01-01
Juvenile pleuronectiforms need specific feeding and sheltering conditions in order to survive the critical period following their metamorphosis. This dependence on restricted nursery grounds is the reason why movements are limited during this stage of flatfish development relative to the planktonic larval stage. However, controversy remains about the home range of young-of-the-year coastal and estuarine-dependent flatfishes: both a limited home range and a capacity for considerable movement have been reported. In the present meta-analysis, based on a review of the existing literature on pleuronectiforms, we gathered information about young-of-the-year flatfish movements in order to better understand the scale of their dependence on local habitat (i.e., whether, and at which scale, they move between different habitats of a nursery area). Two methods were retained to estimate the range of movements: the daily maximal displacement distance, and the minimal distance of segregation between distinct pools of flatfishes (contrasting patterns in natural tracers, growth, and fitness). We analysed patterns in daily movements and segregation distances with respect to habitat features and fish life history, accounting for discrepancies linked to the estimation methods. The scale of movement depends on both semi-passive tidal transport, linked to tidal amplitude, and the ability of individuals to move, which is related to the body length of the group-0 flatfishes, but it remains limited (a few hundred meters without cyclic tidal migration). These moderate movements lead to segregation among patches of juvenile fish at small scales along upstream-downstream estuarine gradients (5 km) and at moderate scales along the coastline (10 km). This meta-analytical approach demonstrates the strong dependence of young-of-the-year flatfishes on local nursery habitats.
Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.
1996-12-31
Risk-assessed structural positions in the pressurized fuselage of a transport-type aircraft designed with damage-tolerance principles are taken as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can also derive a proper non-periodic inspection schedule while estimating proper values for uncertain factors. As a result, the period in which fatigue cracks initiate was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that appear reasonable, and a proper inspection schedule using these values, despite expressing fatigue-crack growth in a very simple form and treating both factors as uncertain. The effectiveness of the present analysis method was thus verified. This study also discusses, from different viewpoints, the structural positions, the modeling of fatigue cracks initiating and growing at these positions, the failure conditions, the damage factors, and the inspection capability. This reliability analysis method is also expected to be effective for other structures such as offshore structures. 18 refs., 8 figs., 1 tab.
Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias
2013-02-01
Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line-transect distance sampling and triangulation. Line-transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km(2). Triangulation conducted from 12 listening posts during the same period yielded a similar density estimate of 5.0 clusters/km(2). Coefficients of variation of the cluster-density estimates were slightly higher for triangulation (0.24) than for line transects (0.17), implying somewhat lower precision of triangulation for detecting changes in cluster densities; nevertheless, the triangulation method may also be appropriate.
Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard
2014-01-01
Wave energy power plants are expected to become one of the major future contributors to sustainable electricity production. Optimal design of wave energy power plants involves modeling of physical, statistical, measurement and model uncertainties. This paper presents stochastic models for such assessments. The stochastic model for extreme value estimation covers annual extreme value distributions and the statistical uncertainty due to the limited amount of available data. Furthermore, updating based on newly available data is explained using a Bayesian approach. The statistical uncertainties are estimated with the Maximum-Likelihood method, and the extreme value estimation uses the peaks-over-threshold (POT) method. Two generic examples of reliability assessments are given for failure due to fatigue and extreme...
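The peaks-over-threshold step can be sketched with a maximum-likelihood Generalized Pareto fit to threshold exceedances. The synthetic wave-height series, the threshold choice, and the return-level definition below are generic illustrations, not the paper's models:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# Hypothetical 20 years of daily significant-wave-height observations (m)
waves = rng.gumbel(loc=2.0, scale=0.6, size=20 * 365)

threshold = np.quantile(waves, 0.95)           # POT threshold choice
exceed = waves[waves > threshold] - threshold  # excesses over the threshold

# Maximum-likelihood fit of the Generalized Pareto Distribution to excesses
shape, _, scale = genpareto.fit(exceed, floc=0.0)

def return_level(n_years, rate=len(exceed) / 20.0):
    """Level exceeded on average once every n_years, given the mean
    number of exceedances per year (rate)."""
    p = 1.0 / (n_years * rate)                 # per-exceedance probability
    return threshold + genpareto.ppf(1.0 - p, shape, loc=0.0, scale=scale)

print(round(return_level(50), 2))              # 50-year return level (m)
```

Such return levels, together with their statistical uncertainty, feed directly into the reliability assessment for failure under extreme loads.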
Andreas Berner; Tariq Saleem Alharbi; Eric Carlström; Amir Khorram-Manesh
2015-01-01
Objective: To develop a validated and generalized collaborative tool to be utilized by high-reliability organizations in order to conduct common resource assessment before major events and mass gatherings. Methods: The Swedish resource and risk estimation guide was used as the foundation for the development of the generalized collaborative tool by three different expert groups, and then analyzed. Inter-rater reliability was assessed through simulated cases using weighted and unweighted κ-statistics. Results: The results revealed a mean unweighted κ-value of 0.44 across the three cases and a mean accuracy of 61% for the tool. Conclusions: This study demonstrated better collaboration ability and more accurate resource assessment, with acceptable reliability and validity, to be used as a foundation for resource assessment before major events/mass gatherings in a simulated environment. However, the results also indicate the challenges of creating measurable values from simulated cases. A study on real events could provide higher reliability but would, on the other hand, require an already developed tool.
Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'olio, Raffaele; DE LA Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A
2010-04-01
Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation.
Aurich, Nathassia K; Alves Filho, José O; Marques da Silva, Ana M; Franco, Alexandre R
2015-01-01
With resting-state functional MRI (rs-fMRI) there is a variety of post-processing methods that can be used to quantify the human brain connectome. However, there is also a choice of which preprocessing steps will be used prior to calculating the functional connectivity of the brain. In this manuscript, we tested seven different preprocessing schemes and assessed the reliability between, and reproducibility within, the various strategies by means of graph-theoretical measures. The different preprocessing schemes were tested on a publicly available dataset, which includes rs-fMRI data of healthy controls. The brain was parcellated into 190 nodes and four graph-theoretical (GT) measures were calculated: global efficiency (GEFF), characteristic path length (CPL), average clustering coefficient (ACC), and average local efficiency (ALE). Our findings indicate that results can differ significantly depending on which preprocessing steps are selected. We also found dependence between motion and GT measurements in most preprocessing strategies. We conclude that using censoring based on outliers within the functional time series as a preprocessing step increases the reliability of GT measurements and reduces their dependency on head motion.
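The four GT measures (GEFF, CPL, ACC, ALE) map directly onto standard graph-library functions. A sketch using networkx, with a synthetic small-world graph as a stand-in for a 190-node parcellated brain network:

```python
import networkx as nx

def gt_measures(g):
    """The four graph-theoretical measures used in the study, computed
    for a connected, undirected graph."""
    return {
        "GEFF": nx.global_efficiency(g),            # global efficiency
        "CPL": nx.average_shortest_path_length(g),  # characteristic path length
        "ACC": nx.average_clustering(g),            # average clustering coeff.
        "ALE": nx.local_efficiency(g),              # average local efficiency
    }

# Synthetic stand-in for a 190-node brain graph (not real connectivity data)
g = nx.connected_watts_strogatz_graph(n=190, k=10, p=0.1, seed=0)
m = gt_measures(g)
print({k: round(v, 3) for k, v in m.items()})
```

In a real pipeline, `g` would be obtained by thresholding the 190×190 functional-connectivity matrix produced by each preprocessing scheme, and the measures would then be compared across schemes.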
Blossfeld, Mathis
2015-01-01
In 2007, the Global Geodetic Observing System (GGOS) was established as a full component of the International Association of Geodesy (IAG). One primary goal of GGOS is the integration of geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. Thereby, GGOS builds on the data and services of the IAG. Besides the combination of different geodetic techniques, the common estimation of station coordinates (TRF), Earth Orientation Parameters (EOP) and coefficients of the Earth's gravitational field (Stokes coefficients) is also necessary in order to reach this goal. However, the combination of all geometric and gravimetric observation techniques is not yet fully realized. A major step towards the GGOS idea of parameter integration would be an understanding of the existing correlations between the above-mentioned fundamental geodetic parameter groups. This topic is the major objective of this thesis. One possibility to study the interactions is the use of Satellite Laser Ranging (SLR) in an inter-technique combination with Global Navigation Satellite Systems (GNSS) and Very Long Baseline Interferometry (VLBI), or in an intra-technique combination of multiple SLR-tracked satellites. SLR plays a key role in this thesis since it is the only technique sensitive to all parameter groups, and it allows an integrated parameter estimation with very high accuracy. The present work is based on five first-author publications, supplemented by four co-author publications. In this framework, an extensive discussion of a refined global Terrestrial Reference Frame (TRF) estimation procedure, the estimation of so-called Epoch Reference Frames (ERFs), is presented for the first time. In contrast to the conventional linear station-motion model, the ERFs provide frequently estimated station coordinates and Earth Orientation Parameters (EOP), which allow non-modeled, non-linear station motions to be approximated very accurately
Reliability Estimation for Rolling Bearings Based on Virtual Information
楼洪梁; 陈磊; 李兴林; 但召江; 陈炳顺
2015-01-01
In order to improve the credibility and stability of reliability estimation at the censoring time points of bearings when zero-failure data appear in censored rolling-bearing tests, a reliability calculation method is proposed that introduces the virtual failure information of the zero-failure sample at the previous censoring time point into the reliability estimation at each censoring time point. Example analysis shows that, under different hyper-parameter values, the characteristic-life and shape-parameter estimates obtained with this method fluctuate least, and the method has better stability than other methods.
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The chosen window function type and size, as well as the values of the inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates the information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
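The capacity estimate the abstract refers to is the standard Shannon integral over the SNR spectrum, R = ∫ log2(1 + SNR(f)) df. A minimal sketch (the SNR values below are invented for illustration; the paper's actual bias/random-error balancing algorithm is not reproduced here):

```python
import math

def shannon_rate(freqs, snr):
    """Shannon information rate (bits/s) from an SNR spectrum:
    R = integral of log2(1 + SNR(f)) df, via the trapezoidal rule."""
    total = 0.0
    for (f0, s0), (f1, s1) in zip(zip(freqs, snr), zip(freqs[1:], snr[1:])):
        g0 = math.log2(1.0 + s0)
        g1 = math.log2(1.0 + s1)
        total += 0.5 * (g0 + g1) * (f1 - f0)
    return total

freqs = [0, 50, 100, 150, 200]      # Hz (hypothetical band edges)
snr = [10.0, 7.0, 3.0, 1.0, 0.2]    # dimensionless SNR at each frequency
print(round(shannon_rate(freqs, snr), 1))
```

Window choice enters through the spectral estimates that produce `snr`: shorter windows lower the bias error but raise the random error, which is the trade-off the proposed algorithm exploits.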
Liu, Yan; Wu, Amery D.; Zumbo, Bruno D.
2010-01-01
In a recent Monte Carlo simulation study, Liu and Zumbo showed that outliers can severely inflate the estimates of Cronbach's coefficient alpha for continuous item response data--visual analogue response format. Little, however, is known about the effect of outliers for ordinal item response data--also commonly referred to as Likert, Likert-type,…
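The outlier sensitivity of coefficient alpha can be reproduced directly from its definition. This sketch simulates visual-analogue-style item scores (all numbers invented; this is not the authors' simulation design) and shows alpha rising after a single aberrant respondent is appended:

```python
import random
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from a list of k item-score lists over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

random.seed(1)
# Simulated visual-analogue responses: 5 roughly parallel items, 200 respondents.
trait = [random.gauss(50, 10) for _ in range(200)]
items = [[t + random.gauss(0, 8) for t in trait] for _ in range(5)]
alpha_clean = cronbach_alpha(items)

# One aberrant respondent scoring 100 on every item contributes identically to
# all items, adding spurious common variance and inflating alpha.
for it in items:
    it.append(100.0)
alpha_outlier = cronbach_alpha(items)
print(round(alpha_clean, 3), round(alpha_outlier, 3))
```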
Wang, Cong-Zhi; Li, Tian-Jie; Zheng, Yong-Ping
2014-01-01
Elderly people often suffer from sarcopenia in their lower extremities, which gives rise to an increased susceptibility to falls. Comparing the mechanical properties of the knee extensors/flexors in elderly and young subjects is helpful in understanding the underlying mechanisms of the muscle aging process. However, although the stiffness of skeletal muscle has been proved to be positively correlated to its non-fatiguing contraction intensity by some existing methods, this conclusion has not been verified above 50% maximum voluntary contraction (MVC) due to the limitation of their measurement range. In this study, a vibro-ultrasound system was set up to achieve a considerably larger measurement range for muscle stiffness estimation. Its feasibility was verified on self-made silicone phantoms by comparison with the mechanical indentation method. The system was then used to assess the stiffness of the vastus intermedius (VI), one of the knee extensors, in 10 healthy elderly female subjects (56.7±4.9 yr) and 10 healthy young female subjects (27.6±5.0 yr). The VI stiffness in its action direction was confirmed to be positively correlated to the % MVC level (R2 = 0.999) over the entire range of isometric contraction, i.e. from 0% MVC (relaxed state) to 100% MVC. Furthermore, it was shown that there was no significant difference between the mean VI shear modulus of the elderly and young subjects in a relaxed state (p>0.1). However, when performing step isometric contraction, the VI stiffness of young female subjects was found to be larger than that of elderly participants (p<0.001), especially at the relatively higher contraction levels. The results expanded our knowledge of the mechanical properties of the elderly's skeletal muscle and its relationship with the intensity of active contraction. Furthermore, the vibro-ultrasound system has the potential to become a powerful tool for investigating the elderly's muscle diseases. PMID:24991890
Cong-Zhi Wang
Full Text Available Elderly people often suffer from sarcopenia in their lower extremities, which gives rise to an increased susceptibility to falls. Comparing the mechanical properties of the knee extensors/flexors in elderly and young subjects is helpful in understanding the underlying mechanisms of the muscle aging process. However, although the stiffness of skeletal muscle has been proved to be positively correlated to its non-fatiguing contraction intensity by some existing methods, this conclusion has not been verified above 50% maximum voluntary contraction (MVC) due to the limitation of their measurement range. In this study, a vibro-ultrasound system was set up to achieve a considerably larger measurement range for muscle stiffness estimation. Its feasibility was verified on self-made silicone phantoms by comparison with the mechanical indentation method. The system was then used to assess the stiffness of the vastus intermedius (VI), one of the knee extensors, in 10 healthy elderly female subjects (56.7 ± 4.9 yr) and 10 healthy young female subjects (27.6 ± 5.0 yr). The VI stiffness in its action direction was confirmed to be positively correlated to the % MVC level (R2 = 0.999) over the entire range of isometric contraction, i.e. from 0% MVC (relaxed state) to 100% MVC. Furthermore, it was shown that there was no significant difference between the mean VI shear modulus of the elderly and young subjects in a relaxed state (p > 0.1). However, when performing step isometric contraction, the VI stiffness of young female subjects was found to be larger than that of elderly participants (p < 0.001), especially at the relatively higher contraction levels. The results expanded our knowledge of the mechanical properties of the elderly's skeletal muscle and its relationship with the intensity of active contraction. Furthermore, the vibro-ultrasound system has the potential to become a powerful tool for investigating the elderly's muscle diseases.
J. Piątkowski
2012-12-01
Full Text Available Purpose: The main purpose of the study was to determine a methodology for estimation of operational reliability based on the statistical results of abrasive wear testing. Design/methodology/approach: For the research, a traditional tribological system, i.e. a friction pair of the AlSi17CuNiMg silumin in contact with spheroidal graphite cast iron of EN-GJN-200 grade, was chosen. Conditions of dry friction were assumed. This system was chosen based on the mechanical cooperation between the cylinder (silumin) and piston rings (spheroidal graphite cast iron) in conventional internal combustion piston engines with spark ignition. Findings: Using material parameters of the cylinder and piston rings, the nominal losses qualifying the cylinder for repair and the maximum weight losses that can be tolerated were determined. Based on the theoretical number of engine revolutions to repair and the stress acting on the cylinder bearing surface, the maximum distance that a motor vehicle can travel before seizure of the cylinder occurs was calculated. These results were the basis for a statistical analysis carried out with the Weibull modulus, the end result of which was the estimation of material reliability (the survival probability of the tribological system) and the determination of a pre-operation warranty period of the tribological system. Research limitations/implications: The analysis of the Weibull distribution modulus to estimate the reliability of the tribological cylinder-ring system enabled the determination of an approximate theoretical failure-free running time of the combustion engine. Originality/value: The results are valuable statistical data, and the methodology proposed in this paper can be used to determine a theoretical lifetime of the combustion engine.
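The survival-probability estimate described above rests on fitting a Weibull model to wear-life data. A sketch using median-rank regression, a common textbook fitting method (the distances-to-seizure are invented, and the paper's actual procedure may differ):

```python
import math

def weibull_fit(failure_times):
    """Median-rank regression: fit ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)."""
    ts = sorted(failure_times)
    n = len(ts)
    xs, ys = [], []
    for i, t in enumerate(ts, start=1):
        f = (i - 0.3) / (n + 0.4)          # Bernard's median-rank estimate
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))  # shape (Weibull modulus)
    eta = math.exp(mx - my / beta)             # scale (characteristic life)
    return beta, eta

def reliability(t, beta, eta):
    """Survival probability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical distances-to-seizure (arbitrary units) for ten friction pairs.
times = [41, 55, 63, 72, 80, 87, 95, 104, 118, 140]
beta, eta = weibull_fit(times)
print(round(beta, 2), round(eta, 1), round(reliability(50, beta, eta), 3))
```

The reciprocal use is the same as in the paper: given a warranty distance, `reliability` returns the survival probability of the tribological system at that distance.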
Shojaei Saadi, Habib A; Vigneault, Christian; Sargolzaei, Mehdi; Gagné, Dominic; Fournier, Éric; de Montera, Béatrice; Chesnais, Jacques; Blondin, Patrick; Robert, Claude
2014-10-12
Genome-wide profiling of single-nucleotide polymorphisms is receiving increasing attention as a method of pre-implantation genetic diagnosis in humans and of commercial genotyping of pre-transfer embryos in cattle. However, the very small quantity of genomic DNA in biopsy material from early embryos poses daunting technical challenges. A reliable whole-genome amplification (WGA) procedure would greatly facilitate the procedure. Several PCR-based and non-PCR based WGA technologies, namely multiple displacement amplification, quasi-random primed library synthesis followed by PCR, ligation-mediated PCR, and single-primer isothermal amplification, were tested in combination with different DNA extraction protocols for various quantities of genomic DNA input. The efficiency of each method was evaluated by comparing the genotypes obtained from 15 cultured cells (representative of an embryonic biopsy) to unamplified reference gDNA. The gDNA input, gDNA extraction method and amplification technology were all found to be critical for successful genome-wide genotyping. The selected WGA platform was then tested on embryo biopsies (n = 226), comparing their results to those of biopsies collected after birth. Although WGA inevitably leads to a random loss of information and to the introduction of erroneous genotypes, following genomic imputation the resulting genetic indices of both sources of DNA were highly correlated (r = 0.99). Whole-genome amplification can thus provide DNA in sufficient quantities for successful genome-wide genotyping starting from an early embryo biopsy. However, imputation from parental and population genotypes is a requirement for completing and correcting genotypic data. Judicious selection of the WGA platform, careful handling of the samples and genomic imputation together make it possible to perform extremely reliable genomic evaluations for pre-transfer embryos.
Ultrasonic infrared composite range system based on Bayesian estimation
徐兵; 姜艳青; 周志杰; 张玉玲; 张邦成
2013-01-01
To address the poor reliability of single-sensor measurements in complex environments and the ranging blind zone that multiple sensors of the same type cannot compensate, a composite distance-measurement method using different types of sensors, based on Bayesian estimation, is proposed. The method combines the advantages of ultrasonic and infrared sensors, using the infrared sensors to compensate for the ranging blind zone of the ultrasonic sensors, thereby solving the blind-zone problem of a single sensor type. A confidence-distance matrix is built dynamically from the data acquired simultaneously by multiple sensors of the same type; a support-degree relationship matrix, expressed by an elliptic curve, determines the valid values among the sensors' measurements, and these valid values are fused by Bayesian estimation to further improve the ranging accuracy. Simulations and experiments on the developed mobile robot with ultrasonic-infrared composite ranging show that the Bayesian estimation method improves ranging accuracy while preserving real-time performance, with an error of ±1 mm, meeting the design requirements and providing a reference for mobile-robot research.
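The Bayesian fusion step can be illustrated in its simplest form: two independent Gaussian range measurements combine into a posterior whose mean is the precision-weighted average and whose variance is smaller than either input. The readings and variances below are invented:

```python
def fuse(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian measurements.
    Posterior mean = precision-weighted average; posterior variance
    is smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Hypothetical readings: ultrasonic 1003 mm (variance 4 mm^2),
# infrared 998 mm (variance 9 mm^2).
mu, var = fuse(1003.0, 4.0, 998.0, 9.0)
print(round(mu, 1), round(var, 2))
```

The fused estimate leans toward the more precise ultrasonic reading; the paper's full method additionally screens out invalid readings via the confidence-distance matrix before fusing.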
El-Minshawy Osama
2010-01-01
Full Text Available Glomerular Filtration Rate (GFR) is considered the best overall index of renal function currently used. Measurement of the 24-hour urine/plasma creatinine ratio (UV/P) is usually used for estimation of GFR. However, little is known about its accuracy in different stages of Chronic Kidney Disease (CKD). The aim is to evaluate the performance of UV/P in the classification of CKD by comparing it with isotopic GFR (iGFR). 136 patients with CKD were enrolled in this study; 80 (59%) were males and 48 (35%) were diabetics. Mean age was 46 ± 13 years. Creatinine clearance (CrCl) estimated by UV/P and by Cockcroft-Gault (CG) was done for all patients; iGFR was the reference value. The accuracy of UV/P was 10%, 31% and 49% within ±10%, ±30% and ±50% error, respectively (r2 = 0.44). CG gave better performance even when the analysis was restricted to diabetics only; the accuracy of CG was 19%, 47% and 72% within ±10%, ±30% and ±50% errors, respectively (r2 = 0.63). Both equations gave poor classification of CKD. In conclusion, UV/P has poor accuracy in the estimation of GFR, and the accuracy worsens as kidney disease becomes more severe. We conclude that the 24-hour CrCl is not a good substitute for measurement of GFR in patients with CKD.
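The Cockcroft-Gault estimate and the accuracy-within-error-band metric used above can be sketched as follows. The CG formula itself is standard; the patient values and the estimate/reference lists are invented for illustration:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance (mL/min):
    CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for females."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def pct_within(estimates, references, tol):
    """Percentage of estimates falling within +/- tol (fraction) of the reference."""
    hits = sum(abs(e - r) <= tol * r for e, r in zip(estimates, references))
    return 100.0 * hits / len(estimates)

# Hypothetical patient: 46 years, 70 kg, serum creatinine 1.4 mg/dL, male.
print(round(cockcroft_gault(46, 70, 1.4, False), 1))

# Hypothetical estimated vs. reference (isotopic) GFR values, mL/min.
est = [52, 40, 75, 18, 33]
ref = [60, 40, 50, 22, 30]
print(pct_within(est, ref, 0.30))   # accuracy within +/-30% error
```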
Michael O. Harris-Love
2016-02-01
Full Text Available Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R2). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). Mean differences between the echogenicity estimates obtained with the RMT and FHT methods was .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection
Reliability of Circumplex Axes
Micha Strack
2013-06-01
Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples), grouped by self-report, other-report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%, respectively. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.
Hinrichs, Ruth; Frank, Paulo Ricardo Ost; Vasconcellos, M A Z
2017-03-01
Modifications of cotton and polyester textiles due to shots fired at short range were analyzed with a variable pressure scanning electron microscope (VP-SEM). Different mechanisms of fiber rupture as a function of fiber type and shooting distance were detected, namely fusing, melting, scorching, and mechanical breakage. To estimate the firing distance, the approximately exponential decay of GSR coverage as a function of radial distance from the entrance hole was determined from image analysis, instead of relying on chemical analysis with EDX, which is problematic in the VP-SEM. A set of backscattered electron images, with sufficient magnification to discriminate micrometer-wide GSR particles, was acquired at different radial distances from the entrance hole. The atomic number contrast between the GSR particles and the organic fibers made it possible to find a robust procedure to segment the micrographs into binary images, in which the white pixel count was attributed to GSR coverage. The white pixel count decreased following an exponential decay, and it was found that the reciprocal of the decay constant, obtained from least-squares fitting of the coverage data, showed a linear dependence on the shooting distance.
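The estimation procedure reduces to fitting an exponential decay, coverage ≈ A·exp(-λr), to coverage-versus-radius data and reporting 1/λ, which the study found to vary linearly with firing distance. A sketch with invented coverage values:

```python
import math

def fit_decay(radii, coverage):
    """Fit coverage = A * exp(-lam * r) by least-squares regression of
    ln(coverage) on r; returns the decay constant lam."""
    n = len(radii)
    mx = sum(radii) / n
    ys = [math.log(c) for c in coverage]
    my = sum(ys) / n
    slope = (sum((r - mx) * (y - my) for r, y in zip(radii, ys))
             / sum((r - mx) ** 2 for r in radii))
    return -slope

# Hypothetical white-pixel (GSR) coverage fractions at radial distances
# from the entrance hole.
radii = [2, 4, 6, 8, 10]                     # mm
coverage = [0.40, 0.22, 0.12, 0.066, 0.036]  # white-pixel fraction
lam = fit_decay(radii, coverage)
print(round(1.0 / lam, 2))  # reciprocal decay constant, linear in firing distance
```

Calibrating 1/λ against shots fired at known distances would then let a new sample's firing distance be read off the calibration line.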
N. Poornima
2013-01-01
Full Text Available This work projects photoluminescence (PL) as an alternative technique to estimate the order of resistivity of zinc oxide (ZnO) thin films. ZnO thin films, deposited using chemical spray pyrolysis (CSP) by varying deposition parameters such as solvent, spray rate and precursor pH, have been used for this study. Variation in the deposition conditions has a tremendous impact on the luminescence properties as well as on the resistivity. Two emissions could be recorded for all samples: the near band edge emission (NBE) at 380 nm and the deep level emission (DLE) at ~500 nm, which are competing in nature. It is observed that the ratio of the DLE to NBE intensities (DLE/NBE) can be reduced by controlling oxygen incorporation in the sample. Electrical measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variation of the DLE/NBE ratio and of the resistivity for samples prepared under different deposition conditions is similar in nature. The DLE/NBE ratio was always lower than the resistivity by an order of magnitude for all samples. Thus, from PL measurements alone, the order of resistivity of the samples can be estimated.
Aditya Shekhar
2016-01-01
Full Text Available The economic viability of on-road wireless charging of electric vehicles (EVs) strongly depends on the choice of the inductive power transfer (IPT) system configuration (static or dynamic charging), the charging power level and the percentage of road coverage of dynamic charging. In this paper, a case study is carried out to determine the expected investment costs involved in installing the on-road charging infrastructure for an electric bus fleet. Firstly, a generic methodology is described to determine the driving range of any EV (including electric buses) with any gross mass and frontal area. A dynamic power consumption model is developed for the EV, taking into account the rolling friction, acceleration, deceleration, aerodynamic drag, regenerative braking and Li-ion battery behavior. Based on the simulation results, the linear dependence of the battery state of charge (SoC) on the distance traveled is proven. Further, the impact of different IPT system parameters on driving range is incorporated. Economic implications of a combination of different IPT system parameters are explored for achieving the required driving range of 400 km, and the cost-optimized solution is presented for the case study of an electric bus fleet. It is shown that the choice of charging power level and road coverage are interrelated in the economic context. The economic viability of reducing the capacity of the on-board battery as a trade-off between higher transport efficiency and larger on-road charging infrastructure is presented. Finally, important considerations, like the number of average running buses, scheduled stoppage time and on-board battery size, that make on-road charging an attractive option are explored. The cost break-up of various system components of the on-road charging scheme is estimated, and the final project cost and parameters are summarized. The specific cost of the wireless on-road charging system is found to be more expensive than the conventional
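The power-consumption model described above can be illustrated in its steady-speed form, keeping only rolling resistance and aerodynamic drag (the paper's dynamic model additionally covers acceleration, regenerative braking and battery behavior; every parameter below is a hypothetical placeholder, not taken from the paper):

```python
def traction_power(mass_kg, v_mps, crr=0.008, cd=0.7, area_m2=8.0, rho=1.2):
    """Steady-speed traction power (W): rolling resistance + aerodynamic drag.
    P = (m*g*Crr + 0.5*rho*Cd*A*v^2) * v"""
    g = 9.81
    f_roll = mass_kg * g * crr                  # rolling-resistance force (N)
    f_aero = 0.5 * rho * cd * area_m2 * v_mps**2  # aerodynamic drag force (N)
    return (f_roll + f_aero) * v_mps

def driving_range_km(battery_kwh, mass_kg, v_mps, drivetrain_eff=0.85):
    """Range at constant speed; linear battery-SoC-vs-distance assumption."""
    p = traction_power(mass_kg, v_mps) / drivetrain_eff  # W drawn at the battery
    hours = battery_kwh * 1000.0 / p
    return v_mps * 3.6 * hours

# Hypothetical 18-tonne electric bus at 50 km/h with a 250 kWh pack.
rng = driving_range_km(250.0, 18000.0, 50 / 3.6)
print(round(rng, 1))
```

At constant power draw, energy consumed is proportional to distance, which is the linear SoC-vs-distance dependence the abstract reports.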
Harris-Love, Michael O; Seamon, Bryant A; Teixeira, Carla; Ismail, Catheeja
2016-01-01
Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R(2)). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97-.99, p < .001) for both Photoshop and ImageJ. Uniform coefficients of determination (R(2) = .96-.99) indicate that both Photoshop and ImageJ are suitable for the post-acquisition image analysis of tissue echogenicity in older adults.
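The reported reliability statistic can be computed from a subjects-by-raters table of grayscale means via a two-way ANOVA decomposition; this sketch implements ICC(2,1) (two-way random effects, absolute agreement, single rater) with invented scores:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data: n subjects x k raters (list of rows)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(col) / n for col in zip(*data)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ROI grayscale means for 6 scans rated by 2 examiners.
scores = [[60.1, 61.0], [72.4, 71.8], [55.0, 56.2],
          [80.3, 79.9], [64.7, 66.0], [70.2, 70.9]]
print(round(icc_2_1(scores), 3))
```

Small rater disagreements relative to the between-subject spread yield an ICC near 1, matching the "excellent reliability" pattern reported above.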
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-06-01
We consider the Galerkin boundary element method (BEM) for weakly singular integral equations of the first kind in 2D. We analyze a residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.
Mosbrucker, Adam; Spicer, Kurt R.; Christianson, Tami; Uhrich, Mark A.
2015-01-01
data range among sensors. Of greatest interest to many programs is a hysteresis in the relationship between turbidity and SSC, attributed to temporal variation of particle size distribution (Landers and Sturm, 2013; Uhrich et al., 2014). This phenomenon causes increased uncertainty in regression-estimated values of SSC, due to changes in nephelometric reflectance off the varying grain sizes in suspension (Uhrich et al., 2014). Here, we assess the feasibility and application of close-range remote sensing to quantify SSC and particle size distribution of a disturbed, and highly-turbid, river system. We use a consumer-grade digital camera to acquire imagery of the river surface and a depth-integrating sampler to collect concurrent suspended-sediment samples. We then develop two empirical linear regression models to relate image spectral information to concentrations of fine sediment (clay to silt) and total suspended sediment. Before presenting our regression model development, we briefly summarize each data-acquisition method.
DeVries, R. J.; Hann, D. A.; Schramm, H.L.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated that water temperature (T) and water depth (D) were significant predictors of capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and in deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any effort to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
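Taking the reported coefficients at face value, the logistic model converts temperature and depth into a capture probability via the inverse-logit transform (the example inputs are invented):

```python
import math

def capture_probability(temp_c, depth_m):
    """Logistic capture-probability model from the abstract:
    logit(p) = -1.75 - 0.06*T + 0.10*D, so p = 1 / (1 + exp(-logit))."""
    logit = -1.75 - 0.06 * temp_c + 0.10 * depth_m
    return 1.0 / (1.0 + math.exp(-logit))

# Colder, deeper water raises the predicted probability of capture.
print(round(capture_probability(10, 15), 3))   # cold, deep
print(round(capture_probability(25, 5), 3))    # warm, shallow
```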
Al-Musawi Safaa Ismael
2016-01-01
Full Text Available The first task is the calculation of the ageing reached, based on the loading history, according to the International Electrotechnical Commission standard algorithm. In order to verify the obtained results, measurements of the polymerization index were made on 28 paper samples taken directly from the low voltage terminals (winding ends and bus connections) of the transformer under test, rated 380 MVA, 2×15.75 kV/420 kV. The complete procedure of locating and taking off the paper samples is described, thereby providing a manner of how this should be done, determined by the specific conditions of the transformer under test. Furthermore, the determination of the limiting viscosity and the use of its relationship with the polymerization index are explained. A comparison is made with the results of liquid chromatography of the oil. The results of particle sort and size analysis are shown. Finally, an estimation of the transformer life remainder is made, which is of paramount importance when defining the steps that have to be made either in the revitalization process or in transformer replacement planning.
Gogoi, M.M.; Krishna Moorthy, K. [SPL, VSSC, Trivandrum (India); Bhuyan, P.K. [Dibrugarh Univ. (India). Dept. of Physics
2008-07-01
Spectral aerosol optical depth (AOD) at ten discrete channels in the visible and near-IR regions was estimated over Dibrugarh, located in the northeastern part of India, using a ground-based multi-wavelength solar radiometer (MWR) from October 2001 to February 2006. The observations reveal seasonal variations with low values of AOD in the retreating monsoon and high values in the pre-monsoon season. Generally the AODs are high at shorter wavelengths and low at longer wavelengths. AOD spectra are relatively steep in winter compared to the monsoon period. The average value of AOD at 500 nm lies between 0.44±0.07 and 0.56±0.07 during the pre-monsoon season and between 0.19±0.02 and 0.22±0.02 during the retreating monsoon. Comparison of MWR observations over Dibrugarh with satellite (MODIS) observations indicates a good correspondence between ground-based and satellite-derived AODs. The synoptic wind pattern obtained from the National Centre for Medium Range Weather Forecasting (NCMRWF), India, and back-trajectory analysis using the NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT4) model indicate that the maximum contribution to aerosol extinction could be due to transport of pollutants from the industrialized and urban regions of India and large amounts of desert and mineral aerosols from the west Asian and Indian deserts. Equal contributions from the Bay of Bengal (BoB), in addition to those from the Indian landmass and the west Asian desert, lead to a further increase of AOD over the region of interest in the pre-monsoon season. (orig.)
M. M. Gogoi
2008-06-01
Full Text Available Spectral aerosol optical depth (AOD) at ten discrete channels in the visible and near-IR regions was estimated over Dibrugarh, located in the northeastern part of India, using a ground-based multi-wavelength solar radiometer (MWR) from October 2001 to February 2006. The observations reveal seasonal variations with low values of AOD in the retreating monsoon and high values in the pre-monsoon season. Generally the AODs are high at shorter wavelengths and low at longer wavelengths. AOD spectra are relatively steep in winter compared to the monsoon period. The average value of AOD at 500 nm lies between 0.44±0.07 and 0.56±0.07 during the pre-monsoon season and between 0.19±0.02 and 0.22±0.02 during the retreating monsoon. Comparison of MWR observations over Dibrugarh with satellite (MODIS) observations indicates a good correspondence between ground-based and satellite-derived AODs. The synoptic wind pattern obtained from the National Centre for Medium Range Weather Forecasting (NCMRWF), India, and back-trajectory analysis using the NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT4) model indicate that the maximum contribution to aerosol extinction could be due to transport of pollutants from the industrialized and urban regions of India and large amounts of desert and mineral aerosols from the west Asian and Indian deserts. Equal contributions from the Bay of Bengal (BoB), in addition to those from the Indian landmass and the west Asian desert, lead to a further increase of AOD over the region of interest in the pre-monsoon season.
Bennett, James C.; Wang, Q. J.; Li, Ming; Robertson, David E.; Schepen, Andrew
2016-10-01
We present a new streamflow forecasting system called forecast guided stochastic scenarios (FoGSS). FoGSS makes use of ensemble seasonal precipitation forecasts from a coupled ocean-atmosphere general circulation model (CGCM). The CGCM forecasts are post-processed with the method of calibration, bridging and merging (CBaM) to produce ensemble precipitation forecasts over river catchments. CBaM corrects biases and removes noise from the CGCM forecasts, and produces highly reliable ensemble precipitation forecasts. The post-processed CGCM forecasts are used to force the Wapaba monthly rainfall-runoff model. Uncertainty in the hydrological modeling is accounted for with a three-stage error model. Stage 1 applies the log-sinh transformation to normalize residuals and homogenize their variance; Stage 2 applies a conditional bias-correction to correct biases and help remove negative forecast skill; Stage 3 applies an autoregressive model to improve forecast accuracy at short lead-times and propagate uncertainty through the forecast. FoGSS generates ensemble forecasts in the form of time series for the coming 12 months. In a case study of two catchments, FoGSS produces reliable forecasts at all lead-times. Forecast skill with respect to climatology is evident to lead-times of about 3 months. At longer lead-times, forecast skill approximates that of climatology forecasts; that is, forecasts become like stochastic scenarios. Because forecast skill is virtually never negative at long lead-times, forecasts of accumulated volumes can be skillful. Forecasts of accumulated 12 month streamflow volumes are significantly skillful in several instances, and ensembles of accumulated volumes are reliable. We conclude that FoGSS forecasts could be highly useful to water managers.
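Stage 1 of the hydrological error model applies the log-sinh transformation, z = (1/b)·ln(sinh(a + b·q)), to normalize residuals and homogenize their variance. A round-trip sketch of the transform and its inverse (the parameter values a and b below are invented placeholders, not the study's fitted values):

```python
import math

def log_sinh(q, a, b):
    """Log-sinh transform: z = (1/b) * ln(sinh(a + b*q)).
    Used to normalize residuals and homogenize their variance."""
    return math.log(math.sinh(a + b * q)) / b

def inv_log_sinh(z, a, b):
    """Inverse transform: q = (asinh(exp(b*z)) - a) / b."""
    return (math.asinh(math.exp(b * z)) - a) / b

# Hypothetical transform parameters; round-trip check on a flow value.
a, b = 0.01, 0.002
q = 350.0
z = log_sinh(q, a, b)
print(round(inv_log_sinh(z, a, b), 6))
```

Forecasts are produced in the transformed space (where Gaussian error assumptions are reasonable) and back-transformed with the inverse for reporting.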
Ellwood, Brooks B.
1982-07-01
Flow directions are estimated from the measurement of the magnetic fabric of 106 samples, collected at 18 sites in four welded tuff units in the central San Juan Mountains of southern Colorado. The estimates assume that the tuffs generally flowed directly away from the extrusive vents and that the lineations of magnetic grains within the tuffs represent the flow direction at individual sites. Errors in the estimation may arise from topographic variation, rheomorphism (post-emplacement mass flow) within the tuff, and other factors. Magnetic lineation is defined as the site mean anisotropy of magnetic susceptibility maximum azimuth. A test on the flow directions for individual units is based on the projection of lineation azimuths and their intersection within or near the known source caldera for the tuff. This test is positive for the four units examined. Paleomagnetic results for these tuffs are probably reliable indicators of the geomagnetic field direction in southwest Colorado, during the time (28.2-26.5 Ma) of emplacement.
Wishart, Justin Rory
2011-01-01
In this paper, a lower bound is determined in the minimax sense for change point estimators of the first derivative of a regression function in the fractional white noise model. Similar minimax results presented previously in the area focus on change points in the derivatives of a regression function in the white noise model or consider estimation of the regression function in the presence of correlated errors.
TC Chaves
2008-08-01
Full Text Available Abstract OBJECTIVE: To determine the intra- and interrater reliability of fleximetry and goniometry in children and to correlate the cervical spine range of motion (ROM) values obtained from these methods. METHODS: One hundred six healthy children participated in this study: 49 boys (8.91±2.09 years) and 57 girls (9.14±1.46 years), aged six to 14 years and symptom-free for cervical dysfunction. Two previously trained raters and two assistants assessed neck ROM. The raters made the measurements using fleximetry and goniometry (interrater reliability) and repeated them one week later (intrarater reliability). All measurements were made three times by each rater and the mean value was used for statistical analysis. Intraclass correlation coefficients (ICC 2.1 and 2.2) were used to investigate reliability and Pearson's correlation coefficient (p<0.05) was used to investigate the correlation between measurements obtained from the two techniques. RESULTS: Moderate to excellent intrarater reliability was observed for fleximetry and
Fei Feng
2014-05-01
Full Text Available Ambient temperature is a significant factor that influences the characteristics of lithium-ion batteries and can produce adverse effects on state of charge (SOC) estimation. In this paper, an integrated SOC algorithm that combines an advanced ampere-hour counting (Adv Ah) method and a multistate open-circuit voltage (multi OCV) method, denoted as "Adv Ah + multi OCV", is proposed. Ah counting is a simple and general method for estimating SOC. However, the available capacity and coulombic efficiency in this method are influenced by the operating state of the battery, such as temperature and current, thereby causing SOC estimation errors. To address this problem, an enhanced Ah counting method that adjusts the available capacity and coulombic efficiency according to temperature is proposed for the SOC calculation. Moreover, battery SOCs at different temperatures can be mutually converted in accordance with the capacity loss. To compensate for the accumulating errors in Ah counting caused by the low precision of current sensors and the lack of an accurate initial SOC, the OCV method is used for calibration and as a complement. Given the variation of available capacity at different temperatures, rated/non-rated OCV-SOC relations are established to estimate the initial SOCs in accordance with the Ah counting SOCs. Two dynamic tests, namely constant- and alternated-temperature tests, are employed to verify the combined method at different temperatures. The results indicate that our method can provide effective and accurate SOC estimation at different ambient temperatures.
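As a baseline for the paper's enhanced method, plain ampere-hour counting can be sketched as follows. The sign convention (positive current = discharge), the fixed capacity and the constant coulombic efficiency are simplifying assumptions that the Adv Ah method replaces with temperature-dependent values.

```python
def soc_ah_counting(soc0, currents_a, dt_s, capacity_ah, coulombic_eff=1.0):
    # Integrate current over time and subtract the transferred charge
    # (in Ah) from the initial SOC; positive current means discharge.
    soc = soc0
    for i_a in currents_a:
        soc -= coulombic_eff * i_a * dt_s / 3600.0 / capacity_ah
    return soc

# 1C discharge of a 2 Ah cell for 30 minutes, starting from 100% SOC
soc = soc_ah_counting(soc0=1.0, currents_a=[2.0] * 1800, dt_s=1.0, capacity_ah=2.0)
```

Making `capacity_ah` and `coulombic_eff` functions of measured temperature, as the paper proposes, is what turns this simple integrator into a temperature-aware estimator.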
Bhargava, Kapilesh, E-mail: kapilesh_66@yahoo.co.u [Architecture and Civil Engineering Division, Bhabha Atomic Research Center, Trombay, Mumbai 400 085 (India); Mori, Yasuhiro [Graduate School of Environmental Studies, Nagoya University, Nagoya 464-8603 (Japan); Ghosh, A.K. [Reactor Safety Division, Bhabha Atomic Research Center, Trombay, Mumbai 400 085 (India)
2011-05-15
Research highlights: Predictive models for corrosion-induced damage in RC structures. Formulations for time-dependent flexural and shear strengths of corroded RC beams. Methodology for mean and c.o.v. of time-dependent strengths of corroded RC beams. Simple estimation of mean and c.o.v. for flexural strength with loss of bond. - Abstract: The structural deterioration of reinforced concrete (RC) structures due to reinforcement corrosion is a major worldwide problem. Damage to RC structures due to reinforcement corrosion manifests in the form of expansion, cracking and eventual spalling of the cover concrete, resulting in serviceability and durability degradation of such structures. In addition to loss of cover, an RC structure may suffer structural damage through loss of reinforcement cross-sectional area and loss of bond between corroded reinforcement and the surrounding cracked concrete, sometimes to the extent that structural failure becomes inevitable. This paper forms the first part of a study addressing time-dependent reliability analyses of RC beams affected by reinforcement corrosion. Predictive models are first presented for the quantitative assessment of time-dependent damage in RC beams, namely loss of mass and cross-sectional area of the reinforcing bar, loss of concrete section owing to peeling of the cover concrete, and loss of bond between corroded reinforcement and the surrounding cracked concrete. These models are then used to develop analytical formulations for evaluating the time-dependent flexural and shear strengths of corroded RC beams, based on standard composite-mechanics expressions for RC sections. Further, by considering variability in the identified basic variables that could affect the time-dependent strengths of corrosion-affected RC beams, the estimation of statistical descriptions of the time-dependent strengths is presented for a typical simply supported RC beam. The statistical descriptions
Software Reliability: Estimation and Prediction
1992-12-31
Failure counts by fault type at six successive observation points:

COMPUTE   0   2   6   9  15  23
DATAVAL   0   3   6  11  16  23
INIT      0   1   3   5   9  15
INTERE    0   1   3   4   6   8
INTERI    0   4   1   2   2   3
LOGIC     0   4  10  18  30  50
TOTAL     0  15  29  50  78 122

Fault types "compute", "dataval", "init", "intere", "interi" and "logic" are distinguished in the ERBS project acceptance phase, for example. The number of failures of each type
Reliable Function Approximation and Estimation
2016-08-16
Geometric mean inequality for products of three matrices. A. Israel, F. Krahmer, and R. Ward. Linear Algebra and its Applications 488, 2016, 1-12. (O3... standard compressed sensing theory is valid only for a restrictive set of dictionaries, limiting the scope of applications. In this award, the PI developed... low-order interactions. The weighted sparsity model allows for more freedom than linear regression but provides sufficient structure to extend
Jasbir Arora
2016-06-01
Full Text Available The resistance of teeth to most environmental insults makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0-3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. The absolute mean error of age was calculated for both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect on age estimated by Gustafson's method. No significant gender differences were noted in the absolute mean error of age for either method, which suggests that there is no need to separate data on the basis of gender.
2013-06-01
addition of an active illumination designator to perform ranging to a target. In addition to tactical sensors, a study produced by Forecast International... obtained by first executing the multi-surface ranging algorithm on an image ensemble consisting of an average of 30 individual 3-D images across a... FLASH LADAR systems have recently garnered a significant amount of interest for defense and civilian applications. Due to the employment of active flood
Whitaker, Simon; Shirley, Gordon
2010-01-01
This paper examines how far it is valid to generate a profile of an individual's cognitive abilities using the WISC-IV or WAIS-III for individuals in the low ability range. Data are presented which demonstrate that the WISC-IV and WAIS-III assessments produce different cognitive profiles when given to the same 16-year-olds who receive special education. It is suggested that at the low IQ level, subtest and index scores may lack sufficient stability for the WISC-IV or WAIS-III to produce reli...
Lvov, A. V.; Metelev, S. L.
2016-11-01
We propose simulation models for estimating the interference immunity of radio reception using the spatial processing of signals in the airborne and ground-based communication channels of the meter and decimeter wavelength ranges. The ultimate achievable interference immunity under various radio-wave propagation conditions is studied.
Oborin, V.; Bannikov, M.; Naimark, O.; Froustey, C.
2011-03-01
The role of the collective behavior of defect ensembles in prestrained samples of an Al-Cu alloy was studied under fatigue testing conditions (preset load level) that corresponded to the basic fatigue life of the given material (about 2 × 10^5 cycles). The surface relief of deformed samples was examined with a NewView interferometer-profilometer so as to reveal the scaling-invariant laws of defect-related structure evolution.
Bae, Kyung Oh; Kim, Dae Woong; Shin, Hyung Seop [Andong National Univ., Andong (Korea, Republic of); Park, Lee Ju; Kim, Hyung Won [Agency for Defense Development, Daejeon (Korea, Republic of)
2016-06-15
Studies on the deformation behavior of materials subjected to impact loads have been carried out in various fields of engineering and industry. The deformation and fracture of members of such machines and structures are known to occur in the intermediate strain-rate region. Therefore, for structural design, it is necessary to consider the dynamic deformation behavior in this intermediate strain-rate range. However, there have been few reports with useful data on deformation and fracture behavior at intermediate strain rates. Because the intermediate strain-rate region lies between the quasi-static and high strain-rate regions, it is difficult to reach intermediate strain rates with conventional test equipment. To address this problem, in this study, the measurement reliability of a purpose-built drop-bar impact tensile test apparatus was established, and the dynamic behavior of carbon steels in the intermediate strain-rate range was evaluated using the apparatus.
Hartzell, Allyson L; Shea, Herbert R
2010-01-01
This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.
Nishiyama, Takanori; Nakamura, Takuji; Tsutsumi, Masaki; Tanaka, Yoshi; Nishimura, Koji; Sato, Kaoru; Tomikawa, Yoshihiro; Kohma, Masashi
2016-07-01
Polar Mesosphere Winter Echo (PMWE) is backscatter from 55 to 85 km in the mesosphere and has been observed by MST and IS radars in the polar regions during non-summer periods. Since the density of free electrons, which act as scatterers, is low in the dark mesosphere during winter, it is suggested that PMWE requires strong ionization of the neutral atmosphere associated with Energetic Particle Precipitation (EPP) during Solar Proton Events [Kirkwood et al., 2002] or during geomagnetically disturbed periods [Nishiyama et al., 2015]. However, studies on the relationship between PMWE occurrence and background electron density remain limited [Lübken et al., 2006], partly because the PMWE occurrence rate is known to be quite low (2.9%) [Zeller et al., 2006]. The PANSY (Program of the Antarctic Syowa MST/IS) radar, the largest MST radar in Antarctica, has observed many PMWE events since it started mesosphere observations in June 2012. We established a method for using the PANSY radar as a riometer, which makes it possible to estimate Cosmic Noise Absorption (CNA) as a proxy for relative variations in background electron density. In addition, electron density profiles from 60 to 150 km altitude are calculated by the Ionospheric Model for the Auroral Zone (IMAZ) [McKinnell and Friedrich, 2007] and the CNA estimated by the PANSY radar. In this presentation, we focus on strong PMWE during two large geomagnetic storm events, the St. Patrick's Day Storm and the Summer Solstice 2015 Event, in order to compare observed PMWE characteristics with modeled background electron density. On March 19 and 22, during the recovery phase of the St. Patrick's Day Storm, sudden PMWE intensification was detected near 60 km by the PANSY radar. At the same time, strong CNA of 0.8 dB and 1.0 dB was measured, respectively. However, the calculated electron density profiles did not show high electron density at the altitudes where the PMWE intensification was observed. On June 22, the
Marín-Moreno, Héctor; Minshull, Timothy A.; Westbrook, Graham K.; Sinha, Bablu
2015-05-01
Methane hydrate close to the hydrate stability limit in seafloor sediment could represent an important source of methane to the oceans and atmosphere as the oceans warm. We investigate the extent to which patterns of past and future ocean-temperature fluctuations influence hydrate stability in a region offshore West Svalbard where active gas venting has been observed. We model the transient behavior of the gas hydrate stability zone at 400-500 m water depth (mwd) in response to past temperature changes inferred from historical measurements and proxy data, and we model future changes predicted by seven climate models and two climate-forcing scenarios (Representative Concentration Pathways RCPs 2.6 and 8.5). We show that over the past 2000 years, a combination of annual and decadal temperature fluctuations could have triggered multiple hydrate-sourced methane emissions from seabed shallower than 400 mwd during episodes when the multidecadal average temperature was similar to that over the last century (˜2.6°C). These temperature fluctuations can explain current methane emissions at 400 mwd, but decades to centuries of ocean warming are required to generate emissions in water deeper than 420 m. In the venting area, future methane emissions are relatively insensitive to the choice of climate model and RCP scenario until 2050, but are more sensitive to the RCP scenario after 2050. By 2100 CE, we estimate an ocean uptake of 97-1050 TgC from marine Arctic hydrate-sourced methane emissions, which is 0.06-0.67% of the ocean uptake from anthropogenic CO2 emissions for the period 1750-2011.
Bendell, A
1986-01-01
Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo
Ipsen, Andreas; Ebbels, Timothy M D
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time, tasks that may be best addressed through engineering efforts.
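For orientation, the simplest correction for detector saturation is the textbook non-paralyzable dead-time model, n = m / (1 - m·τ), where m is the measured count rate and τ the dead time. The article's probability-based procedure is considerably more detailed; this is only an illustrative baseline, and the rate and dead-time values below are assumptions.

```python
def deadtime_correct(measured_rate_cps, tau_s):
    # Non-paralyzable dead-time correction: true rate n = m / (1 - m*tau).
    # Valid only while m*tau < 1 (detector not fully saturated).
    loss_fraction = measured_rate_cps * tau_s
    if loss_fraction >= 1.0:
        raise ValueError("detector fully saturated; correction undefined")
    return measured_rate_cps / (1.0 - loss_fraction)

# 900 kcps measured with an assumed 100 ns dead time
n_true = deadtime_correct(9.0e5, 1.0e-7)
```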
R. Uemura
2012-01-01
Full Text Available A single isotope ratio (δD or δ^{18}O) of water is widely used as an air-temperature proxy in Antarctic ice cores. These isotope ratios, however, do not depend solely on air temperature but also on the extent of distillation of heavy isotopes out of atmospheric water vapor from an oceanic moisture source to a precipitation site. The temperature changes at the oceanic moisture source (ΔT_{source}) and at the precipitation site (ΔT_{site}) can be retrieved by using deuterium-excess (d) data. A new d record from Dome Fuji, Antarctica is produced spanning the past 360 000 yr and compared with records from the Vostok and EPICA Dome C ice cores. To retrieve ΔT_{source} and ΔT_{site} information, different linear regression equations have been proposed using theoretical isotope distillation models. A major source of uncertainty lies in the regression coefficient β_{site}, which is related to the sensitivity of d to ΔT_{site}. We show that different ranges of temperature and selections of isotopic model outputs may increase the value of β_{site} by a factor of two. To explore the impact of this coefficient on the reconstructed temperatures, we apply for the first time the exact same methodology to the isotope records from the three Antarctic ice cores. We show that uncertainties in the β_{site} coefficient strongly affect (i) the glacial-interglacial magnitude of ΔT_{source}; (ii) the imprint of obliquity in ΔT_{source} and in the site-source temperature gradient. By contrast, we highlight the robustness of the ΔT_{site} reconstruction using water isotope records.
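The deuterium excess used throughout is the standard Dansgaard definition d = δD - 8·δ^{18}O, with all quantities in permil. A one-line sketch; the numerical values are merely illustrative, not Dome Fuji data.

```python
def deuterium_excess(delta_d, delta_18o):
    # Dansgaard's deuterium excess: d = dD - 8 * d18O (all in permil)
    return delta_d - 8.0 * delta_18o

# Illustrative isotope values only (not measured ice-core data)
d = deuterium_excess(delta_d=-438.0, delta_18o=-55.0)
```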
Circuit design for reliability
Cao, Yu; Wirth, Gilson
2015-01-01
This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units. The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.
The rating reliability calculator
Solomon David J
2004-04-01
Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server to calculate the reliability and other statistics describing the ratings. Results When the program is run, it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
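The Spearman-Brown prophecy formula mentioned in the Results can be applied directly: if a single rating has reliability r, the average of k ratings has predicted reliability k·r / (1 + (k-1)·r). A minimal sketch with an illustrative input value:

```python
def spearman_brown(r_single, k):
    # Predicted reliability of the mean of k parallel ratings,
    # given the reliability r_single of one rating.
    return k * r_single / (1.0 + (k - 1.0) * r_single)

# A single-rating reliability of 0.5 averaged over 3 judges
r_avg = spearman_brown(0.5, 3)
```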
Photovoltaic system reliability
Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States); Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)] [and others
1997-10-01
This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.
何吉祥; 李晓波; 张昊
2012-01-01
In order to improve power-supply reliability and ensure uninterrupted power supply for shooting-range equipment, an automatic switching device for the auxiliary power supply was developed. First, its composition, structure and working principles are introduced; the general design approach and hardware design methods of the device are then described in detail, mainly covering the design of the power plug-in, master-controller plug-in, human-computer interaction plug-in, current- and voltage-acquisition plug-ins, relay plug-in and human-machine interface plug-in. The software design method of the automatic switching device is also presented; rapid, reliable switching between the main and auxiliary supplies and a concise human-machine interface are realized. Finally, the reliability of the device is discussed.
Fagbeja, Mofoluso A; Hill, Jennifer L; Chatterton, Tim J; Longhurst, James W S
2015-02-01
An assessment was conducted of the reliability of Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite sensor measurements for interpolating tropospheric concentrations of carbon monoxide in the low-latitude climate of the Niger Delta region in Nigeria. Monthly SCIAMACHY carbon monoxide (CO) column measurements from January 2003 to December 2005 were interpolated using the ordinary kriging technique. The spatio-temporal variations observed in the reliability reflect proximity to the Atlantic Ocean, seasonal variations in the intensity of rainfall and relative humidity, the presence of dust particles from the Sahara desert, industrialization in Southwest Nigeria, and biomass burning during the dry season in Northern Nigeria. Spatial reliabilities of 74 and 42 % are observed for the inland and coastal areas, respectively. Temporally, average reliabilities of 61 and 55 % occur during the dry and wet seasons, respectively. Reliability in the inland and coastal areas was 72 and 38 % during the wet season, and 75 and 46 % during the dry season, respectively. Based on these results, the WFM-DOAS SCIAMACHY CO data product used for this study is relevant to the assessment of CO concentrations in low-latitude developing countries that cannot afford monitoring infrastructure due to its high cost. Although the SCIAMACHY sensor is no longer available, it provided cost-effective, reliable and accessible data that could support air quality assessment in developing countries.
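Ordinary kriging, the interpolation technique used here, amounts to solving a small linear system built from a semivariogram. The sketch below uses four made-up sample points and an assumed Gaussian variogram, not SCIAMACHY data; it only illustrates the mechanics, including the exact-interpolation property at a sample location.

```python
import numpy as np

def ordinary_krige(xy, z, xy0, gamma):
    # Ordinary kriging prediction at xy0 from samples (xy, z),
    # given a semivariogram function gamma(h).
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = gamma(h)           # pairwise semivariances
    A[n, :n] = A[:n, n] = 1.0      # unbiasedness constraint row/column
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    b[n] = 1.0
    weights = np.linalg.solve(A, b)[:n]
    return weights @ z

# Illustrative sample locations and values (unit square corners)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
gamma = lambda h: 1.0 - np.exp(-(h / 0.8) ** 2)  # assumed Gaussian variogram
z_at_sample = ordinary_krige(xy, z, np.array([0.0, 0.0]), gamma)
```

With a nugget-free variogram, kriging honors the data exactly, so predicting at the first sample location returns its value.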
Moody, John A.
2016-03-21
Extreme rainfall in September 2013 caused destructive floods in part of the Front Range in Boulder County, Colorado. Erosion from these floods cut roads and isolated mountain communities for several weeks, and large volumes of eroded sediment were deposited downstream, causing further damage to property and infrastructure. Estimates of peak discharge for these floods and the associated rainfall characteristics will aid land and emergency managers in the future. Several methods (an ensemble) were used to estimate peak discharge at 21 measurement sites, and the ensemble average and standard deviation provided a final estimate of peak discharge and its uncertainty. Because of the substantial erosion and deposition of sediment, an additional estimate of peak discharge was made based on the flow resistance caused by sediment-transport effects. Although the synoptic-scale rainfall was extreme for these mountains (recurrence interval greater than 1,000 years; about 450 millimeters in 7 days), the resulting peak discharges were not. Ensemble-average peak discharges per unit drainage area (unit peak discharge, Qu) for the floods were 1-2 orders of magnitude less than those for the maximum worldwide floods with similar drainage areas and had a wide range of values (0.21-16.2 cubic meters per second per square kilometer [m^3 s^-1 km^-2]). One possible explanation for these differences is that the band of high-accumulation, high-intensity rainfall was narrow (about 50 kilometers wide) and oriented nearly perpendicular to the predominant drainage pattern of the mountains, so entire drainage areas were not subjected to the same range of extreme rainfall. A linear relation (coefficient of determination R^2 = 0.69) between Qu and the rainfall intensity ITc (computed for a time interval equal to the time of concentration for the drainage area upstream from each site) had the form Qu = 0.26(ITc - 8.6), where the coefficient 0.26 can be considered to be an
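The reported relation Qu = 0.26(ITc - 8.6) can be applied directly. The sketch below assumes ITc is in millimeters per hour (the abstract does not restate the units), and the input value is purely illustrative.

```python
def unit_peak_discharge(i_tc):
    # Empirical relation from the study: Qu = 0.26 * (ITc - 8.6),
    # with Qu in m^3 s^-1 km^-2; ITc units assumed to be mm/h here.
    return 0.26 * (i_tc - 8.6)

qu = unit_peak_discharge(20.0)  # hypothetical rainfall intensity
```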
Multidisciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
惠寒青; 张玲莉; 喻大力; 王荣; 余竹生
2015-01-01
Objective To evaluate the test-retest reliability and validity of the 3-dimensional Digital Goniometer for Cervical (3DDGC) in measuring cervical range of motion. Methods 39 healthy participants had their cervical range of motion measured twice within 1 hour with the 3DDGC by one observer, and once with a cervical range of motion (CROM) device. The intraclass correlation coefficient (ICC) for test-retest and the Pearson correlation coefficient between devices were calculated. Measurement errors were evaluated with the standard error of measurement (SEM). Results The ICC of the 3DDGC was 0.89 for cervical rotation to the left and 0.90-0.98 for the other directions, with SEM of 2.07-3.85°. The Pearson correlation coefficient between devices was 0.73-0.92, with SEM of 1.66-3.17°. Conclusion The 3DDGC is valid and shows good test-retest reliability in measuring cervical range of motion; the device is simple and intuitive to operate, though further clinical research is needed.
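As a rough illustration of the reliability statistics used in this entry, the sketch below computes a Pearson correlation between two measurement sessions and a standard error of measurement as SD × sqrt(1 − r). This is a simplified stand-in (a proper ICC requires a variance-components model), and the angle data are invented.

```python
import statistics as st
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equally sized samples."""
    mx, my = st.mean(x), st.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Invented test-retest angles (degrees) for one movement direction:
session1 = [40.0, 45.0, 50.0, 55.0, 60.0]
session2 = [42.0, 44.0, 51.0, 56.0, 58.0]
r = pearson(session1, session2)
sem = st.stdev(session1 + session2) * sqrt(1.0 - r)  # SD * sqrt(1 - r)
print(round(r, 2), round(sem, 2))
```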
Estimation of micro-motion parameters based on range vernier
朱得糠; 刘永祥; 霍凯; 黎湘
2011-01-01
Precise estimation of micro-motion parameters benefits the classification and recognition of micro-motion targets. Based on the echo characteristics of a point target in a coherent pulse Doppler radar system, this paper proposes a new method for estimating micro-motion parameters using the range vernier technique. First, a radar echo model of a target with micro-motion is established, specifically the echo model of a precessing target. Taking a given received pulse as the reference, the range of the target at subsequent pulse reception times is measured accurately with the range vernier technique; the relationship between range and time reflects the target's motion. Finally, the Sin FM Basis Decomposition method is used to estimate the micro-motion parameters, including amplitude, angular frequency and initial phase. During parameter estimation, the scope of the peak search is determined by empirical knowledge and radar measurement information. The performance analysis derives the constraints among the radar velocity-measurement error, the phase-measurement error, the pulse repetition frequency (PRF) and the carrier frequency that ensure range vernier measurement operates correctly. Simulation results verify that, under current radar systems and measurement accuracy, range vernier measurement is applicable and the micro-motion parameters are estimated with very high accuracy.
Cong, X.; Eineder, M.; Fritz, T.
2013-12-01
The accuracy and availability of deformation measurements using InSAR techniques are limited by decorrelation effects, atmospheric disturbances and the SAR side-looking geometry (layover and shadowing). In this talk, we present our recent research and achievements on advanced InSAR techniques for retrieving reliable deformation signals from active volcanoes using high resolution TerraSAR-X (TSX) images. Another highlight of this talk is the evaluation of an experimental TanDEM-X (TDX) RawDEM with a resolution of approximately 6 m for compensating the topographic phase. A volcanic test site which is currently highly active, El Hierro, has been selected to demonstrate the developed techniques: 1) PSI processing in volcanic areas using high resolution TSX images; 2) mitigation of atmospheric delay distortions; 3) fusion of multi-geometrical PSI clouds. In order to measure the deformation from 2011 to 2013 at El Hierro [1], two stacks of stripmap TSX Mission data have been acquired, one in ascending orbit and one in descending. Each stack has more than 25 scenes. More than 1.5 million PSs have been detected (SCR > 3.0 dB). The stratified atmospheric delay for each acquisition has been integrated for the PSI reference network and, afterwards, interpolated and compensated for all PSs. A linear deformation model has been assumed for PSI processing. For the descending orbit stack, a relative deformation from -21.7 to 131.8 mm/y from Sep. 2011 to Jan. 2013, with respect to a reference point located on the northeast coast, has been measured. On the one hand, the spatial variation of the deformation is in good agreement with the seismicity distribution [1]. On the other hand, the deformation magnitude agrees with in-situ GPS measurements [2]. In ascending orbit, the linear deformation rate varies from -22.8 to 90.9 mm/y. This different range of values is due to a scene acquired in Feb. 2010, which has been included in order to obtain the pre-seismic deformation
Lazzaroni, Massimo
2012-01-01
This book gives a practical guide for designers and users in the Information and Communication Technology context. In the first Section, definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3; the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, puts in evidence the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be
师义民; 魏玲; 肖华勇
2001-01-01
When applying FPA (failure probability analysis) to estimating the reliability performance of a cold standby series system, two shortcomings arise: (1) the parameters of the life distribution of all elements must be known as constants; (2) FPA is not satisfactory for the small-sample case. We overcome these two shortcomings with EBE (empirical Bayes estimation) and MLE (maximum likelihood estimation). Before EBE can be used, the Bayes estimates of the reliability performance of the cold standby series system must first be obtained; point estimates of the failure rate, the reliability function and the mean life of the system are given. We then obtain the estimated reliability performance by EBE and MLE, and finally compare the MLE results with the EBE results by Monte Carlo simulation. The results show that the accuracy of EBE is better than that of MLE. The method proposed in this paper can be used to analyze the reliability of cold standby series systems in mechanical and electrical appliances.
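The kind of Monte Carlo comparison this entry describes can be sketched for the simplest case of an exponential failure rate; the gamma prior, sample sizes and random seed below are illustrative assumptions, not the paper's setup, and the Bayes estimator is a plain gamma-posterior mean rather than the empirical Bayes procedure itself.

```python
import random

random.seed(0)
TRUE_LAMBDA = 0.5               # true exponential failure rate
N, TRIALS = 5, 2000             # small samples, many replications
PRIOR_SHAPE, PRIOR_RATE = 2.0, 4.0  # assumed gamma prior with mean 0.5

mse_mle = mse_bayes = 0.0
for _ in range(TRIALS):
    sample = [random.expovariate(TRUE_LAMBDA) for _ in range(N)]
    total = sum(sample)
    mle = N / total                                  # maximum likelihood
    bayes = (PRIOR_SHAPE + N) / (PRIOR_RATE + total) # gamma posterior mean
    mse_mle += (mle - TRUE_LAMBDA) ** 2
    mse_bayes += (bayes - TRUE_LAMBDA) ** 2

# For small samples, the shrinkage estimator typically has lower MSE:
print(mse_bayes < mse_mle)
```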
Examining the reliability of ADAS-Cog change scores.
Grochowalski, Joseph H; Liu, Ying; Siedlecki, Karen L
2016-09-01
The purpose of this study was to estimate and examine ways to improve the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability. Two methods to increase change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores from the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability, even with reweighting and adding additional subtests. The ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.
叶宝娟; 温忠麟
2012-01-01
Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
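For reference, the CFA-based composite reliability discussed here is usually computed from standardized loadings and error variances; the sketch below shows the standard point-estimate formula (the Delta-method standard error itself takes more than a few lines). The loadings and error variances are illustrative, not from the paper.

```python
def composite_reliability(loadings, error_variances):
    """Composite reliability (McDonald's omega) for a unidimensional
    factor model: (sum of loadings)^2 /
    ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_variances))

# Illustrative standardized loadings and uniquenesses (1 - loading^2):
omega = composite_reliability([0.7, 0.8, 0.6], [0.51, 0.36, 0.64])
print(round(omega, 3))  # 0.745
```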
Goetz, Jason; Marcer, Marco; Bodin, Xavier; Brenning, Alexander
2017-04-01
Snow depth mapping in open areas using close range aerial imagery is just one of the many cases where developments in structure-from-motion and multi-view-stereo (SfM-MVS) 3D reconstruction techniques have been applied in the geosciences - and with good reason. Our ability to increase the spatial resolution and frequency of observations may allow us to improve our understanding of how snow depth distribution varies through space and time. However, to ensure accurate snow depth observations from close range sensing we must adequately characterize the uncertainty related to our measurement techniques. In this study, we explore the spatial uncertainties of snow elevation models for estimation of snow depth in complex alpine terrain from close range aerial imagery. We accomplish this by conducting repeat autonomous aerial surveys over a snow-covered active rock glacier located in the French Alps. The imagery obtained from each flight of an unmanned aerial vehicle (UAV) is used to create an individual digital elevation model (DEM) of the snow surface. As a result, we obtain multiple DEMs of the snow surface for the same site. These DEMs are obtained by processing the imagery with the photogrammetry software Agisoft Photoscan. The elevation models are also georeferenced within Photoscan using the geotagged imagery from an onboard GNSS in combination with ground targets placed around the rock glacier, which have been surveyed with highly accurate RTK-GNSS equipment. The random error associated with multi-temporal DEMs of the snow surface is estimated from the repeat aerial survey data. The multiple flights are designed to follow the same flight path and altitude above the ground to simulate the optimal conditions of a repeat survey of the site, and thus estimate the maximum precision associated with our snow-elevation measurement technique. The bias of the DEMs is assessed with RTK-GNSS survey observations of the snow surface elevation of the area on and surrounding
Worrell, Frank C.; Mello, Zena R.
2007-01-01
In this study, the authors examined the reliability, structural validity, and concurrent validity of Zimbardo Time Perspective Inventory (ZTPI) scores in a group of 815 academically talented adolescents. Reliability estimates of the purported factors' scores were in the low to moderate range. Exploratory factor analysis supported a five-factor…
Reliability and Its Quantitative Measures
Alexandru ISAIC-MANIU
2010-01-01
Full Text Available This article opens up software reliability issues through wide-ranging statistical indicators designed from information collected during operation or testing (samples). Reliability issues are also developed for the main reliability laws (exponential, normal, Weibull), which, once validated for a particular system, allow the calculation of reliability indicators with a higher degree of accuracy and trustworthiness.
Hutchinson, T Paul; Anderson, Robert W G; Searson, Daniel J
2012-01-01
Tests are routinely conducted in which instrumented headforms are projected at the fronts of cars to assess pedestrian safety. Better information would be obtained by accounting for performance over the range of expected impact conditions in the field. Moreover, methods will be required to integrate the assessment of secondary safety performance with primary safety systems that reduce impact speeds. Thus, we discuss how to estimate performance over a range of impact conditions from performance in one test, and how this information can be combined with information on the probability of different impact speeds to provide a balanced assessment of pedestrian safety. Theoretical consideration is given to two distinct aspects of impact safety performance: the test impact severity (measured by the head injury criterion, HIC) at a speed at which a structure does not bottom out, and the speed at which bottoming out occurs. Further consideration is given to an injury risk function, the distribution of impact speeds likely in the field, and the effect of primary safety systems on impact speeds. These are used to calculate curves that estimate injuriousness for combinations of test HIC, bottoming-out speed, and alternative distributions of impact speeds. The injuriousness of a structure that may be struck by the head of a pedestrian depends not only on the result of the impact test but also on the bottoming-out speed and the distribution of impact speeds. Example calculations indicate that the relationship between the test HIC and injuriousness extends over a larger range than is presently used by the European New Car Assessment Programme (Euro NCAP), that bottoming out at speeds only slightly higher than the test speed can significantly increase the injuriousness of an impact location, and that effective primary safety systems that reduce impact speeds significantly modify the relationship between the test HIC and injuriousness. Present testing regimes do not take fully into
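The combination of an injury risk curve with an impact-speed distribution described above can be sketched as an expected-risk sum. Everything below is an illustrative assumption: the logistic risk curve is not the Euro NCAP function, the power-law HIC-speed scaling is a placeholder, and the speed probabilities are invented.

```python
from math import exp

def hic_at_speed(v, hic_test, v_test=40.0, exponent=2.5):
    """Assumed power-law scaling of HIC with impact speed (illustrative)."""
    return hic_test * (v / v_test) ** exponent

def injury_risk(hic):
    """Illustrative logistic risk curve, not an official risk function."""
    return 1.0 / (1.0 + exp(-(hic - 1000.0) / 300.0))

speeds = [20.0, 30.0, 40.0, 50.0]   # km/h
probs = [0.4, 0.3, 0.2, 0.1]        # assumed impact-speed distribution
expected = sum(p * injury_risk(hic_at_speed(v, hic_test=1000.0))
               for v, p in zip(speeds, probs))
print(round(expected, 3))
```

A primary safety system that shifts probability mass toward lower speeds would lower this expected value, which is the integration the abstract argues for.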
Sun, Haitao
2016-05-16
We propose a new methodology for the first-principles description of the electronic properties relevant for charge transport in organic molecular crystals. This methodology, which is based on the combination of a non-empirical, optimally tuned range-separated hybrid functional with the polarizable continuum model, is applied to a series of eight representative molecular semiconductor crystals. We show that it provides ionization energies, electron affinities, and transport gaps in very good agreement with experimental values as well as with the results of many-body perturbation theory within the GW approximation at a fraction of the computational costs. Hence, this approach represents an easily applicable and computationally efficient tool to estimate the gas-to-crystal-phase shifts of the frontier-orbital quasiparticle energies in organic electronic materials.
Gitterman, Y.; Kim, S. G.; Hofstetter, R.
2016-04-01
Three underground nuclear explosions, conducted by North Korea in 2006, 2009 and 2013, are analyzed. The last two tests were recorded by the Israel Seismic Network. Pronounced coherent minima (spectral nulls) at 1.2-1.3 Hz were revealed in the spectra of teleseismic P-waves. For a ground-truth explosion with a shallow source depth, this phenomenon can be interpreted in terms of the interference between the down-going P-wave and the pP phase reflected from the Earth's surface. This effect was also observed at ISN stations for a Pakistan nuclear explosion at a different frequency, 1.7 Hz, and for the PNE Rubin-2 in West Siberia at 1 Hz, indicating a source effect and not a site effect. Similar spectral minima having essentially the same frequency as at ISN were observed in teleseismic P-waves for all three North Korean explosions recorded at networks and arrays in Kazakhstan (KURK), Norway (NNSN), Australia (ASAR, WRA) and Canada (YKA), covering a broad azimuthal range. Data of the 2009 and 2013 tests at the WRA and KURK arrays showed harmonic spectral modulation with three multiple minima frequencies, evidencing the clear interference effect. These observations support the above-mentioned interpretation. Based on the dependency of the null frequency on the near-surface acoustic velocity and the source depth, the depth of the North Korean tests was estimated at about 2.0-2.1 km. It was shown that the observed null frequencies and the obtained source depth estimates correspond to P-pP interference phenomena in both cases of a vertical shaft or a horizontal drift in a mountain. This unusual depth estimation needs additional validation based on more stations and verification by other methods.
2005-01-01
Full Text Available Empirical relationships to estimate the vertical attenuation coefficient of photosynthetically available radiation (KPAR) using Secchi disk, vertical black disk, and horizontal sighting ranges were developed for San Quintín Bay, Baja California. Radiometric PAR profiles were used to calculate KPAR. Vertical (ZD) and horizontal (HS) sighting ranges were measured with white (Secchi depth or ZSD; HSW) and black (ZBD; HSB) targets. The empirical power models KPAR = 1.48 ZSD^-1.16, KPAR = 0.87 ZBD^-1.52, KPAR = 0.54 HSW^-0.65 and KPAR = 0.53 HSB^-0.92 were developed for the corresponding relationships. The parameters of these models are not significantly different from those of models developed for Punta Banda Estuary, another Baja California lagoon, with the exception of the one for the KPAR-HSW relationship. Also, parameters of the KPAR-ZSD model for San Quintín Bay and Punta Banda Estuary are not significantly different from those developed for coastal waters near Santa Barbara, California. A set of general models is proposed that may apply to coastal water bodies of northwestern Baja California and southern California (KPAR = 1.45 ZSD^-1.10, KPAR = 0.92 ZBD^-1.45, and KPAR = 0.70 HSB^-1.10). While this approach may be universal, more data are needed to explore the variability of the parameters between different water bodies.
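The reported power models can be applied directly; a minimal sketch for two of them, with coefficients from the abstract and depths in meters with KPAR in m^-1 assumed:

```python
def k_par_from_secchi(z_sd):
    """KPAR from Secchi depth ZSD via the San Quintin Bay model
    KPAR = 1.48 * ZSD^-1.16."""
    return 1.48 * z_sd ** -1.16

def k_par_from_black_disk(z_bd):
    """KPAR from vertical black disk range ZBD via KPAR = 0.87 * ZBD^-1.52."""
    return 0.87 * z_bd ** -1.52

# A 2 m Secchi depth (illustrative value):
print(round(k_par_from_secchi(2.0), 3))  # about 0.66
```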
Storm surge and tidal range energy
Lewis, Matthew; Angeloudis, Athanasios; Robins, Peter; Evans, Paul; Neill, Simon
2017-04-01
The need to reduce carbon-based energy sources whilst increasing renewable energy forms has led to concerns of intermittency within a national electricity supply strategy. The regular rise and fall of the tide makes prediction almost entirely deterministic compared to other stochastic renewable energy forms; therefore, tidal range energy is often stated as a predictable and firm renewable energy source. Storm surge is the term used for the non-astronomical forcing of tidal elevation, and is synonymous with coastal flooding because positive storm surges can elevate water levels above the height of coastal flood defences. We hypothesize that storm surges will affect the reliability of the tidal range energy resource, with negative surge events reducing the tidal range and, conversely, positive surge events increasing the available resource. Moreover, tide-surge interaction, which makes positive storm surges more likely to occur on a flooding tide, will reduce the annual tidal range energy resource estimate. Water-level data (2000-2012) at nine UK tide gauges, where the mean tidal amplitude is above 2.5 m and thus suitable for tidal-range energy development (e.g. Bristol Channel), were used to predict tidal range power with a 0D modelling approach. Storm surge affected the annual resource estimate by between -5% and +3%, due to inter-annual variability. Instantaneous power outputs were significantly affected (normalised root mean squared error: 3%-8%; scatter index: 15%-41%), with spatial variability and variability due to operational strategy. We therefore find that storm surge affects the theoretical reliability of tidal range power, such that a prediction system may be required for any future electricity generation scenario that includes large amounts of tidal-range energy; however, annual resource estimation from astronomical tides alone appears sufficient. Future work should investigate water-level uncertainties on the reliability and
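A 0D tidal range estimate of the kind mentioned above is commonly based on the potential energy of the impounded water, E = 0.5 ρ g A H². This is a hedged sketch of that standard lagoon formula, not necessarily the authors' exact model, and the lagoon area and range are illustrative.

```python
RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def energy_per_tide_joules(area_m2, tidal_range_m):
    """Potential energy of a basin of area A drained over a head H:
    E = 0.5 * rho * g * A * H^2 (standard 0D lagoon estimate)."""
    return 0.5 * RHO * G * area_m2 * tidal_range_m ** 2

# Illustrative 10 km^2 lagoon with a 7 m tidal range:
e = energy_per_tide_joules(10e6, 7.0)
print(round(e / 3.6e9, 1))  # energy per tide in MWh, about 684
```

A negative storm surge that trims the range from 7 m to 6.5 m cuts this quadratic estimate by roughly 14%, which is why surge matters for reliability.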
Chang Hee Jung
Full Text Available BACKGROUND: Compared to the gold standard glycation index of HbA1c, glycated albumin (GA) has potential for assessing insulin secretory dysfunction and glycemic fluctuation as well as predicting diabetic vascular complications. However, the reference ranges of GA and a conversion equation need to be clearly defined. We designed this study to determine the reference ranges in patients with normal glucose tolerance (NGT) based on conventional measures of glycemic status and to devise a conversion equation between HbA1c and GA in a Korean population. METHODOLOGY/PRINCIPAL FINDINGS: In this multicenter, retrospective, cross-sectional study, we recruited antidiabetic drug-naïve patients with available glycemic variables including HbA1c, GA, and fasting plasma glucose, regardless of glucose status. For the reference interval of serum GA, the 5th to 95th percentile values of GA in subjects with NGT were adopted. The conversion equation between HbA1c and GA was devised using an estimating regression model with the unknown break-points method. The reference range for GA was 9.0-14.0% in 2043 subjects. The 95th percentile corresponding values for FPG and HbA1c were approximately 5.49 mmol/l and 5.6%, respectively. The significant glycemic turning points were 5.868% HbA1c and 12.2% GA. The proposed conversion equations below and above the turning point were GA (%) = 6.960 + 0.8963 × HbA1c (%) and GA (%) = -9.609 + 3.720 × HbA1c (%), respectively. CONCLUSIONS/SIGNIFICANCE: These results should be helpful in future studies on the clinical implications of high GA relative to HbA1c and the clinical implementation of diabetes management.
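The piecewise conversion equation, with its break point at the reported glycemic turning point, can be written down directly (coefficients from the abstract; the function name is illustrative):

```python
def ga_from_hba1c(hba1c):
    """Estimated glycated albumin (%) from HbA1c (%), using the reported
    piecewise equations with the break point at HbA1c = 5.868%."""
    if hba1c <= 5.868:
        return 6.960 + 0.8963 * hba1c
    return -9.609 + 3.720 * hba1c

print(round(ga_from_hba1c(5.0), 2), round(ga_from_hba1c(7.0), 2))  # 11.44 16.43
```

Note that the two branches meet at about GA = 12.2% at the break point, matching the reported GA turning point.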
Adriano Andrejew Ferreira
2009-01-01
Full Text Available In this paper, two methods for assessing the degree of melanization of pupal exuviae from the butterfly Heliconius erato phyllis, Fabricius 1775 (Lepidoptera, Nymphalidae, Heliconiini) are compared. In the first, qualitative method, the exuviae were classified by scoring the degree of melanization, whereas in the second, quantitative method, the exuviae were classified by optical density followed by analysis with appropriate software. The heritability (h2) of the degree of melanization was estimated by regression and analysis of variance. The estimates of h2 were similar with both methods, indicating that the qualitative method could be particularly suitable for field work. The low estimates obtained for heritability may have resulted from the small sample size (n = 7-18 broods, including the parents) or from the allocation-priority hypothesis, in which pupal color would be a lower-priority trait compared to morphological traits and adequate larval development.
杨阳; 邹佳航; 秦大同; 袁瑷辉
2016-01-01
The cutting unit is the most important component of the shearer, but it is also the part most prone to failure. To address the shearer's low reliability and poor adaptability, a new high-reliability electromechanical-hydraulic short-range cutting transmission system is designed. The system provides buffering and damping under sudden load changes, and the drum speed can be controlled as the coal seam varies. Mathematical models of the cutting motor, the pump-controlled motor system and the accumulator are established. A simulation model of the MG300 shearer cutting transmission system is built based on AMESim software, and the speed-regulation performance, the response to sudden load changes and the efficiency are analyzed by simulation. The results show that, under typical working conditions, the system not only regulates drum speed well but also provides buffering and damping while maintaining high transmission efficiency. Consequently, the system can improve the shearer's reliability and adaptability. The simulation results illustrate the effectiveness and feasibility of the scheme, laying a foundation for the design and optimization of shearer cutting transmission systems.
2017-01-17
This report was cleared for public release. Presented is testing for reliability prediction of devices exhibiting multiple failure mechanisms, along with an integrated accelerating and measuring ...
Sandmo, Trond (ed.)
2012-07-01
The Norwegian emission inventory is a joint undertaking between the Climate and Pollution Agency and Statistics Norway. Statistics Norway is responsible for the collection and development of activity data, and emission figures are derived from models operated by Statistics Norway. The Climate and Pollution Agency is responsible for the emission factors, for providing data from specific industries and sources, and for considering the quality, and assuring necessary updating, of emission models such as the road traffic model and the calculation of methane emissions from landfills. Emission data are used for a range of national applications and for international reporting. The Climate and Pollution Agency is responsible for the Norwegian reporting to the United Nations Framework Convention on Climate Change (UNFCCC) and to the United Nations Economic Commission for Europe (UN-ECE). This report documents the methodologies used in the Norwegian emission inventory of greenhouse gases (GHG), acidifying pollutants, heavy metals (HM) and persistent organic pollutants (POPs). The documentation will also serve as a part of the National Inventory Report submitted by Norway to the UNFCCC, and as documentation of the emissions reported to UNECE for the pollutants restricted by CLRTAP (Convention on Long-Range Transboundary Air Pollution). LULUCF (land use, land-use change and forestry) is not considered in this report; see the National Inventory Report (Climate and Pollution Agency 2012) for documentation on this topic. This report replaces the previous documentation of the emission model (Sandmo 2011), and is the latest annually updated version of a report edited by Britta Hoem in 2005. The most important changes since last year's documentation are: minor NOx emissions from production of rock wool, which previously had not been estimated, have been included; some factors for estimation of N2O from agriculture have been altered
1978-09-15
Comparison of the 1978 projections of the Reliability Councils with those made the previous year indicates three major changes in electric utility planning: (1) a reduction in total capacity additions for the 10-year planning period, (2) a significant decrease in nuclear additions, and (3) a shift from oil and gas to coal as a source of primary energy. Nuclear capacity continues to far overshadow fossil-fuel capacity in the unit-size range 1000 MW and up, with the reverse true for unit sizes less than 1000 MW. Although the total 10-year new-unit capacity drops from 326,624 MW (1977 to 1986) to 308,017 MW (1978 to 1987), new capacity planned that would use coal as a primary energy source increases from 136,763 MW to 146,206 MW. Nuclear capacity, in terms of total new units projected for the two 10-year periods, decreases from 130,532 MW to 116,177 MW, and capacity with oil as the primary source drops from 32,837 MW to 21,072 MW. For 1977 to 1986, no capacity was planned with oil as a primary source and coal as an alternate fuel, but for 1978 to 1987, 1,220 MW of such capacity is projected. Therefore, the total new capacity projected that could use coal as a fuel (primary or alternate) is 147,426 MW. In addition, one 700-MW unit is planned for which the primary fuel will be a blend of coal and refuse. There is a decrease in the capacity planned that would use natural gas as a primary source, from 2,089 MW in 1977 to 1986 to 502 MW in 1978 to 1987.
Jensen, Jørgen Juncher
2015-01-01
For non-linear systems the estimation of fatigue damage under stochastic loadings can be rather time-consuming. Usually Monte Carlo simulation (MCS) is applied, but the coefficient-of-variation (COV) can be large if only a small set of simulations can be done due to otherwise excessive CPU time. ...... the COV. For a specific example dealing with stresses in a tendon in a tension leg platform the COV is thereby reduced by a factor of three....
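The relation between the number of Monte Carlo simulations and the COV of the estimated mean damage can be sketched generically; the lognormal damage samples and seed below are synthetic stand-ins, not output of any structural model.

```python
import random
from math import sqrt

random.seed(1)

def mc_damage_cov(n):
    """COV of the Monte Carlo estimate of mean damage from n synthetic
    samples: sample SD / (sample mean * sqrt(n))."""
    samples = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
    mean = sum(samples) / n
    sd = sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    return sd / (mean * sqrt(n))

cov_small, cov_large = mc_damage_cov(100), mc_damage_cov(10000)
print(cov_small > cov_large)  # more simulations, smaller COV of the estimate
```

This 1/sqrt(n) decay is exactly why a small simulation budget leaves a large COV, motivating the variance-reduction idea the abstract describes.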
Babak Mehmandoust
2014-03-01
Full Text Available The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances over a wide boiling range (20.3-722 K).
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2014-03-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances over a wide boiling range (20.3-722 K).
Bartsch, R.R.
1995-09-01
Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
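A Weibull-based shot-life model of the kind described here can be sketched as the product of component survival probabilities in series; the scale and shape parameters below are illustrative, not the ATLAS values.

```python
from math import exp

def weibull_reliability(t, eta, beta):
    """Weibull survival probability: R(t) = exp(-(t/eta)^beta), with
    beta > 1 giving an increasing failure rate with shot count."""
    return exp(-((t / eta) ** beta))

def series_system_reliability(t, components):
    """Series system: every component must survive, so the system
    reliability is the product of component reliabilities."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_reliability(t, eta, beta)
    return r

# Illustrative (scale in shots, shape) pairs for three component classes:
comps = [(20000.0, 1.5), (50000.0, 2.0), (100000.0, 1.2)]
r = series_system_reliability(1000.0, comps)
print(round(r, 4))
```

With wear-out shapes (beta > 1), per-shot reliability falls as shots accumulate, which is what makes a maintenance interval derivable from life tests.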
Antonio Piacentino
2016-12-01
Full Text Available Cogeneration and trigeneration plants are widely recognized as promising technologies for increasing energy efficiency in buildings. However, their overall potential is scarcely exploited, due to the difficulties in achieving economic viability and the risk of investment related to uncertainties in future energy loads and prices. Several stochastic optimization models have been proposed in the literature to account for uncertainties, but these instruments share a common reliance on user-defined probability functions for each stochastic parameter. Since such functions are hard to predict, this paper analyses the influence of erroneous estimation of the uncertain energy loads and prices on the optimal plant design and operation. With reference to a hotel building, a number of realistic scenarios is developed, exploring the most frequent errors occurring in the estimation of energy loads and prices. Then, profit-oriented optimizations are performed for the examined scenarios by means of a deterministic mixed integer linear programming algorithm. From a comparison of the achieved results, it emerges that: (i) the plant profitability is mainly influenced by the average "spark spread" (i.e., the ratio between electricity and fuel prices) and, secondarily, by the shape of the daily price profiles; (ii) the "optimal sizes" of the main components are scarcely influenced by the daily load profiles, while they are more strictly related to the average "power to heat" and "power to cooling" ratios of the building.
Luis O LUCIFORA; Santiago A BARBINI; Edgardo E DI GICOMO; Juan A WAESSLE; Daniel E FIGUEROA
2015-01-01
The distribution of the planktivorous basking shark Cetorhinus maximus is influenced by zooplankton abundance at small scales and temperature at medium scales in the North Atlantic. Here, we estimate the distribution of basking sharks on South Atlantic continental shelves, and the relative importance of chlorophyll concentration, as a proxy for zooplankton abundance, and temperature in determining habitat suitability for basking sharks at large scales. We used maximum entropy (MaxEnt) and maximum likelihood (MaxLike) species distribution modelling to test three hypotheses: the distribution of basking sharks is determined by (1) temperature, (2) chlorophyll concentration, or (3) both chlorophyll and temperature, while considering other factors, such as oxygen and salinity. Off South America, basking shark habitat included subtropical, temperate and cool-temperate waters between approximately 20°S and 55°S. Off Africa, basking shark habitat was limited to cool-temperate waters off Namibia and southern South Africa. MaxLike models had a better fit than MaxEnt models. The best model included minimum chlorophyll concentration, dissolved oxygen concentration, and sea surface temperature range, supporting hypothesis 3. However, of all variables included in the best model, minimum chlorophyll concentration had the highest influence on basking shark distribution. Unlike the North Atlantic distribution, the South Atlantic distribution of basking sharks includes subtropical and cool-temperate waters. This difference is explained by high minimum chlorophyll concentration off southern Brazil as compared to North Atlantic subtropical areas. Observations in other regions of the world support this conclusion. The highest habitat suitability for basking sharks is located close to nearshore areas that experience high anthropogenic impact [Current Zoology 61 (5): 811–826, 2015].
Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.
2016-12-01
The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to forecasts based on 1) a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) the full IHC ensemble, without filtering, paired with the full GEFS ensemble. In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1
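The particle-filter weighting step described above can be sketched as follows. This is a minimal illustration, not SHARP code: the Gaussian observation-error model, the member values, and the observation and error magnitudes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_weights(sim_flow, sim_swe, obs_flow, obs_swe,
                            sigma_flow, sigma_swe):
    """Weight each IHC ensemble member by the Gaussian likelihood of the
    observed streamflow and SWE (independent observation errors assumed)."""
    ll = (-0.5 * ((sim_flow - obs_flow) / sigma_flow) ** 2
          - 0.5 * ((sim_swe - obs_swe) / sigma_swe) ** 2)
    w = np.exp(ll - ll.max())  # subtract the max log-likelihood for stability
    return w / w.sum()

# 100 hypothetical IHC members: simulated flow (m^3/s) and SWE (mm)
sim_flow = rng.normal(50.0, 15.0, 100)
sim_swe = rng.normal(300.0, 80.0, 100)
w = particle_filter_weights(sim_flow, sim_swe, obs_flow=55.0, obs_swe=320.0,
                            sigma_flow=5.0, sigma_swe=30.0)
# Resample member indices in proportion to weight: the filtered IHC set
resampled = rng.choice(100, size=100, p=w)
```

Members whose simulated streamflow and SWE sit close to the observations receive most of the weight, so the resampled set concentrates on IHCs consistent with the observed basin state.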
SOFTWARE RELIABILITY OF PROFICIENT ENACTMENT
B.Anni Princy
2014-07-01
Full Text Available A software reliability model describes the random failure process resulting from two factors: emerging faults and initial state values. The predominant approach uses a logistic testing-effort function when fitting software reliability models to real-time datasets. The shortcomings of the logistic testing-effort function are effectively overcome by a Pareto distribution. The proposed framework provides a method for analyzing suitable candidate distributions and determining the best fit for a software reliability growth model, whose parameters are estimated in order to evaluate the reliability of a software system. The resulting process permits software reliability estimates that can be used both as a quality indicator and for planning and controlling resources and development times, enabling efficient computation and reliable measurement of a software system.
Dependent systems reliability estimation by structural reliability approach
Kostandyan, Erik; Sørensen, John Dalsgaard
2014-01-01
) and the component lifetimes follow some continuous and non-negative cumulative distribution functions. An illustrative example utilizing the proposed method is provided, where damage is modeled by a fracture mechanics approach with correlated components and a failure assessment diagram is applied for failure identification. Application of the proposed method can be found in many real world systems.
Rogier, Eric; Plucinski, Mateusz; Lucchi, Naomi; Mace, Kimberly; Chang, Michelle; Lemoine, Jean Frantz; Candrinho, Baltazar; Colborn, James; Dimbu, Rafael; Fortes, Filomeno; Udhayakumar, Venkatachalam; Barnwell, John
2017-01-01
Detection of histidine-rich protein 2 (HRP2) from the malaria parasite Plasmodium falciparum provides evidence for active or recent infection, and is utilized for both diagnostic and surveillance purposes, but current laboratory immunoassays for HRP2 are hindered by low sensitivities and high costs. Here we present a new HRP2 immunoassay based on antigen capture through a bead-based system capable of detecting HRP2 at sub-picogram levels. The assay is highly specific and cost-effective, allowing fast processing and screening of large numbers of samples. We utilized the assay to assess results of HRP2-based rapid diagnostic tests (RDTs) in different P. falciparum transmission settings, generating estimates for true performance in the field. Through this method of external validation, HRP2 RDTs were found to perform well in the high-endemic areas of Mozambique and Angola with 86.4% and 73.9% of persons with HRP2 in their blood testing positive by RDTs, respectively, and false-positive rates of 4.3% and 0.5%. However, in the low-endemic setting of Haiti, only 14.5% of persons found to be HRP2 positive by the bead assay were RDT positive. Additionally, 62.5% of Haitians showing a positive RDT test had no detectable HRP2 by the bead assay, likely indicating that these were false positive tests. In addition to RDT validation, HRP2 biomass was assessed for the populations in these different settings, and may provide an additional metric by which to estimate P. falciparum transmission intensity and measure the impact of interventions. PMID:28192523
Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components
Kostandyan, Erik; Sørensen, John Dalsgaard
2013-01-01
configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended for reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between ...
Gupta, Ajay; Guttikar, Swati; Patel, Yogesh; Shrivastav, Pranav S; Sanyal, Mallika
2015-04-01
A simple, precise, and rapid stable isotope dilution liquid chromatography-tandem mass spectrometry method has been developed and validated for the quantification of rilpivirine, a non-nucleoside reverse transcriptase inhibitor, in human plasma. Rilpivirine and its deuterated analogue, rilpivirine-d6, used as an internal standard (IS), were quantitatively extracted by liquid-liquid extraction with a methyl-tert-butyl ether and diethyl ether solvent mixture from 50 μL of plasma. Chromatography was achieved on a Gemini C18 (150 × 4.6 mm, 5 µm) analytical column with a run time of 2.2 min. The precursor → product ion transitions for rilpivirine (m/z 367.1 → 128.0) and IS (m/z 373.2 → 134.2) were monitored on a triple quadrupole mass spectrometer in the positive ionization mode. The linearity of the method was established in the concentration range of 0.5-200 ng/mL. The mean extraction recovery for rilpivirine (94.9%) and IS (99.9%) from spiked plasma samples was consistent and reproducible. The IS-normalized matrix factors for rilpivirine ranged from 0.98 to 1.02 across three quality controls. Bench top, freeze-thaw, wet extract, and long-term stability of rilpivirine was examined in spiked plasma samples. The application of the method was demonstrated by a bioequivalence study with a 25 mg rilpivirine tablet formulation in 40 healthy subjects. The assay reproducibility was shown by reanalysis of 200 study samples, and the % change in the concentration of repeat values from the original values was within ±15%.
Puechmaille, Sebastien J
2016-05-01
Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure implementing a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method is performing relatively well under various population models with even sampling between subpopulations, the robustness of the method to uneven sample size between subpopulations and/or hierarchical levels of population structure has not yet been tested despite being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved.
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/ measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
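Under simple random sampling, the detection sample sizes quoted above follow from the binomial model P(detect) = 1 − (1 − f)^n, where f is the resistance frequency. A short sketch of that closed form (not the paper's simulation model):

```python
import math

def detection_probability(n, freq):
    """P(a simple random sample of size n contains >= 1 resistant
    individual) when resistance frequency is freq (binomial model,
    effectively infinite population)."""
    return 1.0 - (1.0 - freq) ** n

def required_sample_size(freq, confidence=0.95):
    """Smallest n achieving the requested detection probability."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - freq))

print(detection_probability(300, 0.01))  # ~0.95, matching the 1% case above
print(required_sample_size(0.10))        # 29
```

The closed form reproduces the abstract's figure of 300 samples for a 1% frequency; the sizes quoted for patchy distributions are larger precisely because this independence assumption breaks down there.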
Saiz, P; Rocha, R; Andreeva, J
2007-01-01
We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO specific, therefore it would be very easy to deploy them for any other VO. Our main goal is basically to improve the reliability of the GRID. The main idea is to discover as soon as possible the different problems that have happened, and inform the responsible parties. Since we study the jobs and transfers issued by real users, we see the same problems that users see. As a matter of fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, in case of a job failure, resubmitting the job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...
万东; 张忠会
2015-01-01
Based on reliability calculations, this paper presents an estimation method that integrates a customer damage function. The outage loss of each customer type at peak load is obtained through customer surveys. Taking into account factors such as the energy consumption ratio and load rating of the various user classes, a composite customer outage-loss function is established. An estimation method is then proposed based on the effects that elements at different positions on the line have on customer reliability. This method can provide a reference for engineers making economic decisions about network planning and construction. Finally, the IEEE RBTS test system is used to illustrate the feasibility and effectiveness of the method.
Improving Power Converter Reliability
Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon
2014-01-01
The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental ... is measured in a wind power converter at a low fundamental frequency. The test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation.
Chronic wasting disease (CWD) is the naturally occurring transmissible spongiform encephalopathy (TSE) of captive and free ranging cervid ruminants. Rocky Mountain elk (Cervus elaphus nelsoni) are a free-ranging species of large cervid with a habitat that includes large US national parks. Minimally ...
Schumacher, Johannes; Christiansen, Jesper Riis
2015-01-01
Forests contribute to improved water quality, affect drinking water resources, and therefore influence water supply on a regional level. The forest canopy structure affects the retention of precipitation (Pr) in the canopy and hence the amount of water transferred to the forest floor, termed canopy throughfall (TF). Sites (single-species broadleaf/coniferous and mixed forests) in Denmark were used to develop empirical models to estimate TF on a monthly, seasonal, and annual basis. This new approach offers the opportunity to greatly improve predictions of TF on catchment-wide scales. Overall, results show that TF can be estimated by Pr ...
Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...
Metelev, S. A.; Lvov, A. V.
2016-09-01
We propose a method for estimating potential interference immunity of radio reception in the multipath radio-communication channels. Using this method for the modified Watterson model of the decameter radio channel, we study the achievable interference immunity of devices with spatial signal processing.
Electronic parts reliability data 1997
Denson, William; Jaworski, Paul; Mahar, David
1996-01-01
This document contains reliability data on both commercial and military electronic components for use in reliability analyses. It contains failure rate data on integrated circuits, discrete semiconductors (diodes, transistors, optoelectronic devices), resistors, capacitors, and inductors/transformers, all of which were obtained from the field usage of electronic components. At 2,000 pages, the format of this document is the same as RIAC's popular NPRD document which contains reliability data on nonelectronic component and electronic assembly types. Data includes part descriptions, quality level, application environments, point estimates of failure rate, data sources, number of failures, total operating hours, miles, or cycles, and detailed part characteristics.
Ballesteros, Miguel; Weder, Ricardo
2016-04-01
The study of obstacle scattering for the Klein-Gordon equation in the presence of long-range magnetic potentials is addressed. Previous results of the authors are extended to the long-range case and the results the authors previously proved for high-momenta long-range scattering for the Schrödinger equation are brought to the relativistic scenario. It is shown that there are important differences between relativistic and non-relativistic scattering concerning the long range. In particular, it is proven that the electric potential can be recovered without assuming knowledge of the long-range part of the magnetic potential, which has to be supposed in the non-relativistic case. The electric potential and the magnetic field are recovered from the high-momenta limit of the scattering operator, as well as fluxes modulo 2π around the handles of the obstacle. Moreover, it is proven that for every unit vector v̂ ∈ S², the sum A∞(v̂) + A∞(−v̂) can be reconstructed, where A∞ is the long-range part of the magnetic potential. A simple formula for the high-momenta limit of the scattering operator is given in terms of magnetic fluxes over handles of the obstacle and long-range magnetic fluxes at infinity, which are introduced in this paper. The appearance of these long-range magnetic fluxes is a new effect in scattering theory. Research partially supported by the project PAPIIT-DGAPA UNAM IN102215.
Estimation of the Reliability of Plastic Slabs
Pirzada, G. B.
and the concrete, but in this thesis these material properties are modelled by stochastic variables. The probabilistic analysis performed in this thesis is mainly based on work by Thoft-Christensen & Baker (9) and Thoft-Christensen & Murotsu (10). Since considerable information about these basic materials used in reinforced concrete slabs and the loading is available, it is highly probable that other aspects of slab failure, i.e. the punching shear failure and buckling failure, can be included in the probabilistic approach.
Reliability and construction control
Sherif S. AbdelSalam
2016-06-01
Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.
Vavrinevych O.P.; Omelchuk S.T.; Bardov V.G.
2013-01-01
The range of pesticides, including fungicides, authorized for use in Ukraine and the scope of their application from 1999 to 2012 were analyzed. Statistical research methods were used in the analysis, and the results were evaluated in terms of growth rate and increase. It was determined that herbicides accounted for the largest share in the structure of the range of pesticides authorized for use in Ukraine. The share of herbicides was 43.8 ± 0.95% on average...
A single-item measure of social identification: reliability, validity, and utility.
Postmes, Tom; Haslam, S Alexander; Jans, Lise
2013-12-01
This paper introduces a single-item social identification measure (SISI) that involves rating one's agreement with the statement 'I identify with my group (or category)' followed by a 7-point scale. Three studies provide evidence of the validity (convergent, divergent, and test-retest) of SISI with a broad range of social groups. Overall, the estimated reliability of SISI is good. To address the broader issue of single-item measure reliability, a meta-analysis of 16 widely used single-item measures is reported. The reliability of single-item scales ranges from low to reasonably high. Compared with this field, reliability of the SISI is high. In general, short measures struggle to achieve acceptable reliability because the constructs they assess are broad and heterogeneous. In the case of social identification, however, the construct appears to be sufficiently homogeneous to be adequately operationalized with a single item.
Paul F. Hessburg; Bradley G. Smith; R. Brion. Salter
1999-01-01
Using hierarchical clustering techniques, we grouped subwatersheds on the eastern slope of the Cascade Range in Washington State into ecological subregions by similarity of area in potential vegetation and climate attributes. We then built spatially continuous historical and current vegetation maps for 48 randomly selected subwatersheds from interpretations of 1938-49...
Estimation of Korean paddy field soil properties using optical reflectance
An optical sensing approach based on diffuse reflectance has shown potential for rapid and reliable on-site estimation of soil properties. Important sensing ranges and the resulting regression models useful for soil property estimation have been reported. In this study, a similar approach was applie...
Eugster, P.; Guerraoui, R.; Kouznetsov, P.
2001-01-01
This paper presents a new, non-binary measure of the reliability of broadcast algorithms, called Delta-Reliability. This measure quantifies the reliability of practical broadcast algorithms that, on the one hand, were devised with some form of reliability in mind, but, on the other hand, are not considered reliable according to the ``traditional'' notion of broadcast reliability [HT94]. Our specification of Delta-Reliability suggests a further step towards bridging the gap between theory and...
Reliability computation from reliability block diagrams
Chelson, P. O.; Eckstein, E. Y.
1975-01-01
Computer program computes system reliability for very general class of reliability block diagrams. Four factors are considered in calculating probability of system success: active block redundancy, standby block redundancy, partial redundancy, and presence of equivalent blocks in the diagram.
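The first three factors the abstract lists reduce to a few composition rules for reliability block diagrams. A minimal sketch (the example diagram and its block reliabilities are hypothetical, and standby redundancy is omitted for brevity):

```python
from math import comb

def series(*rs):
    """All blocks must work: reliabilities multiply."""
    r = 1.0
    for x in rs:
        r *= x
    return r

def parallel(*rs):
    """Active redundancy: the system fails only if every block fails."""
    q = 1.0
    for x in rs:
        q *= 1.0 - x
    return 1.0 - q

def k_of_n(k, n, r):
    """Partial redundancy: at least k of n identical blocks must work."""
    return sum(comb(n, i) * r**i * (1.0 - r)**(n - i) for i in range(k, n + 1))

# Hypothetical diagram: two redundant pumps (0.9 each) in series with a
# controller (0.99) and a 2-of-3 voting sensor set (0.95 each)
system = series(parallel(0.9, 0.9), 0.99, k_of_n(2, 3, 0.95))
```

Nested calls mirror the diagram's nesting, so an arbitrary series-parallel diagram evaluates as one expression; general (non-series-parallel) diagrams need the more elaborate enumeration the program described above performs.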
Tausworthe, R. C.
1987-01-01
It is shown that a ranging receiver with a sufficient and reasonable number of correlators is competitive with the current sequential component ranging system by some 1.5 to 2.5 dB. The optimum transmitter code, the optimum receiver, and a near-maximum-likelihood range-estimation algorithm are presented.
Ana Luiza de Oliveira Timbó
2012-12-01
Full Text Available Flow cytometry allows the DNA content of a large number of plants to be estimated quickly. However, inadequate protocols can compromise the reliability of these estimates, leading to variations in the reported DNA content of the same species. The objective of this study was to propose an efficient protocol to estimate the DNA content of Brachiaria spp. genotypes with different ploidy levels using flow cytometry. We evaluated four genotypes (B. ruziziensis, diploid and artificially tetraploidized; a tetraploid B. brizantha; and a natural triploid hybrid), three buffer solutions (MgSO4, Galbraith and Tris-HCl) and three species as internal reference standards (Raphanus sativus, Solanum lycopersicum and Pisum sativum). The variables measured were: histogram score (1-5), coefficient of variation and estimated DNA content. The best combination for the analysis of Brachiaria spp. DNA content was the use of MgSO4 buffer with R. sativus as an internal reference standard. Genome sizes expressed in picograms of DNA are presented for all genotypes, and the importance of the histogram score for the reliability of DNA content analyses is discussed.
Reliability Analysis of DOOF for Weibull Distribution
陈文华; 崔杰; 樊小燕; 卢献彪; 相平
2003-01-01
A hierarchical Bayesian method for estimating the failure probability under DOOF, taking the quasi-Beta distribution as the prior distribution, is proposed in this paper. The weighted least squares estimation method was used to obtain formulas for computing the reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of accelerated life test data for the electrical connector.
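The least-squares step described above can be illustrated with the common median-rank regression on the linearized Weibull CDF. This is a generic sketch, not the paper's hierarchical Bayesian method with a quasi-Beta prior, and the failure times are invented:

```python
import math

def weibull_lsq(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) by a least-squares fit
    on the linearized CDF: ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta),
    with F approximated by Bernard's median ranks."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)  # median-rank approximation of the CDF
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta

# Invented failure times (e.g., cycles to failure of a connector)
beta, eta = weibull_lsq([42, 67, 81, 105, 130, 158, 190])
```

The fitted β and η then yield reliability characteristic values such as R(t) = exp(−(t/η)^β); the paper's Bayesian layer would additionally weight the fit by the prior on failure probability.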
Nakashima, Eiji
2015-07-01
Using the all solid cancer mortality data set of the Life Span Study (LSS) cohort from 1950 to 2003 (LSS Report 14) among atomic bomb survivors, excess relative risk (ERR) statistical analyses were performed using second-degree polynomial, threshold, and restricted cubic spline (RCS) dose-response models. For the RCS models with 3 to 7 knots at equally spaced percentiles with margins in the dose range greater than 50 mGy, the dose response was assumed to be linear below 70 to 90 mGy. Due to the skewed dose distribution of atomic bomb survivors, the current knot system for the RCS analysis results in a detailed depiction of the dose response below approximately 0.5 Gy. The 6-knot RCS models for the all solid cancer mortality dose response, over the whole dose range or the range below 2 Gy, were selected with the AIC model selection criterion and fit significantly better (p < 0.05) than the linear (L) model. The usual RCS includes the L global model but not the quadratic (Q) or linear-quadratic (LQ) global models. The authors extended the RCS to include L or LQ global models by putting L or LQ constraints on the cubic spline in the lower and upper tails, and the best RCS model selected with the AIC criterion was the usual RCS with L constraints in both the lower and upper tails. The selected RCS had a linear dose-response model in the lower dose range (i.e., < 0.2-0.3 Gy) and was compatible with the linear no-threshold (LNT) model in this dose range. The proposed method is also useful in describing the dose response of a specific cancer or non-cancer disease incidence/mortality.
Error in the estimation of intellectual ability in the low range using the WISC-IV and WAIS-III
Whitaker, Simon
2010-01-01
The error, both chance and systematic, in the measure of true intellectual ability in the low IQ range is quantified and combined to find an overall confidence interval. The chance error was due to: lack of stability, scorer error and lack of internal consistency. The systematic error was due to: the Flynn effect, a floor effect and that error apparent from the lack of agreement between the WISC-IV and WAIS-III. For low Full Scale IQs the WAIS-III can only be considered accurate to within 18 ...