WorldWideScience

Sample records for surface error distribution

  1. Strategy of restraining ripple error on surface for optical fabrication.

    Science.gov (United States)

    Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen

    2014-09-10

    The influence of ripple error on imaging quality can be effectively reduced by restraining the ripple height. This paper presents a method for suppressing the ripple height based on the process parameters and the surface error distribution. The generating mechanism of the ripple error is analyzed by polishing theory with a uniform removal character. The relation between the processing parameters (removal function, path pitch, and dwell time) and the ripple error is examined through simulations, and from these a strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 workpieces using the optimized strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ = 632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS, while the ripple error is restrained at the same time: the ripple height is less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.
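
    A minimal numeric sketch of the pitch-ripple relationship described above (not the paper's magnetorheological jet polishing model; the Gaussian removal footprint and all parameters are illustrative): superposing a fixed removal footprint along a raster path at pitch p leaves a periodic residual whose peak-to-valley height falls steeply as p shrinks relative to the footprint width.

    ```python
    import numpy as np

    # 1-D illustration: a Gaussian removal footprint repeated at pitch p
    # leaves a periodic residual whose peak-to-valley (PV) height shrinks
    # rapidly as p decreases relative to the footprint width sigma.
    sigma = 1.0                               # removal-function width (arbitrary units)
    x = np.linspace(0, 50 * sigma, 20001)

    def ripple_pv(pitch):
        centers = np.arange(0, x.max() + pitch, pitch)
        removal = sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers)
        mid = (x > 10 * sigma) & (x < 40 * sigma)   # ignore edge effects
        return removal[mid].max() - removal[mid].min()

    for p in [2.0, 1.5, 1.0, 0.5]:
        print(f"pitch = {p:.1f} sigma  ->  ripple PV = {ripple_pv(p):.2e}")
    ```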

  2. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    Science.gov (United States)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. Normalized Zernike polynomials are used to describe a smooth, continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector, fed by an axial-mode helical antenna, is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.
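
    A small sketch of how a handful of Zernike terms can describe a smooth reflector deformation; this is a sketch only, and the selected terms, normalization, and coefficient values are illustrative rather than the paper's:

    ```python
    import numpy as np

    # Describe a surface error as a sum of a few low-order Zernike terms
    # on the unit disk. Radial polynomials are hand-coded for clarity.
    def zernike(n_m, rho, theta):
        terms = {
            (2, 0): 2 * rho**2 - 1,                          # defocus
            (2, 2): rho**2 * np.cos(2 * theta),              # astigmatism
            (3, 1): (3 * rho**3 - 2 * rho) * np.cos(theta),  # coma
        }
        return terms[n_m]

    rho, theta = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 2 * np.pi, 200))
    coeffs = {(2, 0): 0.5e-3, (2, 2): 0.2e-3, (3, 1): 0.1e-3}  # metres, illustrative
    surface_error = sum(c * zernike(k, rho, theta) for k, c in coeffs.items())
    print("RMS surface error: %.2e m" % surface_error.std())
    ```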

  3. The distribution of wind power forecast errors from operational systems

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Bri-Mathias; Ela, Erik; Milligan, Michael

    2011-07-01

    Wind power forecasting is one important tool in the integration of large amounts of renewable generation into the electricity system. Wind power forecasts from operational systems are not perfect, and thus an understanding of the forecast error distributions can be important in system operations. In this work, we examine the errors from operational wind power forecasting systems, both for a single wind plant and for an entire interconnection. The resulting error distributions are compared with the normal distribution and with the distribution obtained from the persistence forecasting model at multiple timescales. A model distribution is fit to the operational system forecast errors, and the potential impact on system operations is highlighted through the generation of forecast confidence intervals. (orig.)
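
    A hedged illustration of the comparison step on synthetic data (a Student-t series stands in for operational forecast errors, since no real data accompanies this record): heavy tails show up in the excess kurtosis and in a fitted t distribution.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Stand-in for operational forecast errors: heavier-tailed than normal
    errors = stats.t.rvs(df=3, scale=0.05, size=5000, random_state=rng)

    print("excess kurtosis:", stats.kurtosis(errors))   # ~0 for a normal
    df, loc, scale = stats.t.fit(errors)
    print(f"fitted t: df={df:.1f}, loc={loc:.3f}, scale={scale:.3f}")
    # Kolmogorov-Smirnov distance to the best-fit normal, for comparison
    mu, sd = errors.mean(), errors.std()
    print("KS vs normal:", stats.kstest(errors, 'norm', args=(mu, sd)).statistic)
    ```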

  4. MATLAB implementation of satellite positioning error overbounding by generalized Pareto distribution

    Science.gov (United States)

    Ahmad, Khairol Amali; Ahmad, Shahril; Hashim, Fakroul Ridzuan

    2018-02-01

    In the satellite navigation community, error overbounding has been implemented as part of integrity monitoring. In this work, MATLAB programming is used to implement the overbounding of the satellite positioning error CDF. Using a reference trajectory, the horizontal position errors (HPE) are computed, and their non-parametric distribution function is given by the empirical cumulative distribution function (ECDF). According to the results, these errors have a heavy-tailed distribution. Since the ECDF of the HPE in an urban environment is not Gaussian distributed, the ECDF is overbounded with the CDF of the generalized Pareto distribution (GPD).
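
    A sketch of the overbounding workflow with SciPy in place of MATLAB, on synthetic heavy-tailed HPE values (the threshold choice and all numbers are illustrative): fit a GPD to exceedances over a threshold and compare its tail CDF with the ECDF; an overbound requires the model CDF to lie at or below the ECDF in the tail.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    hpe = np.abs(stats.t.rvs(df=4, scale=2.0, size=10000, random_state=rng))  # heavy-tailed stand-in

    u = np.quantile(hpe, 0.95)                 # tail threshold
    exceed = hpe[hpe > u] - u
    c, loc, scale = stats.genpareto.fit(exceed, floc=0.0)

    x = np.linspace(u, hpe.max(), 200)
    ecdf_tail = np.searchsorted(np.sort(hpe), x) / hpe.size
    gpd_tail = 0.95 + 0.05 * stats.genpareto.cdf(x - u, c, loc=0.0, scale=scale)
    # Positive gap means the GPD model sits below the ECDF there (overbound holds)
    print("max (ECDF - GPD model) gap in tail: %.4f" % np.max(ecdf_tail - gpd_tail))
    ```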

  5. Influence of random setup error on dose distribution

    International Nuclear Information System (INIS)

    Zhai Zhenyu

    2008-01-01

    Objective: To investigate the influence of random setup error on the dose distribution in radiotherapy and to determine the margin from ITV to PTV. Methods: A random sampling approach was used to simulate the field positions in the target coordinate system. The cumulative effect of random setup error was the sum of the dose distributions of all individual treatment fractions. A study of 100 cumulative effects yielded the shift of the 90% dose point position. Margins from ITV to PTV caused by random setup error were chosen at the 95% probability level. Spearman's correlation was used to analyze the influence of each factor. Results: The average shift of the 90% dose point position was 0.62, 1.84, 3.13, 4.78, 6.34 and 8.03 mm for random setup errors of 1, 2, 3, 4, 5 and 6 mm, respectively. Univariate analysis showed that the size of the margin was associated only with the size of the random setup error. Conclusions: The margin from ITV to PTV is 1.2 times the random setup error for head-and-neck cancer and 1.5 times for thoracic and abdominal cancer. Field size, energy and target depth, unlike random setup error, have no relation with the size of the margin. (authors)
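
    A 1-D Monte Carlo sketch of the cumulative-effect idea (an idealized dose edge and Gaussian set-up errors; the paper's values come from full 3-D plans): shift the profile by a fresh random error each fraction, average over fractions, and track how far the 90% dose point moves.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(-60, 60, 2401)                        # mm
    dose = 1.0 / (1.0 + np.exp((np.abs(x) - 30) / 3.0))   # idealized field edge at +/-30 mm

    def dose_point_shift(sigma_mm, n_fractions=30):
        shifts = rng.normal(0.0, sigma_mm, n_fractions)
        cumulative = np.mean([np.interp(x - s, x, dose) for s in shifts], axis=0)
        right = x > 0
        # x position (right side) where cumulative dose falls to 90% of the centre dose
        x90 = np.interp(0.9 * cumulative[x == 0][0],
                        cumulative[right][::-1], x[right][::-1])
        return abs(x90) - 30.0    # hypothetical numbers, not the paper's 3-D results

    for sigma in [1, 3, 5]:
        sims = [dose_point_shift(sigma) for _ in range(100)]
        print(f"sigma = {sigma} mm: mean 90% dose point shift = {np.mean(np.abs(sims)):.2f} mm")
    ```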

  6. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
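
    A minimal hand-rolled Metropolis sketch of the core idea (the paper uses the MCMC procedure of SAS; the data, priors, and step sizes here are illustrative): a linear growth curve fitted with Student-t rather than normal errors, so heavy-tailed observations do not distort the posterior.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.tile(np.arange(5), 50).astype(float)            # 50 subjects, 5 waves
    y = 2.0 + 1.5 * t + rng.standard_t(df=3, size=t.size)  # non-normal errors

    def log_post(theta, df=3):
        a, b, log_s = theta
        resid = (y - a - b * t) / np.exp(log_s)
        # Student-t log-likelihood (up to a constant) + weak normal priors
        loglik = np.sum(-0.5 * (df + 1) * np.log1p(resid**2 / df)) - y.size * log_s
        return loglik - 0.5 * (a**2 + b**2 + log_s**2) / 100.0

    theta, lp, samples = np.zeros(3), log_post(np.zeros(3)), []
    for i in range(20000):                                 # random-walk Metropolis
        prop = theta + rng.normal(0, 0.05, 3)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i >= 5000:                                      # discard burn-in
            samples.append(theta.copy())
    print("posterior means (a, b, log s):", np.mean(samples, axis=0).round(2))
    ```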

  7. Wind and load forecast error model for multiple geographically distributed forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Makarov, Yuri V.; Reyes-Spindola, Jorge F.; Samaan, Nader; Diao, Ruisheng; Hafen, Ryan P. [Pacific Northwest National Laboratory, Richland, WA (United States)

    2010-07-01

    The impact of wind and load forecast errors on power grid operations is frequently evaluated by conducting multi-variant studies, where these errors are simulated repeatedly as random processes based on their known statistical characteristics. To simulate these errors correctly, we need to reflect their distributions (which do not necessarily follow a known distribution law), standard deviations, and auto- and cross-correlations. For instance, load and wind forecast errors can be closely correlated in different zones of the system. This paper introduces a new methodology for generating multiple cross-correlated random processes to produce forecast error time-domain curves, based on a transition probability matrix computed from an empirical error distribution function. The matrix is used to generate new error time series with statistical features similar to those of the observed errors. We present the derivation of the method and some experimental results obtained by generating new error forecasts together with their statistics. (orig.)
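
    A simplified single-process sketch of the transition-matrix idea (the paper handles multiple cross-correlated processes; the "observed" series here is synthetic): bin the error series, estimate empirical transition probabilities between bins, then walk the resulting Markov chain to generate a new series with a similar distribution and autocorrelation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # Correlated stand-in for an observed forecast-error series
    obs = np.convolve(rng.normal(0, 1, 5000), np.ones(8) / 8, mode='same')

    n_bins = 20
    edges = np.quantile(obs, np.linspace(0, 1, n_bins + 1))
    states = np.digitize(obs, edges[1:-1])                # bin index 0..n_bins-1

    # Row-normalized empirical transition probability matrix
    P = np.zeros((n_bins, n_bins))
    for s0, s1 in zip(states[:-1], states[1:]):
        P[s0, s1] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0, P / np.maximum(row_sums, 1e-12), 1.0 / n_bins)

    # Generate a synthetic error series by walking the Markov chain
    centers = 0.5 * (edges[:-1] + edges[1:])
    s, synth = states[0], []
    for _ in range(5000):
        s = rng.choice(n_bins, p=P[s])
        synth.append(centers[s])
    synth = np.asarray(synth)
    print("std  obs/synth:", obs.std().round(3), synth.std().round(3))
    print("lag-1 autocorr obs/synth:",
          np.corrcoef(obs[:-1], obs[1:])[0, 1].round(3),
          np.corrcoef(synth[:-1], synth[1:])[0, 1].round(3))
    ```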

  8. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

    Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.

  9. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    In distributed video coding the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden on the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both of the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, both with no error protection and with simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  10. Image defects from surface and alignment errors in grazing incidence telescopes

    Science.gov (United States)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low-frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray-tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  11. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice....

  12. Errors in determination of irregularity factor for distributed parameters in a reactor core

    International Nuclear Information System (INIS)

    Vlasov, V.A.; Zajtsev, M.P.; Il'ina, L.I.; Postnikov, V.V.

    1988-01-01

    Two types of errors (measurement error and error of regulation of reactor core distributed parameters), often met during high-power-density reactor operation, are analyzed. Consideration is given to errors in determination of the irregularity factor for the radial power distribution for a hot channel, both under conditions of its minimization and under conditions where regulation of the relative power distribution is absent. The first regime is investigated by the method of statistical experiment using a program of neutron-physical calculation optimization, taking as an example a large channel-type water-cooled graphite-moderated reactor. It is concluded that it is necessary to take into account the complex interaction of the measurement error with the error of parameter profiling over the core, both under conditions of continuous manual or automatic parameter regulation (optimization) and under conditions without regulation, namely with an a priori equalized distribution. When evaluating the error of distributed parameter control

  13. EXPERIMENTAL VALIDATION OF CUMULATIVE SURFACE LOCATION ERROR FOR TURNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Adam K. Kiss

    2016-02-01

    The aim of this study is to create a mechanical model suitable for investigating surface quality in turning processes, based on the Cumulative Surface Location Error (CSLE), which describes the series of consecutive Surface Location Errors (SLE) in roughing operations. In the established model, the investigated CSLE depends on the current and the previously resulting SLE by means of the variation of the width of cut. The behaviour of the system can be described as an implicit discrete map. The stationary Surface Location Error and its bifurcations were analysed, and a flip-type bifurcation was observed for the CSLE. Experimental verification of the theoretical results was carried out.

  14. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation...... In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantic errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  15. Wind Power Forecasting Error Distributions: An International Comparison

    DEFF Research Database (Denmark)

    Hodge, Bri-Mathias; Lew, Debra; Milligan, Michael

    2012-01-01

    Wind power forecasting is essential for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that may occur is a critical factor for system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations....

  16. Quantifying spatial distribution of snow depth errors from LiDAR using Random Forests

    Science.gov (United States)

    Tinkham, W.; Smith, A. M.; Marshall, H.; Link, T. E.; Falkowski, M. J.; Winstral, A. H.

    2013-12-01

    There is increasing need to characterize the distribution of snow in complex terrain using remote sensing approaches, especially in isolated mountainous regions that are often water-limited, the principal source of terrestrial freshwater, and sensitive to climatic shifts and variations. We apply intensive topographic surveys, multi-temporal LiDAR, and Random Forest modeling to quantify snow volume and characterize associated errors across seven land cover types in a semi-arid mountainous catchment at 1 and 4 m spatial resolutions. The LiDAR-based estimates of both snow-off surface topology and snow depths were validated against ground-based measurements across the catchment. Comparison of LiDAR-derived snow depths to manual snow depth surveys revealed that LiDAR-based estimates were more accurate in areas of low-lying vegetation such as shrubs (RMSE = 0.14 m) than in areas of tree cover (RMSE = 0.20-0.35 m). The highest errors were found along the edge of conifer forests (RMSE = 0.35 m); however, a second conifer transect outside the catchment had much lower errors (RMSE = 0.21 m). This difference is attributed to the wind exposure of the first site, which led to highly variable snow depths over short spatial distances. The Random Forest modeled errors deviated from the field-measured errors with an RMSE of 0.09-0.34 m across the different cover types. Results show that snow drifts, which are important for maintaining spring and summer stream flows and for establishing and sustaining water-limited plant species, contained 30 ± 5-6% of the snow volume while only occupying 10% of the catchment area, similar to findings by prior physically-based modeling approaches. This study demonstrates the potential utility of combining multi-temporal LiDAR with Random Forest modeling to quantify the distribution of snow depth with a reasonable degree of accuracy. Future work could explore the utility of Terrestrial LiDAR Scanners to produce validation of snow-on surface
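
    A hedged sketch of the Random Forest step with scikit-learn on synthetic data; the predictor set and the error model below are illustrative stand-ins for the paper's terrain and vegetation covariates.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(5)
    n = 2000
    X = np.column_stack([
        rng.uniform(0, 40, n),    # slope (degrees)
        rng.uniform(0, 360, n),   # aspect (degrees)
        rng.uniform(0, 25, n),    # vegetation height (m)
        rng.uniform(0, 1, n),     # wind exposure index
    ])
    # Synthetic "truth": error grows with vegetation height and wind exposure
    y = 0.01 * X[:, 2] + 0.15 * X[:, 3] + rng.normal(0, 0.05, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
    print(f"RMSE of modeled snow-depth error: {rmse:.3f} m")
    ```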

  17. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Background: The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results: Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers also occurred for the other two types of geocoding errors but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion: Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  18. Crowdsourcing for error detection in cortical surface delineations.

    Science.gov (United States)

    Ganz, Melanie; Kondermann, Daniel; Andrulis, Jonas; Knudsen, Gitte Moos; Maier-Hein, Lena

    2017-01-01

    With the recent trend toward big data analysis, neuroimaging datasets have grown substantially in the past years. While larger datasets potentially offer important insights for medical research, one major bottleneck is the requirement for resources of medical experts needed to validate automatic processing results. To address this issue, the goal of this paper was to assess whether anonymous nonexperts from an online community can perform quality control of MR-based cortical surface delineations derived by an automatic algorithm. So-called knowledge workers from an online crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity of 82 % and a precision of 42 %. Merging multiple annotations per image significantly improves the sensitivity of the crowd (up to 95 %), but leads to a decrease in precision (as low as 22 %). Our experiments show that the detection of errors in automatic cortical surface delineations generated by anonymous untrained workers is feasible. Future work will focus on increasing the sensitivity of our method further, such that the error detection tasks can be handled exclusively by the crowd and expert resources can be focused on error correction.
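
    A small sketch of the merging trade-off reported above, on synthetic labels (the per-worker sensitivity and false-positive rates are illustrative): OR-merging several workers' annotations raises sensitivity but lowers precision.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_slices, n_workers = 100, 5
    truth = rng.random(n_slices) < 0.3            # slices that truly contain an error

    def worker_annotation(sens=0.82, fpr=0.25):   # one worker's noisy labels
        hits = (rng.random(n_slices) < sens) & truth
        false_alarms = (rng.random(n_slices) < fpr) & ~truth
        return hits | false_alarms

    workers = np.array([worker_annotation() for _ in range(n_workers)])
    for k in [1, 3, 5]:
        merged = workers[:k].any(axis=0)          # OR-merge k annotations
        tp = (merged & truth).sum()
        sens = tp / truth.sum()
        prec = tp / max(merged.sum(), 1)
        print(f"{k} worker(s): sensitivity={sens:.2f}, precision={prec:.2f}")
    ```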

  19. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  20. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    Science.gov (United States)

    Ma, H.-Y.; Klein, S. A.; Xie, S.; Zhang, C.; Tang, S.; Tang, Q.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.; Ahlgrimm, M.; Berg, L. K.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Liu, Y.; Merryfield, W.; Qian, Y.; Roehrig, R.; Wang, Y.-C.

    2018-03-01

    Many weather forecast and climate models simulate warm surface air temperature (T2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Clouds Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate excessive net shortwave and longwave fluxes at the surface, but show no consistent sign of mean bias in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes, as well as in the surface evaporative fraction (EF), contribute to the T2m bias. Radiation biases are largely affected by cloud simulations, while the EF bias is largely affected by soil moisture, modulated by seasonally accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of the radiation and EF contributions to the T2m bias. Our analysis indicates that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.

  1. Error Distributions on Large Entangled States with Non-Markovian Dynamics

    DEFF Research Database (Denmark)

    McCutcheon, Dara; Lindner, Netanel H.; Rudolph, Terry

    2014-01-01

    We investigate the distribution of errors on a computationally useful entangled state generated via the repeated emission from an emitter undergoing strongly non-Markovian evolution. For emitter-environment coupling of pure-dephasing form, we show that the probability that a particular pattern of errors occurs has a bound of Markovian form, and thus accuracy threshold theorems based on Markovian models should be just as effective. Beyond the pure-dephasing assumption, though complicated error structures can arise, they can still be qualitatively bounded by a Markovian error model....

  2. Airborne LIDAR boresight error calibration based on surface coincidence

    International Nuclear Information System (INIS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Li, Dong; Qi, Zengying; Qiu, Wen; Tan, Junxiang

    2014-01-01

    Light Detection and Ranging (LIDAR) is a system which can directly collect three-dimensional coordinates of ground points together with laser reflection strength information. With the wide application of LIDAR systems, users expect increasingly accurate results. Boresight error has an important effect on data accuracy, and eliminating this error is therefore considered very important. In recent years, many methods have been proposed to eliminate it. Generally, they can be categorized into tie-point methods and surface-matching methods. In this paper, we propose another method, called the trial-value method, based on surface coincidence, which is used in actual production by many companies. The method is simple and operable. Its efficacy was demonstrated by analyzing data from Zhangye city.

  3. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on a reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. Using the aberration weights for the various geometrical errors, computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of a subnanometer is achieved.

  4. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Fujii, Keisuke

    2014-01-01

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries

  5. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    Energy Technology Data Exchange (ETDEWEB)

    Tokunaga, Yuuki [NTT Secure Platform Laboratories, NTT Corporation, 3-9-11 Midori-cho, Musashino, Tokyo 180-8585, Japan and Japan Science and Technology Agency, CREST, 5 Sanban-cho, Chiyoda-ku, Tokyo 102-0075 (Japan); Fujii, Keisuke [Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan)

    2014-12-04

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries.

  6. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    Science.gov (United States)

    2016-03-01

    Master's thesis by Nicholas M. Chisler, March 2016: Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty.

  7. Robust D-optimal designs under correlated error, applicable invariantly for some lifetime distributions

    International Nuclear Information System (INIS)

    Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo

    2015-01-01

    In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures.

    Highlights:
    • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses.
    • The results of Das and Lin [7] are extended to the four lifetime (log-normal, exponential, gamma and Weibull) distributions.
    • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures.

  8. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

    The purpose of this study is to provide guidance for selecting an error distribution by analyzing the influence of the statistical distribution assumed for bioassay measurement errors on the intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases where the error distributions are normal and lognormal, and the estimated intakes under the two distributions were compared. According to the results, when measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has a negligible influence on the results, whereas for measurement results for the daily excretion rate, the results obtained under a lognormal assumption were 10% higher than those obtained under a normal assumption. In view of these facts, where the uncertainty component is governed by counting statistics, the distribution type has no influence on intake estimation, whereas where other components predominate, it is clearly desirable to estimate the intake assuming a lognormal distribution.

  9. ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    C. Li

    2012-07-01

    POS, integrating GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.

  10. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrating GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
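
    A Monte Carlo sketch of the error-propagation idea on synthetic geometry (the anchors, noise level, and plain least-squares intersection below are illustrative simplifications of the paper's adjustment with variable weights): perturb the image lines that should meet in a vanishing point, re-intersect them, and summarize the scatter as an error ellipse.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    vp_true = np.array([1500.0, 200.0])                  # pixels, illustrative
    anchors = rng.uniform(0, 1000, (20, 2))              # points the lines pass through

    def fit_vp(noise_deg):
        # Each line passes through its anchor, aimed at the vanishing point,
        # with Gaussian angular noise added
        ang = np.arctan2(vp_true[1] - anchors[:, 1], vp_true[0] - anchors[:, 0])
        ang += np.deg2rad(noise_deg) * rng.standard_normal(len(anchors))
        n = np.column_stack([-np.sin(ang), np.cos(ang)]) # line normals
        d = np.sum(n * anchors, axis=1)                  # line offsets: n . x = d
        return np.linalg.lstsq(n, d, rcond=None)[0]      # least-squares intersection

    vps = np.array([fit_vp(noise_deg=0.2) for _ in range(2000)])
    evals, _ = np.linalg.eigh(np.cov(vps.T))             # error ellipse axes from covariance
    print("error ellipse semi-axes (1-sigma, px):", np.sqrt(evals).round(1))
    ```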

  11. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    Science.gov (United States)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  12. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.

  13. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  14. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in turn, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code when errors are injected into the encoding device. The complexity of the encoding function, in turn, plays an important role in security-oriented codes. Encoding functions with less computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution. In particular, increasing computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking

  15. Correlated Errors in the Surface Code

    Science.gov (United States)

    Lopez, Daniel; Mucciolo, E. R.; Novais, E.

    2012-02-01

    A milestone step in the development of quantum information technology would be the ability to design and operate a reliable quantum memory. The greatest obstacle to creating such a device has been decoherence due to the unavoidable interaction between the quantum system and its environment. Quantum error correction is therefore an essential ingredient of any quantum computing information device. A great deal of attention has been given to surface codes, since they have very good scaling properties. In this seminar, we discuss the time evolution of a qubit encoded in the logical basis of a surface code. The system is interacting with a bosonic environment at zero temperature. Our results show how detrimental spatial and temporal correlations can be to the efficiency of the code.

  16. Uncertainties of predictions from parton distributions 1, experimental errors

    CERN Document Server

    Martin, A D; Stirling, William James; Thorne, R S; CERN. Geneva

    2003-01-01

    We determine the uncertainties on observables arising from the errors on the experimental data that are fitted in the global MRST2001 parton analysis. By diagonalizing the error matrix we produce sets of partons suitable for use within the framework of linear propagation of errors, which is the most convenient method for calculating the uncertainties. Despite the potential limitations of this approach we find that it can be made to work well in practice. This is confirmed by our alternative approach of using the more rigorous Lagrange multiplier method to determine the errors on physical quantities directly. As particular examples we determine the uncertainties on the predictions of the charged-current deep-inelastic structure functions, on the cross-sections for W production and for Higgs boson production via gluon--gluon fusion at the Tevatron and the LHC, on the ratio of W-minus to W-plus production at the LHC and on the moments of the non-singlet quark distributions. We discuss the corresponding uncertain...

  17. Influence of Daily Set-Up Errors on Dose Distribution During Pelvis Radiotherapy

    International Nuclear Information System (INIS)

    Kasabasic, M.; Ivkovic, A.; Faj, D.; Rajevac, V.; Sobat, H.; Jurkovic, S.

    2011-01-01

    External beam radiotherapy (EBRT) using the megavoltage beam of a linear accelerator is usually the treatment of choice for cancer patients. The goal of EBRT is to deliver the prescribed dose to the target volume, with as low a dose as possible to the surrounding healthy tissue. The large number of procedures and different professions involved in the radiotherapy process, uncertainty of equipment, and daily patient set-up errors can cause a difference between the planned and delivered dose. We investigated the part of this difference caused by daily patient set-up errors. Daily set-up errors for 35 patients were measured. These set-up errors were simulated on 5 patients, using the 3D treatment planning software XiO (CMS Inc., St. Louis, MO). The differences in dose distributions between the planned and shifted "geometry" were investigated. Additionally, the influence of the error on treatment plan selection was checked by analyzing the change in dose-volume histograms, the planning target volume conformity index (CI_PTV) and the homogeneity index (HI). Simulations showed that daily patient set-up errors can cause significant differences between the planned and actual dose distributions. Moreover, for some patients those errors could influence the choice of treatment plan, since CI_PTV fell below 97%. Surprisingly, HI was not as sensitive as CI_PTV to set-up errors. The results showed the need to minimize daily set-up errors through a quality assurance programme. (author)

  18. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.; Prudhomme, S.; Wildey, T.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.

  19. ERROR BOUNDS FOR SURFACE AREA ESTIMATORS BASED ON CROFTON’S FORMULA

    Directory of Open Access Journals (Sweden)

    Markus Kiderlen

    2011-05-01

    According to Crofton's formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections p_A(u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, p_A(u) is only measured in k directions and the mean is approximated by a finite weighted sum Ŝ(A) of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error Ŝ(A)/S(A) is bounded from below by the inradius of Z and from above by the circumradius of Z. Applying a strengthened isoperimetric inequality due to Bonnesen, we show that the rectangular quadrature rule does not give the best possible error bounds for d = 2. In addition, we derive the asymptotic behavior of the error (with increasing k) in the planar case. The paper concludes with applications to surface area estimation in design-based digital stereology, where we show that the weights due to Bonnesen's inequality are better than the usual weights based on the rectangular rule and almost optimal in the sense that the relative error of the surface area estimator is very close to the minimal error.
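
    A planar (d = 2) sketch of the quadrature error the abstract analyzes, using Cauchy's projection formula (perimeter = π × mean width) for an ellipse, whose width function and exact perimeter are known in closed form:

    ```python
    import numpy as np
    from scipy.special import ellipe

    a, b = 3.0, 1.0                                   # ellipse semi-axes
    true_perimeter = 4 * a * ellipe(1 - (b / a) ** 2) # complete elliptic integral

    def width(theta):
        # Total projection (width) of the ellipse in direction theta
        return 2 * np.sqrt((a * np.cos(theta)) ** 2 + (b * np.sin(theta)) ** 2)

    for k in [2, 3, 4, 8, 16]:
        thetas = np.pi * np.arange(k) / k             # k equally spaced directions
        est = np.pi * np.mean(width(thetas))          # rectangular quadrature rule
        print(f"k={k:2d}: estimate={est:.4f}, "
              f"relative error={(est / true_perimeter - 1):+.2%}")
    ```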

  20. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
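
    A short sketch of the two proposed ECDF-based statistics on a synthetic error set (the threshold and the error distribution are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    errors = rng.normal(0.5, 2.0, 1000)   # deliberately not zero-centered

    abs_err = np.sort(np.abs(errors))
    threshold = 1.0                        # chosen accuracy target, illustrative units
    # (1) probability that a new calculation has |error| below the threshold
    p_below = np.searchsorted(abs_err, threshold) / abs_err.size
    # (2) error amplitude not exceeded with 95% confidence
    q95 = np.quantile(abs_err, 0.95)

    print(f"P(|error| < {threshold}) = {p_below:.2f}")
    print(f"95% of absolute errors fall below {q95:.2f}")
    # Standard error of the probability via the binomial approximation,
    # reflecting the reference dataset size
    print("std. error: %.3f" % np.sqrt(p_below * (1 - p_below) / abs_err.size))
    ```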

  1. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single input single output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-K distribution. Simpler expressions for these error rates are then deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
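
    A Monte Carlo cross-check of the setting (not of the Fox H derivation itself): the empirical symbol error rate of BPSK under additive symmetric α-stable noise, sampled with scipy.stats.levy_stable; the scale and sample size are illustrative.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(9)
    n = 100000
    symbols = rng.choice([-1.0, 1.0], size=n)       # BPSK symbols

    for alpha in [2.0, 1.5, 1.0]:                   # alpha=2 recovers the Gaussian case
        noise = levy_stable.rvs(alpha, beta=0.0, scale=0.5, size=n, random_state=rng)
        decisions = np.where(symbols + noise >= 0, 1.0, -1.0)
        ser = np.mean(decisions != symbols)
        print(f"alpha={alpha}: empirical SER = {ser:.4f}")
    ```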

  2. Quantifying the Contributions of Environmental Parameters to CERES Surface Net Radiation Error in China

    Science.gov (United States)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July 2007 to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  3. Principal distance constraint error diffusion algorithm for homogeneous dot distribution

    Science.gov (United States)

    Kang, Ki-Min; Kim, Choon-Woo

    1999-12-01

    The perceived quality of a halftoned image strongly depends on the spatial distribution of the binary dots. Various error diffusion algorithms have been proposed for realizing a homogeneous dot distribution in the highlight and shadow regions. However, they are computationally expensive and/or require large memory space. This paper presents a new threshold-modulated error diffusion algorithm for homogeneous dot distribution. The proposed method is applied exactly as Floyd-Steinberg's algorithm except for the thresholding process. The threshold value is modulated based on the difference between the distance to the nearest minor pixel, the 'minor pixel distance', and the principal distance. To do so, the minor pixel distance must be calculated for every pixel, which is quite time-consuming and requires large memory resources. To alleviate this problem, a 'minor pixel offset array' that transforms the 2D history of minor pixels into 1D codes is proposed. The proposed algorithm drastically reduces the computational load and memory space needed for calculating the minor pixel distance.
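
    A sketch of the thresholding structure described above, with standard Floyd-Steinberg weights; the threshold_modulation stub below is hypothetical and would be replaced by the paper's minor-pixel-distance term.

    ```python
    import numpy as np

    def threshold_modulation(y, x, out):
        # Stub: returning 0 gives plain Floyd-Steinberg. The paper instead
        # modulates the threshold by (minor pixel distance - principal distance).
        return 0.0

    def error_diffuse(img):          # img in [0, 1], shape (H, W)
        work = img.astype(float).copy()
        out = np.zeros_like(work)
        H, W = work.shape
        for y in range(H):
            for x in range(W):
                t = 0.5 + threshold_modulation(y, x, out)
                out[y, x] = 1.0 if work[y, x] >= t else 0.0
                err = work[y, x] - out[y, x]
                # Diffuse the quantization error with Floyd-Steinberg weights
                if x + 1 < W:
                    work[y, x + 1] += err * 7 / 16
                if y + 1 < H:
                    if x > 0:
                        work[y + 1, x - 1] += err * 3 / 16
                    work[y + 1, x] += err * 5 / 16
                    if x + 1 < W:
                        work[y + 1, x + 1] += err * 1 / 16
        return out

    halftone = error_diffuse(np.full((32, 32), 0.1))   # flat 10% highlight patch
    print("dot fraction:", halftone.mean())
    ```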

  4. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    Science.gov (United States)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data is required for collision risk assessments, but publicly accessible two-line element (TLE) data does not provide orbital error information. This paper compared historical TLE data with GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error variations with longitude and latitude were calculated to analyze the error characteristics and spatial distribution. The results indicate that TLE orbit data are systematically biased owing to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.

  5. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    Science.gov (United States)

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.
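
    A toy sketch of the recalibration idea on synthetic data (the paper's recalibration formula is more elaborate and also involves other variables; the linear m/z drift here is illustrative): fit the systematic component of the ppm mass error and subtract it, tightening the error distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    mz = rng.uniform(300, 2000, 3000)
    # Synthetic ppm errors: a drifting systematic component plus noise
    ppm_error = 2.0 + 0.002 * mz + rng.normal(0, 1.0, 3000)

    A = np.column_stack([np.ones_like(mz), mz])        # linear model in m/z
    coef, *_ = np.linalg.lstsq(A, ppm_error, rcond=None)
    recalibrated = ppm_error - A @ coef

    print(f"SD before: {ppm_error.std():.2f} ppm, after: {recalibrated.std():.2f} ppm")
    # A tighter, near-normal error distribution supports a smaller mass error
    # tolerance (MET) in database searches.
    ```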

  6. Practical Calculation of Thermal Deformation and Manufacture Error in Surface Grinding

    Institute of Scientific and Technical Information of China (English)

    周里群; 李玉平

    2002-01-01

    The paper presents a method to calculate thermal deformation and manufacturing error in surface grinding. The authors establish a simplified temperature field model and derive the thermal deformation of the ground workpiece. It is found that there exists not only an upwarp thermal deformation but also a parallel expansion thermal deformation. The upwarp deformation causes a concave shape error on the profile of the workpiece, and the parallel expansion deformation causes a dimensional error in height. Worked examples are calculated and compared with published experimental data.

  7. Analysis of dose distribution and DVH changes owing to the effect of patient setup error

    International Nuclear Information System (INIS)

    Kim, Kyung Tae; Ju, Sang Gyu; Ahn, Jae Hong; Park, Young Hwan

    2004-01-01

    Setup errors introduced by the patient and staff during radiation treatment can significantly affect the delivered dose, so their influence on the treatment record is important to quantify. This study analyzes the effect of patient setup error on the dose distribution and dose-volume histogram (DVH). A human phantom was CT scanned in a standard position and then rotated and translated to the left in steps of 3 mm, 5 mm, 7 mm, 10 mm, 15 mm and 20 mm to simulate setup errors. For each CT image set, treatment plans commonly used in the clinic were generated with an RTP system: a box plan and a three-dimensional plan (five beams at identical angular spacing), each with PTV margins of CTV+1 cm, CTV+0.5 cm and CTV+0.3 cm. The dose distributions and DVHs of the standard plan and each setup-error plan were then compared. For both the box plan and the three-dimensional plan, rotational and translational errors of 3 mm and 5 mm produced similar dose distributions and DVHs (differences of 0%-2%), whereas errors of 7 mm, 10 mm, 15 mm and 20 mm produced changes large enough to affect treatment (2%-11%). To diminish the effect of setup error, patient motion must be minimized; the development and supply of immobilization accessories that improve reproducibility and reduce movement are therefore important.

  8. Error-resistant distributed quantum computation in a trapped ion chain

    International Nuclear Information System (INIS)

    Braungardt, Sibylle; Sen, Aditi; Sen, Ujjwal; Lewenstein, Maciej

    2007-01-01

    We consider experimentally feasible chains of trapped ions with pseudospin 1/2 and find models that can potentially be used to implement error-resistant quantum computation. Similar in spirit to classical neural networks, the error resistance of the system is achieved by encoding the qubits distributed over the whole system. We therefore call our system a quantum neural network and present a quantum neural network model of quantum computation. Qubits are encoded in a few quasi degenerated low-energy levels of the whole system, separated by a large gap from the excited states and large energy barriers between themselves. We investigate protocols for implementing a universal set of quantum logic gates in the system by adiabatic passage of a few low-lying energy levels of the whole system. Naturally appearing and potentially dangerous distributed noise in the system leaves the fidelity of the computation virtually unchanged, if it is not too strong. The computation is also naturally resilient to local perturbations of the spins

  9. Distribution of the Discretization and Algebraic Error in Numerical Solution of Partial Differential Equations

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Liesen, J.; Strakoš, Z.

    2014-01-01

    Roč. 449, 15 May (2014), s. 89-114 ISSN 0024-3795 R&D Projects: GA AV ČR IAA100300802; GA ČR GA201/09/0917 Grant - others: GA MŠk(CZ) LL1202; GA UK(CZ) 695612 Institutional support: RVO:67985807 Keywords: numerical solution of partial differential equations * finite element method * adaptivity * a posteriori error analysis * discretization error * algebraic error * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014

  10. Dynamic modeling method of the bolted joint with uneven distribution of joint surface pressure

    Science.gov (United States)

    Li, Shichao; Gao, Hongli; Liu, Qi; Liu, Bokai

    2018-03-01

    The dynamic characteristics of bolted joints have a significant influence on the dynamic characteristics of a machine tool. Therefore, establishing a reasonable bolted joint dynamics model helps improve the accuracy of the machine tool dynamics model. Because the pressure distribution on the joint surface is uneven under the concentrated force of the bolts, a dynamic modeling method based on the uneven pressure distribution of the joint surface is presented in this paper to improve the dynamic modeling accuracy of the machine tool. Analytic formulas relating the normal and tangential stiffness per unit area to the surface pressure on the joint surface can be deduced from Hertz contact theory, and the pressure distribution on the joint surface can be obtained with finite element software. Furthermore, the normal and tangential stiffness distributions on the joint surface can be obtained from the analytic formulas and the pressure distribution, and assigned to the finite element model of the joint. The theoretical and experimental mode shapes were compared qualitatively, and the theoretical and experimental modal frequencies were compared quantitatively. The comparison shows that the relative error between the first four theoretical and experimental modal frequencies is 0.2% to 4.2%, and that the first four theoretical and experimental mode shapes are similar and in one-to-one correspondence, verifying the validity of the theoretical model. The dynamic modeling method proposed in this paper can provide a theoretical basis for accurate dynamic modeling of bolted joints in machine tools.

  11. Effect of dimensional error of metallic bipolar plate on the GDL pressure distribution in the PEM fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Dong'an; Peng, Linfa; Lai, Xinmin [State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240 (China)

    2009-01-15

    Recently, the metallic bipolar plate (BPP) has received considerable attention because of its advantageous electrical and mechanical properties. In this study, a methodology based on an FEA model and Monte Carlo simulation is developed to investigate the effect of dimensional error of the metallic BPP on the pressure distribution of the gas diffusion layer (GDL). First, a parameterized FEA model of the metallic BPP/GDL assembly is established, and the heights of the channel and rib are treated as randomly varying parameters with normal distributions due to the dimensional error. Then, GDL pressure distributions with different dimensional errors are obtained based on the Monte Carlo simulation, and the desirability function method is employed to evaluate them. Finally, a regression equation between the GDL pressure distribution and the dimensional error is modeled. With the regression equation, the maximum allowable dimensional error for the metallic BPP is calculated. The methodology in this study can be applied to guide the design and manufacturing of the metallic BPP. (author)
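
    The Monte Carlo part of the methodology can be sketched in a few lines; here a toy response function stands in for the FEA model, and the nominal height, candidate error SDs and tolerance limit are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gdl_pressure_spread(heights):
        """Stand-in for the FEA model: maps sampled channel/rib heights to a
        GDL pressure non-uniformity measure (a real study calls the solver)."""
        return np.std(1.0 / heights)

    nominal_h = 0.5                                  # mm, assumed nominal height
    sigmas = np.linspace(0.002, 0.02, 10)            # candidate dimensional-error SDs
    response = [np.mean([gdl_pressure_spread(rng.normal(nominal_h, s, 8))
                         for _ in range(2000)]) for s in sigmas]

    # regression of pressure non-uniformity on dimensional error, then invert
    slope, intercept = np.polyfit(sigmas, response, 1)
    tolerance = 0.05                                 # assumed acceptability limit
    print("max allowed error SD:", (tolerance - intercept) / slope)
    ```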

  12. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
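
    The first claim can be illustrated with the usual efficiency definition beta = R/C against the Shannon capacity of a Gaussian channel; the code rate and SNR values below are illustrative only.

    ```python
    import numpy as np

    def reconciliation_efficiency(code_rate, snr):
        """beta = R / C with C = 0.5*log2(1+SNR) bits per channel use."""
        return code_rate / (0.5 * np.log2(1.0 + snr))

    print(reconciliation_efficiency(0.5, snr=1.0))  # 1.00 at the design SNR
    print(reconciliation_efficiency(0.5, snr=0.9))  # > 1 below the design SNR,
                                                    # at the cost of word errors
    ```

    A fixed-rate code run on a channel whose true SNR is below its design point yields beta > 1, which is the anomaly the record discusses.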

  13. distribution of refractive errors among school children in abia state of ...

    African Journals Online (AJOL)

    children have greater access to computers, television, better libraries and regular electricity supply, which could also encourage more night reading; this may be an acceptable explanation. However, the researcher recommends that the effects of environmental factors on the distribution of refractive errors be investigated further.

  14. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
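
    The effect of random setup errors on a target dose profile can be previewed by convolving the profile with the setup-error distribution, mirroring the recalculation strategy described above; the profile shape and error widths here are illustrative.

    ```python
    import numpy as np

    def blur_profile(dose, dx_mm, sigma_mm):
        """Convolve a 1-D dose profile with a Gaussian model of random setup
        error (a systematic error would shift, not blur, the profile)."""
        half = int(np.ceil(4 * sigma_mm / dx_mm))
        t = np.arange(-half, half + 1) * dx_mm
        kernel = np.exp(-0.5 * (t / sigma_mm) ** 2)
        return np.convolve(dose, kernel / kernel.sum(), mode="same")

    x = np.linspace(-30, 30, 601)                   # mm
    dose = ((x > -10) & (x < 10)).astype(float)     # idealized 20 mm target profile
    for sigma in (1.0, 2.0, 4.0):
        blurred = blur_profile(dose, dx_mm=0.1, sigma_mm=sigma)
        print(sigma, blurred[np.abs(x) < 8].min())  # minimum target dose falls
                                                    # faster for smaller targets
    ```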

  15. Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage

    Directory of Open Access Journals (Sweden)

    Juha Partala

    2017-01-01

    A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.

  16. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection– diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  17. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
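
    A small demonstration of the record's point, using synthetic lognormal roughness data as a stand-in for measurements: a Gaussian fit pins the mode at the mean, while a skewed fit places it lower.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rms = stats.lognorm.rvs(s=0.4, scale=2.0, size=500, random_state=rng)  # synthetic

    mu, sd = stats.norm.fit(rms)              # symmetric fit: mode = mean
    a, loc, scale = stats.skewnorm.fit(rms)   # skewed alternative

    # mode = peak of the fitted density, found numerically on a grid
    grid = np.linspace(rms.min(), rms.max(), 2000)
    mode_skew = grid[np.argmax(stats.skewnorm.pdf(grid, a, loc, scale))]
    print(f"normal mode (= mean): {mu:.3f}, skew-normal mode: {mode_skew:.3f}")
    ```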

  18. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  19. Goldmann tonometry tear film error and partial correction with a shaped applanation surface.

    Science.gov (United States)

    McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M

    2018-01-01

    The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than the GAT prism (4.57±0.18 mmHg, p<0.001). Tear film adhesion error was independent of applanation mire thickness (R2=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p<0.001). Cadaver eye validation indicated the CATS prism's tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.

  20. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery, to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) design of medication charts, which complicates order processing and record keeping; (2) lack of coordination mechanisms between participants, which results in misalignment of local practices; (3) reliance on restricted communication bandwidth channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange.

  1. LAW DISTRIBUTION APPROXIMATION ON EIGENSTATE ERRORS OF ADS-B BASED ON CUMULANT ANALYSIS OF ADS-B-RAD SYSTEM DATA DISPARITY

    Directory of Open Access Journals (Sweden)

    2017-01-01

    The article deals with a new approximation method for the error distribution of an enhanced-accuracy measurement system. The method is based upon mistie analysis between this system and a more robust, proven one. The method is considered on the example of comparing Automatic Dependent Surveillance - Broadcast (ADS-B) with the ground radar warning system currently in use. The peculiarity of the considered problem is that the target parameter (aircraft swerve value) may drastically change on the scale of both measurement systems' errors during observation. That is why it is impossible to determine the position of the aircraft by repeatedly observing it with the ground radar warning system; it is only possible to compare the systems' one-shot measurements, whose disparities are called errors here. The article assumes that the error probability density of the robust measurement system (the system that has been continuously in operation) is known, that a histogram of the misties is given, and that an asymptotic estimate of the error distribution of the new, improved measurement system is required. The approach is based on cumulant analysis of the measurement systems' error distribution functions, which allows a proper truncation of the corresponding infinite series. The author shows that, because the measurement systems are independent, the cumulants of their error distributions are connected by a simple relation, which allows the values to be calculated easily. To reconstruct the initial form of the distribution, Edgeworth's asymptotic series is used, in which derivatives of the normal distribution serve as basis functions; these are proportional to Hermite polynomials, so the series can be considered an orthogonal decomposition. The author presents calculated distributions of the cross-track coordinate error component, based on experimental error statistics.
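
    The reconstruction step is an Edgeworth series whose basis functions are Gaussian derivatives, i.e., probabilists' Hermite polynomials; a minimal sketch with terms through the fourth cumulant. For independent systems the cumulants of the mistie satisfy k_n(X-Y) = k_n(X) + (-1)^n k_n(Y) in the standard formulation, so the improved system's cumulants follow by subtraction.

    ```python
    import numpy as np
    from scipy.special import eval_hermitenorm
    from scipy.stats import norm

    def edgeworth_pdf(x, mean, var, k3, k4):
        """Edgeworth expansion of a density from its first four cumulants."""
        s = np.sqrt(var)
        z = (np.asarray(x) - mean) / s
        g1, g2 = k3 / s**3, k4 / s**4         # skewness, excess kurtosis
        series = (1.0
                  + g1 / 6.0 * eval_hermitenorm(3, z)
                  + g2 / 24.0 * eval_hermitenorm(4, z)
                  + g1**2 / 72.0 * eval_hermitenorm(6, z))
        return norm.pdf(z) * series / s
    ```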

  2. A study and simulation of the impact of high-order aberrations to overlay error distribution

    Science.gov (United States)

    Sun, G.; Wang, F.; Zhou, C.

    2011-03-01

    With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on single machine overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence of lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of each lens distortion term that impacts SMO was monitored over several days and matched with the measurement results.
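
    The separation between correctable and high-order distortion can be illustrated with a toy 1-D field: fit and remove the linear (translation plus magnification) part, and what remains is the high-order residual seen by IBO overlay metrology. All coefficients below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(-13.0, 13.0, 41)            # field position across the slit (mm)
    dx = 2e-3 * x + 4e-6 * x ** 3 + rng.normal(0.0, 1e-3, x.size)  # toy distortion (um)

    # remove the correctable linear part (translation plus magnification);
    # the residual is the high-order contribution to single machine overlay
    linear = np.polyval(np.polyfit(x, dx, 1), x)
    residual = dx - linear
    print(f"3-sigma high-order overlay residual: {3 * residual.std():.4f} um")
    ```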

  3. Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Suo-Anttila, Jill M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zepper, Ethan T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Koenig, Jerry J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Valdez, Vincent A. [ECI Inc., Albuquerque, NM (United States)

    2017-05-01

    Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.
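
    The combined uncertainty quoted above follows standard root-sum-square practice; a sketch with hypothetical source magnitudes (the actual budget is in the report).

    ```python
    import numpy as np

    # hypothetical relative standard uncertainties (fraction of reading in K)
    sources = {"mounting bias": 0.008, "calibration": 0.003,
               "data acquisition": 0.002, "repeatability": 0.004}

    u_c = np.sqrt(sum(u ** 2 for u in sources.values()))     # combined standard uncertainty
    print(f"U(95%) ~ +/-{100 * 2.0 * u_c:.1f}% of reading")  # coverage factor k = 2
    ```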

  4. A generalized CAPM model with asymmetric power distributed errors with an application to portfolio construction

    NARCIS (Netherlands)

    Bao, T.; Diks, C.; Li, H.

    We estimate the CAPM model on European stock market data, allowing for asymmetric and fat-tailed return distributions using independent and identically asymmetric power distributed (IIAPD) innovations. The results indicate that the generalized CAPM with IIAPD errors has desirable properties.

  5. Towards better error statistics for atmospheric inversions of methane surface fluxes

    Directory of Open Access Journals (Sweden)

    A. Berchet

    2013-07-01

    We adapt general statistical methods to estimate the optimal error covariance matrices in a regional inversion system inferring methane surface emissions from atmospheric concentrations. Using a minimal set of physical hypotheses on the patterns of errors, we compute a guess of the error statistics that is optimal with regard to objective statistical criteria for the specific inversion system. With this very general approach applied to a real-data case, we recover sources of errors in the observations and in the prior state of the system that are consistent with expert knowledge while inferred from objective criteria and with affordable computation costs. Because no specific error patterns are assumed, our results depict the variability and the inter-dependency of errors induced by complex factors such as the misrepresentation of the observations in the transport model or the inability of the model to reproduce well the situations of steep concentration gradients. Situations with probable significant biases (e.g., during the night, when vertical mixing is ill-represented by the transport model) can also be diagnosed by our methods in order to point at necessary model improvements. By additionally analysing the sensitivity of the inversion to each observation, guidelines to enhance data selection in regional inversions are also proposed. We applied our method to a recent significant accidental methane release from an offshore platform in the North Sea and found methane fluxes of the same magnitude as was officially declared.

  6. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis of measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the approach presented here can greatly improve the key rate, especially with large intensity fluctuations and channel attenuation, compared with prior results.

  7. Mitigation of defocusing by statics and near-surface velocity errors by interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2015-08-19

    We propose an interferometric least-squares migration method that can significantly reduce migration artifacts due to statics and errors in the near-surface velocity model. We first choose a reference reflector whose topography is well known from, e.g., well logs. Reflections from this reference layer are correlated with the traces associated with reflections from deeper interfaces to get crosscorrelograms. These crosscorrelograms are then migrated using interferometric least-squares migration (ILSM). In this way statics and velocity errors at the near surface are largely eliminated for the examples in our paper.
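
    A toy demonstration of why correlating with a reference reflection removes statics: a time shift common to both arrivals drops out of the crosscorrelation lag. The traces here are synthetic spikes.

    ```python
    import numpy as np

    def crosscorrelogram(trace_ref, trace_deep):
        """Correlate a reference-layer reflection with a deeper reflection; a
        static shift common to both arrivals cancels in the correlation."""
        return np.correlate(trace_deep, trace_ref, mode="full")

    n = 400
    static = 13                                    # unknown near-surface delay
    ref = np.zeros(n);  ref[100 + static] = 1.0    # reference reflection
    deep = np.zeros(n); deep[250 + static] = 1.0   # deeper reflection
    lag = np.argmax(crosscorrelogram(ref, deep)) - (n - 1)
    print(lag)   # 150: relative delay, independent of the static shift
    ```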

  8. Setup errors and effectiveness of Optical Laser 3D Surface imaging system (Sentinel) in postoperative radiotherapy of breast cancer.

    Science.gov (United States)

    Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong

    2018-05-08

    Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging by the Sentinel system with the cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired, before and immediately after couch movement correction. The correlation between the setup errors as determined by the initial optical surface scan and by CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was considered an estimate of the residual error of the new method for patient setup correction. The consequences in terms of the planning target volume (PTV) margins necessary for treatment sessions without setup correction were evaluated. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin-marker-based patient alignment determined by optical surface scan and by CBCT were correlated, and the residual setup errors as determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving the setup accuracy for breast cancer patients without unnecessary imaging dose.

  9. Estimation of errors due to inhomogeneous distribution of radionuclides in lungs

    International Nuclear Information System (INIS)

    Pelled, O.; German, U.; Pollak, G.; Alfassi, Z.B.

    2006-01-01

    The uncertainty in the activity determination of uranium contamination, arising when a homogeneous distribution is assumed for a distribution that is in reality inhomogeneous, can exceed one order of magnitude when using one detector of a set of 4 detectors covering most of the lungs. Using the information from several detectors may improve the accuracy, as obtained by summing the responses from the 3 or 4 detectors. However, even with this improvement the errors are still very large: up to almost a factor of 10 when the analysis is based on the 92 keV energy peak, and up to 7 for the 185 keV peak.

  10. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

    The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of corrosion rate may therefore lead to serious errors when test conditions change. When rebar corrosion activity is high and/or corrosion is localized, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.

  11. Model error assessment of burst capacity models for energy pipelines containing surface cracks

    International Nuclear Information System (INIS)

    Yan, Zijian; Zhang, Shenwei; Zhou, Wenxing

    2014-01-01

    This paper develops the probabilistic characteristics of the model errors associated with five well-known burst capacity models/methodologies for pipelines containing longitudinally-oriented external surface cracks, namely the Battelle and CorLAS™ models as well as the failure assessment diagram (FAD) methodologies recommended in BS 7910 (2005), API RP579 (2007) and R6 (Rev 4, Amendment 10). A total of 112 full-scale burst test data for cracked pipes subjected to internal pressure only were collected from the literature. The model error for a given burst capacity model is evaluated based on the ratios of the test to predicted burst pressures for the collected data. Analysis results suggest that the CorLAS™ model is the most accurate of the five models considered, and that the Battelle, BS 7910, API RP579 and R6 models are in general conservative; furthermore, the API RP579 and R6 models are markedly more accurate than the Battelle and BS 7910 models. The results will facilitate the development of reliability-based structural integrity management of pipelines. - Highlights: • Model errors for five burst capacity models for pipelines containing surface cracks are characterized. • Basic statistics of the model errors are obtained based on test-to-predicted ratios. • Results will facilitate reliability-based design and assessment of energy pipelines
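
    A sketch of the model-error statistic itself, assuming arrays of measured and predicted burst pressures.

    ```python
    import numpy as np

    def model_error_stats(p_test, p_pred):
        """Mean bias and coefficient of variation of the test-to-predicted
        burst pressure ratios; mean > 1 indicates a conservative model."""
        r = np.asarray(p_test, dtype=float) / np.asarray(p_pred, dtype=float)
        mean = r.mean()
        return mean, r.std(ddof=1) / mean
    ```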

  12. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum key distribution (QKD) is moving from its theoretical foundation of unconditional security toward real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck that unnecessarily limits the final secure key rate of the system. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both on CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  13. Effect of temperature on surface error and laser damage threshold for self-healing BK7 glass.

    Science.gov (United States)

    Wang, Chu; Wang, Hongxiang; Shen, Lu; Hou, Jing; Xu, Qiao; Wang, Jian; Chen, Xianhua; Liu, Zhichao

    2018-03-20

    Cracks caused during the lapping and polishing process can decrease the laser-induced damage threshold (LIDT) of BK7 glass optical elements, which shortens the lifetime and limits the output power of high-energy laser systems. When BK7 glass is heated under appropriate conditions, the surface cracks can exhibit a self-healing phenomenon. In this paper, based on thermodynamics and viscous fluid mechanics theory, the mechanisms of crack self-healing are explained. A heat-healing experiment was carried out, and the effect of water was analyzed. Multi-spatial-frequency analysis was used to investigate the effect of temperature on the surface error of self-healing BK7 glass, and the lapped BK7 glass specimens before and after heat healing were measured by an interferometer and atomic force microscopy. The low-spatial-frequency error was analyzed by peak-to-valley and root-mean-square values, the mid-spatial-frequency error by power spectral density, and the high-spatial-frequency error by surface roughness. The results showed that the optimal heating temperature for BK7 was 450°C; when the heating temperature was higher than the glass transition temperature (555°C), the surface quality decreased significantly. A laser damage test was performed, and the specimen heated at 450°C showed an improvement in LIDT.
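
    Of the three bands, the mid-spatial-frequency analysis has a standard computation; a sketch using a periodogram PSD on a synthetic profile.

    ```python
    import numpy as np
    from scipy.signal import periodogram

    def surface_psd(profile_nm, dx_um):
        """One-dimensional power spectral density of a surface height profile,
        as used for the mid-spatial-frequency error band."""
        freq, psd = periodogram(profile_nm, fs=1.0 / dx_um)  # freq in cycles/um
        return freq[1:], psd[1:]                             # drop the DC term

    x = np.arange(4096) * 0.5                        # 0.5 um sampling
    profile = 5.0 * np.sin(2 * np.pi * x / 80.0)     # toy 80 um ripple, 5 nm amplitude
    f, psd = surface_psd(profile, dx_um=0.5)
    print(f[np.argmax(psd)])                         # ~0.0125 cycles/um
    ```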

  14. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    Directory of Open Access Journals (Sweden)

    Ulrik W Nash

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.

  15. Goldmann tonometry tear film error and partial correction with a shaped applanation surface

    Directory of Open Access Journals (Sweden)

    McCafferty SJ

    2018-01-01

    Sean J McCafferty,1–4 Eniko T Enikov,5 Jim Schwiegerling,2,3 Sean M Ashley1,3 1Intuor Technologies, 2Department of Ophthalmology, University of Arizona College of Medicine, 3University of Arizona College of Optical Science, 4Arizona Eye Consultants, 5Department of Mechanical and Aerospace, University of Arizona College of Engineering, Tucson, AZ, USA Purpose: The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. Methods: The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. Results: The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than the GAT prism (4.57±0.18 mmHg, p<0.001). Tear film adhesion error was independent of applanation mire thickness (R2=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p<0.001). Cadaver eye validation indicated the CATS prism's tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Conclusion: Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.

  16. Modelling Distribution Function of Surface Ozone Concentration for Selected Suburban Areas in Malaysia

    International Nuclear Information System (INIS)

    Muhammad Izwan Zariq Mokhtar; Nurul Adyani Ghazali; Muhammad Yazid Nasir; Norhazlina Suhaimi

    2016-01-01

    Ozone is known as an important secondary pollutant in the atmosphere. The aim of this study is to find the best-fit distribution for calculating the exceedance and return period of ozone in suburban areas: Perak (AMS1) and Pulau Pinang (AMS2). Three distributions, namely Gamma, Rayleigh and Laplace, were used to fit two years of ozone data (2010 and 2011). The parameters were estimated by Maximum Likelihood Estimation (MLE) in order to plot the probability distribution function (PDF) and cumulative distribution function (CDF). Four performance indicators were used to find the best distribution, namely normalized absolute error (NAE), prediction accuracy (PA), coefficient of determination (R2) and root mean square error (RMSE). The best distribution to represent ozone concentration at both sites in 2010 and 2011 is the Gamma distribution, with the smallest error measures (NAE and RMSE) and the highest adequacy measures (PA and R2). For the 2010 data, AMS1 was predicted to exceed 0.1 ppm for 2 days in 2011 with a return period of one occurrence. (author)
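
    A sketch of the fit-and-exceedance computation, assuming daily concentrations; the synthetic data below stand in for the observed ozone series, and the Gamma parameters are invented.

    ```python
    import numpy as np
    from scipy import stats

    # synthetic stand-in for two years of observed ozone concentrations (ppm)
    ozone = stats.gamma.rvs(2.0, scale=0.02, size=730, random_state=0)

    # MLE fit of the Gamma distribution, location fixed at zero
    shape, loc, scale = stats.gamma.fit(ozone, floc=0)

    p_exceed = stats.gamma.sf(0.1, shape, loc=loc, scale=scale)  # P(O3 > 0.1 ppm)
    print(365 * p_exceed)   # expected exceedance days per year
    print(1.0 / p_exceed)   # return period in days, for daily values
    ```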

  17. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding overfitting. The method simultaneously considers the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW). Validation against a range of data sets confirms the applicability of BEEF-vdW to studies in chemistry and condensed matter physics, and applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  18. Distributed error and alarm processing in the CMS data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of operation. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown online by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that have occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used, and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.

  19. On equilibrium charge distribution above dielectric surface

    Directory of Open Access Journals (Sweden)

    Yu.V. Slyusarenko

    2009-01-01

    The problem of the equilibrium state of a charged many-particle system above a dielectric surface is formulated. We consider both the case in which an external attractive pressing field is present and the case of its absence. The equilibrium distributions of the charges and of the electric field generated by these charges are obtained for the case of an ideally plane dielectric surface. The solution of the electrostatic equations of the system in the case of small spatial heterogeneities caused by the dielectric surface is also obtained. These spatial inhomogeneities can be caused both by inhomogeneities of the surface and by an inhomogeneous charge distribution upon it. In particular, the case of a "wavy" spatially periodic surface is considered, taking into account the possible presence of surface charges.

  20. Bayesian linear regression with skew-symmetric error distributions with applications to survival analysis

    KAUST Repository

    Rubio, Francisco J.

    2016-02-09

    We study Bayesian linear regression models with skew-symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade-off between increased model flexibility and the risk of over-fitting. We illustrate the performance of the proposed models with real data. Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information.

  1. Error characterization methods for surface soil moisture products from remote sensing

    International Nuclear Information System (INIS)

    Doubková, M.

    2012-01-01

    To support the operational use of Synthetic Aperture Radar (SAR) earth observation systems, the European Space Agency (ESA) is developing Sentinel-1 radar satellites operating in C-band. Much like its SAR predecessors (Earth Resource Satellite, ENVISAT, and RADARSAT), Sentinel-1 will operate at a medium spatial resolution (ranging from 5 to 40 m), but with a greatly improved revisit period, especially over Europe (∼2 days). Given the planned high temporal sampling and the operational configuration, Sentinel-1 is expected to be beneficial for operational monitoring of dynamic processes in hydrology and phenology. The benefit of a C-band SAR monitoring service in hydrology has already been demonstrated within the scope of the Soil Moisture for Hydrometeorologic Applications (SHARE) project using data from the Global Mode (GM) of the Advanced Synthetic Aperture Radar (ASAR). To fully exploit the potential of the SAR soil moisture products, a well-characterized error needs to be provided with the products. Understanding the errors of remotely sensed surface soil moisture (SSM) datasets is indispensable for their application in models, for the extraction of blended SSM products, and for their use in the evaluation of other soil moisture datasets. This thesis has several objectives. First, it provides the basics and state-of-the-art methods for evaluating measures of SSM, including both standard (e.g., root mean square error, correlation coefficient) and advanced (e.g., error propagation, triple collocation) evaluation measures. A summary of applications of soil moisture datasets is presented, and evaluation measures are suggested for each application according to its requirements on dataset quality. The evaluation of the Advanced Synthetic Aperture Radar (ASAR) Global Mode (GM) SSM using the standard and advanced evaluation measures comprises the second objective of the work. To achieve the second objective, the data from the Australian Water Assessment System
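
    The "advanced" triple collocation measure mentioned above has a compact closed form; a sketch assuming three collocated, linearly rescaled time series with mutually independent errors.

    ```python
    import numpy as np

    def triple_collocation(x, y, z):
        """Classical triple collocation: error variances of three collocated
        SSM time series (e.g., ASAR GM, a model product, in situ data),
        assuming linear calibration to a common reference and mutually
        independent errors."""
        c = np.cov(np.vstack([x, y, z]))
        ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        return ex2, ey2, ez2
    ```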

  2. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
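
    A simplified stand-in for the reinforcement-learning models fitted in the study: with scaling, the effective learning rate becomes alpha/SD, which damps updates when reward variability is high and makes accuracy in estimating the mean less sensitive to the standard deviation. All parameter values are illustrative.

    ```python
    import numpy as np

    def learn_mean(rewards, alpha=0.5, scale_by_sd=True):
        """Prediction-error learner; scaling divides the update by the
        (assumed known or estimated) reward standard deviation."""
        v = 0.0
        sd = max(np.std(rewards), 1e-9)
        for r in rewards:
            delta = r - v                                  # reward prediction error
            gain = alpha / sd if scale_by_sd else alpha
            v += gain * delta
        return v

    rng = np.random.default_rng(3)
    for true_sd in (2.0, 10.0):
        r = rng.normal(20.0, true_sd, 60)
        print(true_sd, learn_mean(r), learn_mean(r, scale_by_sd=False))
    ```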

  3. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  4. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
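
    A sketch of the ARL computation for a one-sided count chart on ZTPD data; the control limit and rate values are invented, and the paper's measurement-error treatment is represented here only by shifting the effective rate.

    ```python
    import numpy as np
    from scipy import stats

    def ztp_pmf(k, lam):
        """Zero-truncated Poisson pmf, P(X = k) for k >= 1."""
        return stats.poisson.pmf(k, lam) / (1.0 - np.exp(-lam))

    def arl(lam, ucl):
        """Average run length of a one-sided count chart on ZTPD data:
        ARL = 1 / P(signal), a signal being a count above the UCL."""
        k = np.arange(1, int(ucl) + 1)
        return 1.0 / (1.0 - ztp_pmf(k, lam).sum())

    print(arl(lam=4.0, ucl=9))   # in-control ARL
    print(arl(lam=4.4, ucl=9))   # measurement error inflating the effective
                                 # rate shortens the ARL
    ```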

  5. DINEOF reconstruction of clouded images including error maps – application to the Sea-Surface Temperature around Corsican Island

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2006-01-01

    We present an extension to the Data INterpolating Empirical Orthogonal Functions (DINEOF) technique which allows not only to fill in clouded images but also to provide an estimation of the error covariance of the reconstruction. This additional information is obtained by an analogy with optimal interpolation. It is shown that the error fields can be obtained with a clever rearrangement of calculations at a cost comparable to that of the interpolation itself. The method is presented on the reconstruction of sea-surface temperature in the Ligurian Sea and around the Corsican Island (Mediterranean Sea), including the calculation of the inter-annual variability of average surface values and their expected errors. The application shows that the error fields not only reflect the data-coverage structure but also the covariances of the physical fields.
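
    The reconstruction core of DINEOF can be sketched as an iterated truncated SVD; the paper's contribution, the error-covariance estimate via the optimal-interpolation analogy, is not reproduced here.

    ```python
    import numpy as np

    def dineof_fill(field, mask, n_modes=5, n_iter=50):
        """EOF-based gap filling in the spirit of DINEOF: alternate between a
        truncated-SVD reconstruction and re-insertion at the cloudy points.
        field : 2-D array (space x time); mask : True where data are missing."""
        filled = np.where(mask, field[~mask].mean(), field)  # initial guess
        for _ in range(n_iter):
            u, s, vt = np.linalg.svd(filled, full_matrices=False)
            recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
            filled[mask] = recon[mask]                       # update gaps only
        return filled
    ```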

  6. Distributed generation incorporated with the thermal generation for optimum operation of a smart grid considering forecast error

    International Nuclear Information System (INIS)

    Howlader, Harun Or Rashid; Matayoshi, Hidehito; Senjyu, Tomonobu

    2015-01-01

    Highlights: • Optimal operation of the thermal generation for the smart grid system. • Different distributed generations are considered as the power generation sources. • Forecast error of the renewable energy systems is considered. • Controllable loads of the smart houses are considered to achieve the optimal operation. • Economical benefits can be achieved for the smart grid system. - Abstract: This paper concentrates on the optimal operation of conventional thermal generators with distributed generation in a smart grid, considering forecast error. The distributed generation consists of wind generators, photovoltaic generators and battery energy storage systems on the supply side and a large number of smart houses on the demand side. A smart house consists of an electric vehicle, a heat pump, a photovoltaic generator and a solar collector. The electric vehicle and heat pump are considered controllable loads which can compensate for the forecast error of renewable energy sources. As a result, the power generation cost of the smart grid can be reduced by coordinating the distributed generation with the thermal unit scheduling process. The electric vehicles of the smart houses are treated as spinning reserve in the scheduling process, which lessens the need for additional operation of thermal units. Finally, the results of the proposed system are compared with a conventional method that does not consider the electric vehicles in the smart houses. The acquired results demonstrate that the total power generation cost of the smart grid is reduced by the proposed method considering forecast error. The effectiveness of the proposed method has been verified by extensive simulation results using MATLAB® software

  7. Shape estimation of the buried body from the ground surface potential distributions generated by current injection; Tsuryu ni yoru chihyomen den`i bunpu wo riyoshita maizobutsu keijo no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Y; Okamoto, Y [Chiba Institute of Technology, Chiba (Japan); Noguchi, K [Waseda University, Tokyo (Japan); Teramachi, Y [University of Industrial Technology, Kanagawa (Japan); Akabane, H; Agu, M [Ibaraki University, Ibaraki (Japan)

    1996-10-01

    Ground surface potential distributions generated by current injection were studied to estimate the shape of buried bodies. Since a uniform ground system including a homogeneous buried body is completely determined by the surface shape of the buried body and the resistivities inside and around it, inversion is straightforward if the surface shape is described with some parameters. N electrodes are arranged in a 2-D grid on the ground; two of them are used for current injection, while the others measure potentials. The measurement is repeated M times while changing the combination of electrodes used for current injection. The potential distribution measured with the m-th electrode pair is represented by an (N-2)-dimensional vector. The squared error between this distribution and the calculated one is a function of the k parameters describing the surface shape and the resistivities of the buried body. Both the shape and the resistivities can be estimated by solving an optimization problem using the squared error as the evaluation function. The analysis is easy for a spherical body with 6 unknown parameters; however, it is difficult for bodies more complex than an ellipsoid, or for more than two bodies. 5 refs., 9 figs.
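
    The optimization framing can be sketched with a toy forward model; a buried point anomaly stands in for the real electrostatics solution, and all geometry and noise values are invented.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def forward(params, xy):
        """Toy forward model: surface potential of a buried point anomaly at
        (x0, y0, depth) with strength q. The real forward model solves the
        electrostatics for a parameterized body surface and resistivities."""
        x0, y0, depth, q = params
        r = np.sqrt((xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2 + depth ** 2)
        return q / r

    rng = np.random.default_rng(4)
    xy = rng.uniform(-5.0, 5.0, size=(40, 2))         # 2-D electrode grid (m)
    true = np.array([1.0, -0.5, 2.0, 3.0])
    v_meas = forward(true, xy) + rng.normal(0.0, 0.01, 40)

    # minimize the squared error between measured and calculated potentials
    fit = least_squares(lambda p: forward(p, xy) - v_meas, x0=[0.0, 0.0, 1.0, 1.0])
    print(fit.x)   # recovered position, depth and strength
    ```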

  8. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
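
    A minimal sketch of the estimation idea, assuming a generic Gaussian-process regression (here via scikit-learn) rather than the authors' specific algorithm; the drifting rate and noise level are synthetic.

    ```python
    # Illustrative sketch: track a slowly drifting error rate from noisy
    # per-window estimates with Gaussian process regression, then extrapolate
    # a prediction with an uncertainty band. All numbers are synthetic.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 50)[:, None]                    # time, arbitrary units
    true_rate = 1.0 + 0.5 * np.sin(0.6 * t.ravel())        # drifting rate (x 1e-3)
    observed = true_rate + rng.normal(0, 0.1, t.shape[0])  # noisy estimates

    kernel = ConstantKernel(1.0) * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel).fit(t, observed)

    mean, std = gp.predict(np.array([[11.0], [12.0]]), return_std=True)
    print(mean, std)  # predicted future error rates (x 1e-3) with uncertainty
    ```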

  9. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
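
    A toy numeric sketch of the two schemes, assuming a purely linear model so that both estimates converge to the same quadrature sum; the sensitivities and sigmas are invented.

    ```python
    # Toy sketch: for a linear observable y = sum_i a_i * x_i, unisim shifts one
    # systematic parameter per run by 1 sigma and adds the shifts in quadrature;
    # multisim draws all parameters from their assumed normals in every run.
    import numpy as np

    rng = np.random.default_rng(2)
    a = np.array([0.5, -1.2, 0.8])      # sensitivities to 3 systematic parameters
    sigma = np.array([1.0, 1.0, 1.0])   # 1-sigma uncertainty of each parameter

    # Unisim: one run per parameter, shifted by +1 sigma.
    unisim_var = sum((a_i * s_i)**2 for a_i, s_i in zip(a, sigma))

    # Multisim: every run draws all parameters; variance taken from the spread.
    draws = rng.normal(0, sigma, size=(10000, 3))
    multisim_var = (draws @ a).var()

    print(unisim_var, multisim_var)  # both approach sum_i (a_i * sigma_i)^2
    ```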

  10. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  11. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can...... positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support...

  12. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motion. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact due to the stability of organ movement under DIBH. The systematic reproducibility is likewise about half of the random error, because the high efficiency of a modern linac can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
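
    One common convention in the setup-error literature (the abstract does not state the exact formulas used) takes the systematic component as the spread of per-patient mean errors and the random component as the quadratic mean of per-patient standard deviations. A sketch with simulated displacements:

    ```python
    # Hedged sketch of one common convention for separating systematic and
    # random setup errors from per-fraction measurements; the paper's exact
    # definitions may differ. All displacement data below are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    # Simulated displacements [mm]: 6 patients x 30 tracked fractions, one axis.
    patient_bias = rng.normal(0, 0.5, size=(6, 1))
    displacements = patient_bias + rng.normal(0, 1.0, size=(6, 30))

    per_patient_mean = displacements.mean(axis=1)
    per_patient_sd = displacements.std(axis=1, ddof=1)

    systematic = per_patient_mean.std(ddof=1)          # Sigma: spread of means
    random_err = np.sqrt((per_patient_sd**2).mean())   # sigma: RMS of SDs
    print(systematic, random_err)
    ```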

  13. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.

  14. The high-level error bound for shifted surface spline interpolation

    OpenAIRE

    Luh, Lin-Tian

    2006-01-01

    Radial function interpolation of scattered data is a frequently used method for multivariate data fitting. One of the most frequently used radial functions is the shifted surface spline, introduced by Dyn, Levin and Rippa in \cite{Dy1} for $R^{2}$. It was then extended to $R^{n}$ for $n\geq 1$. Many articles have studied its properties, as can be seen in \cite{Bu,Du,Dy2,Po,Ri,Yo1,Yo2,Yo3,Yo4}. When dealing with this function, the most commonly used error bounds are the one raised by Wu and S...

  15. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    on basis of reliability profiles for bridges without human errors are extended to include bridges with human errors. The first rehabilitation distributions for bridges without and with human errors are combined into a joint first rehabilitation distribution. The methodology presented is illustrated...... for reinforced concrete bridges....

  16. Effect of surface slope errors of the ellipsoidal mirror on the resolution of the PGM beamline at Indus-1

    International Nuclear Information System (INIS)

    Singh, M.R.; Mukund, R.; Sahni, V.C.

    1999-01-01

    The influence of geometrical shape errors and surface errors on the characteristics and performance of grazing incidence optics used in the design of beamlines at synchrotron radiation facilities is considered. The methodology adopted for the simulation of slope errors is described and results presented for the ellipsoidal focussing mirror used in the design of PGM beamline at Indus-1. (author)

  17. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
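
    A minimal generative sketch of the model as described, assuming illustrative inverse-gamma parameters: each EMG sample is zero-mean Gaussian given its variance, and the variance itself is drawn from an inverse gamma distribution.

    ```python
    # Generative sketch of the described model (parameter values are invented):
    # variance ~ InverseGamma(alpha, scale), sample ~ Normal(0, sqrt(variance)).
    import numpy as np
    from scipy.stats import invgamma

    rng = np.random.default_rng(4)
    alpha, scale = 3.0, 2.0                  # inverse-gamma shape and scale

    variances = invgamma.rvs(alpha, scale=scale, size=1000, random_state=rng)
    emg = rng.normal(0.0, np.sqrt(variances))  # one Gaussian sample per variance

    # The marginal of 'emg' is heavier-tailed than a single Gaussian would be.
    print(emg.var(), scale / (alpha - 1))      # sample variance vs E[variance]
    ```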

  18. Stress errors in a case of developmental surface dyslexia in Filipino.

    Science.gov (United States)

    Dulay, Katrina May; Hanley, J Richard

    2015-01-01

    This paper reports the case of a dyslexic boy (L.A.) whose impaired reading of Filipino is consistent with developmental surface dyslexia. Filipino has a transparent alphabetic orthography with stress typically falling on the penultimate syllable of multisyllabic words. However, exceptions to the typical stress pattern are not marked in the Filipino orthography. L.A. read words with typical stress patterns as accurately as controls, but made many more stress errors than controls when reading Filipino words with atypical stress. He regularized the pronunciation of many of these words by incorrectly placing the stress on the penultimate syllable. Since he also read nonwords as accurately and quickly as controls and performed well on tests of phonological awareness, L.A. appears to present a clear case of developmental surface dyslexia in a transparent orthography.

  19. Relay-aided free-space optical communications using α - μ distribution over atmospheric turbulence channels with misalignment errors

    Science.gov (United States)

    Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.

    2018-06-01

    In this paper, we analyze the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay system whose FSO link is subject to atmospheric-turbulence-induced fading modeled by the α-μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also considers the pointing error effect on the FSO link. A novel and accurate mathematical expression for the probability density function of an FSO link experiencing α-μ distributed atmospheric turbulence in the presence of pointing errors is derived. Further, we present analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function. In addition, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme is derived in terms of the bivariate Fox H-function. Atmospheric turbulence, misalignment errors and various binary modulation schemes for intensity modulation on the optical wireless link are considered in the results. Finally, we analyze each of the three performance metrics at high SNR in order to represent them in terms of elementary functions; the analytical results are supported by computer-based simulations.

  20. Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors

    Directory of Open Access Journals (Sweden)

    Claire E. Bulgin

    2018-04-01

    Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km² (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of a magnitude of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.

  1. Convergent surface water distributions in U.S. cities

    Science.gov (United States)

    M.K. Steele; J.B. Heffernan; N. Bettez; J. Cavender-Bares; P.M. Groffman; J.M. Grove; S. Hall; S.E. Hobbie; K. Larson; J.L. Morse; C. Neill; K.C. Nelson; J. O' Neil-Dunne; L. Ogden; D.E. Pataki; C. Polsky; R. Roy Chowdhury

    2014-01-01

    Earth's surface is rapidly urbanizing, resulting in dramatic changes in the abundance, distribution and character of surface water features in urban landscapes. However, the scope and consequences of surface water redistribution at broad spatial scales are not well understood. We hypothesized that urbanization would lead to convergent surface water abundance and...

  2. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Science.gov (United States)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  3. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all of bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
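
    The additive decomposition claimed for SQ error is easy to verify numerically. The sketch below, with invented bias, variance and noise levels, shows the mean squared error matching bias² + variance + noise, while the mean absolute error admits no comparably simple split.

    ```python
    # Empirical illustration: MSE decomposes additively into bias^2 + model
    # variance + observation noise; MAE does not. All levels are invented.
    import numpy as np

    rng = np.random.default_rng(5)
    bias, model_sd, noise_sd = 0.5, 1.0, 0.3

    preds = bias + rng.normal(0, model_sd, 100000)  # model instability around truth 0
    obs = rng.normal(0, noise_sd, 100000)           # noisy observations of truth 0

    mse = np.mean((preds - obs)**2)
    mae = np.mean(np.abs(preds - obs))
    print(mse, bias**2 + model_sd**2 + noise_sd**2)  # additive decomposition holds
    print(mae)  # no similar additive split into bias/variance/noise terms
    ```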

  4. A measurement strategy and an error-compensation model for the on-machine laser measurement of large-scale free-form surfaces

    International Nuclear Information System (INIS)

    Li, Bin; Li, Feng; Liu, Hongqi; Cai, Hui; Mao, Xinyong; Peng, Fangyu

    2014-01-01

    This study presents a novel measurement strategy and an error-compensation model for the measurement of large-scale free-form surfaces in on-machine laser measurement systems. To improve the measurement accuracy, the effects of the scan depth, surface roughness, incident angle and azimuth angle on the measurement results were investigated experimentally, and a practical measurement strategy considering the position and orientation of the sensor is presented. Also, a semi-quantitative model based on geometrical optics is proposed to compensate for the measurement error associated with the incident angle. The normal vector of the measurement point is determined using a cross-curve method from the acquired surface data. Then, the azimuth angle and incident angle are calculated to inform the measurement strategy and error-compensation model, respectively. The measurement strategy and error-compensation model are verified through the measurement of a large propeller blade on a heavy machine tool in a factory environment. The results demonstrate that the strategy and the model are effective in increasing the measurement accuracy. (paper)

  5. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  6. Error sources in the retrieval of aerosol information over bright surfaces from satellite measurements in the oxygen A band

    Science.gov (United States)

    Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.

    2018-01-01

    Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.

  7. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

    The influence of the fixture on the errors of a machined surface can be very significant. The machined surface errors generated during machining can be measured using a coordinate measurement machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of the component movement, the component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.

  8. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of good radiopharmaceutical practices and quality assurance. The concept of error and its classification into systematic and random errors. Statistical fundamentals: probability theories; population distributions (Bernoulli, Poisson, Gauss, Student's t); the χ² test; error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test table.
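
    As a brief worked example of the Poisson counting statistics mentioned above: for N recorded counts the standard deviation is √N, so the relative error falls as 1/√N with longer counting.

    ```python
    # Poisson counting statistics: sigma = sqrt(N), relative error = 1/sqrt(N).
    import math

    for counts in (100, 10_000, 1_000_000):
        sigma = math.sqrt(counts)
        print(counts, sigma, f"{100 * sigma / counts:.1f}%")  # N, sigma, rel. error
    ```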

  9. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) data with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial for identifying the major error source among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion if the neglected higher degrees are well known from other sources.

  10. Investigating Surface Bias Errors in the Weather Research and Forecasting (WRF) Model using a Geographic Information System (GIS)

    Science.gov (United States)

    2015-02-01

    ... meteorological parameters, which became our focus. We found that elevation accounts for a significant portion of the variance in the model error of surface temperature and relative humidity predictions.

  11. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction errors on the dose distribution in CT-based intracavitary brachytherapy planning, and their effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the high-risk clinical target volume (HR-CTV)

  12. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on a posteriori error estimation is known to be a powerful technique for analyzing practical engineering problems, since it removes the guesswork from mesh subdivision and gives high accuracy at relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom and accuracy of the solution of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flows around a cylinder have demonstrated the very high performance of the proposed adaptive procedure. (author)

  13. Geometry, charge distribution, and surface speciation of phosphate on goethite.

    NARCIS (Netherlands)

    Rahnemaie, R.; Hiemstra, T.; Riemsdijk, van W.H.

    2007-01-01

    The surface speciation of phosphate has been evaluated with surface complexation modeling using an interfacial charge distribution (CD) approach based on ion adsorption and ordering of interfacial water. In the CD model, the charge of adsorbed ions is distributed over two electrostatic potentials in

  14. Diffraction analysis of sidelobe characteristics of optical elements with ripple error

    Science.gov (United States)

    Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie

    2018-03-01

    Ripple errors of a lens lead to optical damage in high-energy laser systems. Analysis of the sidelobe on the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe of optical elements with ripple errors. First, we analyze the characteristics of ripple error and establish the relationship between ripple error and sidelobe: the sidelobe results from the diffraction of the ripple error, and the ripple error tends to be periodic due to the fabrication method used on the optical surface. Simulated experiments are carried out based on the angular spectrum method by characterizing the ripple error as rotationally symmetric periodic structures. The influence of the two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that both the spatial frequency and the peak-to-valley value affect the sidelobe at the image plane: the peak-to-valley value is the major factor affecting the energy proportion of the sidelobe, while the spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.
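
    A hedged one-dimensional illustration of the mechanism (not the paper's simulation): a periodic ripple acts as a phase grating, so its diffraction orders appear as sidelobes around the main focal lobe. The pupil size, ripple pitch and peak-to-valley value below are invented.

    ```python
    # 1-D far-field sketch: a sinusoidal surface ripple on a reflective pupil
    # produces diffraction sidelobes at angles ~ wavelength/period away from
    # the main lobe. All parameters are illustrative.
    import numpy as np

    n, dx = 4096, 1e-4                      # samples and grid step [m]
    x = (np.arange(n) - n // 2) * dx
    wavelength = 632.8e-9                   # HeNe wavelength [m]
    pv, period = 50e-9, 2e-3                # ripple peak-to-valley and pitch [m]

    aperture = np.abs(x) < 0.1              # 0.2 m wide pupil
    ripple = 0.5 * pv * np.sin(2 * np.pi * x / period)          # height error
    field = aperture * np.exp(1j * 4 * np.pi * ripple / wavelength)  # reflection

    far = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(field)))
    psf = np.abs(far)**2 / np.abs(far).max()**2  # normalized to the main lobe

    sidelobe = psf[n // 2 + 50:].max()      # strongest lobe away from the center
    print(sidelobe)  # ~ (pi * pv / wavelength)^2 for small ripple amplitudes
    ```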

  15. Distribution of ¹²⁹I in terrestrial surface water environments

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xuegao [State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing 210098 (China); College of Hydrology and Water Resources, Hohai University, Nanjing (China); Gong, Meng [College of Hydrology and Water Resources, Hohai University, Nanjing (China); Yi, Peng, E-mail: pengyi1915@163.com [State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing 210098 (China); College of Hydrology and Water Resources, Hohai University, Nanjing (China); Aldahan, Ala [Department of Earth Sciences, Uppsala University, Uppsala (Sweden); Department of Geology, United Arab Emirates University, Al Ain (United Arab Emirates); Yu, Zhongbo [State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing 210098 (China); College of Hydrology and Water Resources, Hohai University, Nanjing (China); Possnert, Göran [Tandem Laboratory, Uppsala University, Uppsala (Sweden); Chen, Li [State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing 210098 (China); College of Hydrology and Water Resources, Hohai University, Nanjing (China)

    2015-10-15

    The global distribution of the radioactive isotope iodine-129 in surface waters (lakes and rivers) is presented here and compared with the atmospheric deposition and the distribution in surface marine waters. The results indicate relatively high concentrations in surface water systems in close vicinity of the anthropogenic release sources, as well as in parts of Western Europe, North America and Central Asia. The ¹²⁹I level is generally higher in the terrestrial surface water of the northern hemisphere compared to the southern hemisphere. The highest values of ¹²⁹I appear around 50°N and 40°S in the northern and southern hemispheres, respectively. Direct gaseous and marine atmospheric emissions are the most likely avenues for the transport of ¹²⁹I from the sources to the terrestrial surface waters. To apply iodine-129 as a process tracer in terrestrial surface water environments, more data are needed on ¹²⁹I distribution patterns both locally and globally.

  16. Performance Analysis of Multi-Hop Heterodyne FSO Systems over Malaga Turbulent Channels with Pointing Error Using Mixture Gamma Distribution

    KAUST Repository

    Alheadary, Wael Ghazy

    2017-11-16

    This work investigates the end-to-end performance of a free-space optical amplify-and-forward relaying system using heterodyne detection over Malaga turbulence channels in the presence of pointing errors. In order to overcome the analytical difficulties of the proposed composite channel model, we employ the mixture Gamma (MG) distribution. The proposed model gives a highly accurate and tractable approximation just by adjusting some parameters. More specifically, we derive a new closed-form expression for the average bit error rate employing rectangular quadrature amplitude modulation, in terms of the MG distribution and a generalized power series of the Meijer G-function. The closed-form expression has been validated numerically and asymptotically at high signal-to-noise ratio.

  17. Effect of phase coupling on surface amplitude distribution of wind waves

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.

    Nonlinear features of wind-generated surface waves are considered here to be caused by nonrandomness (non-uniformity) in the phase spectrum. Nonrandomness in recorded waves, if present, would generally be obscured within the error level of the observations...

  18. X-ray evaluation of residual stress distributions within surface machined layer generated by surface machining and sequential welding

    International Nuclear Information System (INIS)

    Taniguchi, Yuu; Okano, Shigetaka; Mochizuki, Masahito

    2017-01-01

    The excessive tensile residual stress generated by welding after surface machining may be an important factor in causing stress corrosion cracking (SCC) in nuclear power plants. We therefore need to understand and control the residual stress distribution appropriately. In this study, residual stress distributions within the surface machined layer generated by surface machining and sequential welding were evaluated by the X-ray diffraction method. Depth-directional distributions were also investigated by electrolytic polishing. In addition, to consider the effect of the work-hardened layer on the residual stress distributions, we also measured the full width at half maximum (FWHM) obtained from X-ray diffraction. The test material was a low-carbon austenitic stainless steel, type SUS316L. Test specimens were prepared by surface machining with different cutting conditions. Then, bead-on-plate welding under the same welding conditions was carried out on the test specimens with different surface machined layers. As a result, the tensile residual stress generated by surface machining increased with increasing cutting speed and showed nearly uniform distributions on the surface. Furthermore, the tensile residual stress decreased drastically with increasing measurement depth within the surface machined layer, approaching 0 MPa after passing through a compressive value. The FWHM also decreased drastically with increasing measurement depth and became almost constant from a certain depth, which was almost equal regardless of the machining condition, within the surface machined layer in all specimens. After welding, the transverse distribution of the longitudinal residual stress varied in the area apart from the weld center according to the machining conditions and had a maximum value in the heat affected zone. The magnitude of the maximum residual stress was almost equal regardless of the machining condition and decreased with increasing measurement depth within the surface machined layer. Finally, the

  19. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  20. Temperature distribution and heat radiation of patterned surfaces at short wavelengths

    Science.gov (United States)

    Emig, Thorsten

    2017-05-01

    We analyze the equilibrium spatial distribution of surface temperatures of patterned surfaces. The surface is exposed to a constant external heat flux and has a fixed internal temperature that is coupled to the outside heat fluxes by finite heat conductivity across the surface. It is assumed that the temperatures are sufficiently high so that the thermal wavelength (a few microns at room temperature) is short compared to all geometric length scales of the surface patterns. Hence the radiosity method can be employed. A recursive multiple scattering method is developed that enables rapid convergence to equilibrium temperatures. While the temperature distributions show distinct dependence on the detailed surface shapes (cuboids and cylinder are studied), we demonstrate robust universal relations between the mean and the standard deviation of the temperature distributions and quantities that characterize overall geometric features of the surface shape.

  1. Errors due to non-uniform distribution of fat in dual X-ray absorptiometry of the lumbar spine

    International Nuclear Information System (INIS)

    Tothill, P.; Pye, D.W.

    1992-01-01

    Errors in spinal dual X-ray absorptiometry (DXA) were studied by analysing X-ray CT scans taken for diagnostic purposes on 20 patients representing a wide range of fat content. The mean difference between the fat thickness over the vertebral bodies and that over a background area in antero-posterior (AP) scanning was 6.7 ± 8.1 mm for men and 13.4 ± 4.7 mm for women. For AP scanning, a non-uniform fat distribution leads to a mean overestimate of 0.029 g/cm² for men and 0.057 g/cm² for women. The error exceeded 0.1 g/cm² in 10% of slices. For lateral scanning, the error exceeded 0.1 g/cm² (about 15% of normal) in a quarter of slices. (author)

  2. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from the ideal positions by more than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing the leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging moving MLC apertures with a digital imager or by analysis of a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation for dynamic IMRT delivery. We model MLC errors as random leaf-positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ∼ TOL). We quantify fluence errors for two cases: (i) Δx₀ ≫ σ (unrestricted normal distribution) and (ii) Δx₀ ≪ σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where
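
    The RLP error model itself is easy to sample; a sketch assuming illustrative σ and Δx₀ values, using scipy's truncated normal:

    ```python
    # Sketch of the error model described above: random leaf-positional (RLP)
    # errors drawn from a normal distribution truncated at +/- dx0 (~ TOL).
    # Values of sigma and dx0 are illustrative only.
    import numpy as np
    from scipy.stats import truncnorm

    sigma, dx0 = 1.0, 2.0                    # mm; dx0 plays the role of TOL
    a, b = -dx0 / sigma, dx0 / sigma         # truncation bounds in sigma units

    errors = truncnorm.rvs(a, b, loc=0.0, scale=sigma, size=100000, random_state=6)
    print(errors.std(), np.abs(errors).max())  # spread and hard cutoff at dx0
    ```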

  3. Angular distribution of sputtered atoms from Al-Sn alloy and surface topography

    International Nuclear Information System (INIS)

    Wang Zhenxia; Pan Jisheng; Zhang Jiping; Tao Zhenlan

    1992-01-01

    If an alloy is sputtered, the angular distribution of the sputtered atoms can be different for each component. At high ion energies, in the range of linear cascade theory, different energy distributions are predicted for components of different mass in the solid. Upon leaving the surface, i.e. overcoming the surface binding energy, these differences should show up as different angular distributions. Differences in the angular distribution are of much practical interest, for example, in thin-film deposition by sputtering and in surface analysis by secondary-ion mass spectroscopy and Auger electron spectroscopy. Recently, our experimental work has shown that for an Fe-W alloy the surface microtopography becomes dominant and determines the shape of the angular distribution of each component. However, with the few experimental results available so far, it is too early to draw any general conclusions about the angular distribution of the sputtered constituents. Thus, the aim of this work was to study further the influence of surface topography on the shape of the angular distribution of atoms sputtered from an Al-Sn alloy. (Author)

  4. Does semantic impairment explain surface dyslexia? VLSM evidence for a double dissociation between regularization errors in reading and semantic errors in picture naming

    Directory of Open Access Journals (Sweden)

    Sara Pillay

    2014-04-01

    The correlation between semantic deficits and exception word regularization errors ("surface dyslexia") in semantic dementia has been taken as strong evidence for involvement of semantic codes in exception word pronunciation. Rare cases with semantic deficits but no exception word reading deficit have been explained as due to individual differences in reading strategy, but this account is hotly debated. Semantic dementia is a diffuse process that always includes semantic impairment, making lesion localization difficult and independent assessment of semantic deficits and reading errors impossible. We addressed this problem using voxel-based lesion symptom mapping in 38 patients with left hemisphere stroke. Patients were all right-handed, native English speakers and at least 6 months from stroke onset. Patients performed an oral reading task that included 80 exception words (words with inconsistent orthographic-phonologic correspondence, e.g., pint, plaid, glove). Regularization errors were defined as plausible but incorrect pronunciations based on application of spelling-sound correspondence rules (e.g., 'plaid' pronounced as "played"). Two additional tests examined explicit semantic knowledge and retrieval. The first measured semantic substitution errors during naming of 80 standard line drawings of objects. This error type is generally presumed to arise at the level of concept selection. The second test (semantic matching) required patients to match a printed sample word (e.g., bus) with one of two alternative choice words (e.g., car, taxi) on the basis of greater similarity of meaning. Lesions were labeled on high-resolution T1 MRI volumes using a semi-automated segmentation method, followed by diffeomorphic registration to a template. VLSM used an ANCOVA approach to remove variance due to age, education, and total lesion volume. Regularization errors during reading were correlated with damage in the posterior half of the middle temporal gyrus and

  5. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
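
    A small numerical illustration of the two statements above, with invented error magnitudes: a shared common error both adds in quadrature to each point's total uncertainty and fills the off-diagonal of the covariance matrix.

    ```python
    # Common ("systematic") plus independent ("statistical") errors: quadrature
    # sum on the diagonal, induced correlation off the diagonal.
    import numpy as np

    sig_stat = np.array([0.10, 0.15, 0.12])   # independent errors of 3 data points
    sig_common = 0.20                          # one common error shared by all

    cov = np.diag(sig_stat**2) + sig_common**2 * np.ones((3, 3))
    total = np.sqrt(np.diag(cov))
    print(total)          # sqrt(sig_stat^2 + sig_common^2) for each point
    print(cov[0, 1])      # off-diagonal = sig_common^2: the induced correlation
    ```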

  6. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data on the Tokyo Stock Exchange. To determine the model parameters we perform Bayesian inference to the model. Bayesian inference is implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with the standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also calculate the accuracy of the volatility by using the realized volatility and find that a good accuracy is obtained for the GARCH model with a rational error distribution. Thus we conclude that the GARCH model with a rational error distribution is superior to the GARCH model with the normal errors and it can be used as an alternative GARCH model to those with other fat-tailed distributions
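
    For orientation, a minimal GARCH(1,1) simulation follows; note that it uses standard normal innovations as a stand-in for the paper's rational error distribution, and the coefficients are illustrative.

    ```python
    # Minimal GARCH(1,1) simulation:
    #   h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},  r_t = sqrt(h_t) * z_t
    # with z_t standard normal here (the paper uses a rational error density).
    import numpy as np

    rng = np.random.default_rng(7)
    omega, alpha, beta = 0.05, 0.1, 0.85     # illustrative, alpha + beta < 1
    n = 1000
    h = np.empty(n)
    r = np.empty(n)
    h[0] = omega / (1 - alpha - beta)        # unconditional variance
    r[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * r[t - 1]**2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    print(r.var(), omega / (1 - alpha - beta))  # sample vs unconditional variance
    ```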

  7. The effect of TWD estimation error on the geometry of machined surfaces in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    In micro EDM (electrical discharge machining) milling, tool electrode wear must be effectively compensated in order to achieve high accuracy of machined features [1]. Tool wear compensation in micro-EDM milling can be based on off-line techniques with limited accuracy such as estimation...... and statistical characterization of the discharge population [3]. The TWD based approach permits the direct control of the position of the tool electrode front surface. However, TWD estimation errors will generate a self-amplifying error on the tool electrode axial depth during micro-EDM milling. Therefore....... The error propagation effect is demonstrated through a software simulation tool developed by the authors for determination of the correct TWD for subsequent use in compensation of electrode wear in EDM milling. The implemented model uses an initial arbitrary estimation of TWD and a single experiment...

  8. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  9. Evaluation of the Surface Water Distribution in North-Central Namibia Based on MODIS and AMSR Series

    Directory of Open Access Journals (Sweden)

    Hiroki Mizuochi

    2014-08-01

    Semi-arid north-central Namibia has high potential for rice cultivation because large seasonal wetlands (oshana) form during the rainy season. Evaluating the distribution of surface water would reveal the area potentially suitable for rice cultivation. In this study, we detected the distribution of surface water with high spatial and temporal resolution by using two types of complementary satellite data: MODIS (MODerate-resolution Imaging Spectroradiometer) and AMSR-E (Advanced Microwave Scanning Radiometer–Earth Observing System; AMSR2 was used after AMSR-E became unavailable). We combined the modified normalized-difference water index (MNDWI) from the MODIS data with the normalized-difference polarization index (NDPI) from the AMSR-E and AMSR2 data to determine the area of surface water. We developed a simple gap-filling method ("database unmixing") with the two indices, thereby providing daily 500-m-resolution MNDWI maps of north-central Namibia regardless of whether the sky was clear. Moreover, through receiver-operating characteristic (ROC) analysis, we determined the threshold MNDWI (−0.316) for wetlands. In the ROC analysis, MNDWI had moderate performance (the area under the ROC curve was 0.747), and the recognition error for seasonal wetlands and dry land was 21.2%. The threshold MNDWI let us calculate probability of water presence (PWP) maps for the rainy season and the whole year. The PWP maps revealed the total area potentially suitable for rice cultivation: 1255 km² (1.6% of the study area).
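
    A hedged sketch of the water-mask step, assuming the common definition MNDWI = (green − SWIR)/(green + SWIR) (the paper's exact band choice is not given here) and the threshold of −0.316 reported above; the reflectance arrays are synthetic.

    ```python
    # Water masking with MNDWI, assuming the common green/SWIR definition and
    # the study's ROC-derived threshold of -0.316. Reflectances are synthetic.
    import numpy as np

    rng = np.random.default_rng(8)
    green = rng.uniform(0.02, 0.30, size=(100, 100))   # green-band reflectance
    swir = rng.uniform(0.02, 0.40, size=(100, 100))    # SWIR-band reflectance

    mndwi = (green - swir) / (green + swir)
    water = mndwi > -0.316                             # threshold from ROC analysis
    print(water.mean())                                # fraction of water pixels
    ```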

  10. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
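
    The probability of undetectable error mentioned here follows directly from the weight distribution: an error pattern goes undetected exactly when it equals a nonzero codeword. A small implementation, illustrated with the classical (7,4) Hamming code rather than the 802.3 polynomial:

    ```python
    # P_ud(p) = sum_{i>=1} A_i * p^i * (1-p)^(n-i), where A_i is the number of
    # codewords of weight i in a linear code of length n, on a binary symmetric
    # channel with bit-error rate p.
    def p_undetected(weights, p):
        n = len(weights) - 1                 # weights[i] = A_i, i = 0..n
        return sum(a * p**i * (1 - p)**(n - i)
                   for i, a in enumerate(weights) if i >= 1 and a)

    # Toy example: the (7,4) Hamming code, A(x) = 1 + 7x^3 + 7x^4 + x^7.
    hamming_7_4 = [1, 0, 0, 7, 7, 0, 0, 1]
    for p in (1e-5, 1e-3, 0.5):
        print(p, p_undetected(hamming_7_4, p))  # at p = 0.5 this gives 15/128
    ```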

  11. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its reliability has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of various-sized wounds (diameter: 0.5-3.5 cm) were taken of a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction of lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (DSLR 3.9431±2.9772 vs smartphone 8.1303±4.8236). However, for wound diameters below 3 cm, the REs of the average values of four photographs were below 5%. In addition, there was no difference in the average value of the wound area taken by smartphone and DSLR camera in those cases. For the follow-up of small skin defects (diameter: <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values provide a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
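
    A minimal sketch of the relative-error statistic used above, with hypothetical measured areas (the paper's own image-processing pipeline is not reproduced here):

```python
def relative_error(measured: float, reference: float) -> float:
    """Relative error (RE) in percent of a calculated wound area
    against the known reference area."""
    return abs(measured - reference) / reference * 100.0

# Four hypothetical photographs of the same 3.14 cm^2 wound (2 cm diameter).
areas = [3.05, 3.22, 3.30, 2.98]
mean_area = sum(areas) / len(areas)
print(f"RE of mean: {relative_error(mean_area, 3.14):.2f}%")  # averaging reduces RE
```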

  12. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. In addition, we adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  13. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by surface-trap techniques can be divided into two groups: errors of surface ²¹⁰Pb (²¹⁰Po) activity measurements, and uncertainties of the transfer from ²¹⁰Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%.
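
    A sketch of how such uncertainty components might be combined, assuming the usual root-sum-of-squares rule for independent relative uncertainties; the individual component values below are illustrative, not the paper's:

```python
import math

def combined_relative_error(*components: float) -> float:
    """Combine independent relative uncertainty components in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative components for a surface-trap retrospective measurement:
# reference-equipment bias, calibration (Poisson + non-Poisson),
# measurement (Poisson + non-Poisson), and Pb-210 -> radon transfer model.
u = combined_relative_error(0.10, 0.12, 0.15, 0.25)
print(f"total relative error ~ {u:.0%}")  # ~33%, near the ~35% quoted above
```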

  14. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model.

    Directory of Open Access Journals (Sweden)

    Kyung-Min Lee

    Full Text Available The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scans. Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors showed larger values in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present for the left-side rotation, which was opposite to the direction of the scanner rotation (P < .05). Patient movement during CBCT scan might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration to prevent patient movement during CBCT scans, particularly horizontal movement.

  15. Influence of material surface on the scanning error of a powder-free 3D measuring system.

    Science.gov (United States)

    Kurz, Michael; Attin, Thomas; Mehl, Albert

    2015-11-01

    This study aims to evaluate the accuracy of a powder-free three-dimensional (3D) measuring system (CEREC Omnicam, Sirona), when scanning the surface of a material at different angles. Additionally, the influence of water was investigated. Nine different materials were combined with human tooth surface (enamel) to create n = 27 specimens. These materials were: Controls (InCoris TZI and Cerec Guide Bloc), ceramics (Vitablocs® Mark II and IPS Empress CAD), metals (gold and amalgam) and composites (Tetric Ceram, Filtek Supreme A2B and A2E). The highly polished samples were scanned at different angles with and without water. The 216 scans were then analyzed and descriptive statistics were obtained. The height difference between the tooth and material surfaces, as measured with the 3D scans, ranged from 0.83 μm (±2.58 μm) to -14.79 μm (±3.45 μm), while the scan noise on the materials was between 3.23 μm (±0.79 μm) and 14.24 μm (±6.79 μm) without considering the control groups. Depending on the thickness of the water film, measurement errors in the order of 300-1,600 μm could be observed. The inaccuracies between the tooth and material surfaces, as well as the scan noise for the materials, were within the range of error for measurements used for conventional impressions and are therefore negligible. The presence of water, however, greatly affects the scan. The tested powder-free 3D measuring system can safely be used to scan different material surfaces without the prior application of a powder, although drying of the surface prior to scanning is highly advisable.

  16. Understanding reliance on automation: effects of error type, error distribution, age and experience

    Science.gov (United States)

    Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka

    2015-01-01

    An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest that the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142

  17. Assessing Variability and Errors in Historical Runoff Forecasting with Physical Models and Alternative Data Sources

    Science.gov (United States)

    Penn, C. A.; Clow, D. W.; Sexstone, G. A.

    2017-12-01

    Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a

  18. Ion distributions at charged aqueous surfaces: Synchrotron X-ray scattering studies

    Energy Technology Data Exchange (ETDEWEB)

    Bu, Wei [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Surface sensitive synchrotron X-ray scattering studies were performed to obtain the distribution of monovalent ions next to a highly charged interface at room temperature. To control the surface charge density, the lipids dihexadecyl hydrogen-phosphate (DHDP) and dimyristoyl phosphatidic acid (DMPA) were spread as monolayer materials at the air/water interface over subphases containing CsI at various concentrations. Five decades in bulk CsI concentration are investigated, demonstrating that the interfacial distribution is strongly dependent on bulk concentration. We show that this is due to the strong binding constant of hydronium H3O+ to the phosphate group, leading to proton transfer back to the phosphate group and to a reduced surface charge. Using anomalous reflectivity off and at the Cs+ L3 resonance, we provide spatial counterion (Cs+) distributions next to the negatively charged interfaces. The experimental ion distributions are in excellent agreement with a renormalized surface charge Poisson-Boltzmann theory for monovalent ions, without fitting parameters or additional assumptions. Energy scans at four fixed momentum transfers under specular reflectivity conditions near the Cs+ L3 resonance were conducted on 10⁻³ M CsI with DHDP monolayer materials on the surface. The energy scans exhibit a periodic dependence on photon momentum transfer. The ion distributions obtained from the analysis are in excellent agreement with those obtained from the anomalous reflectivity measurements, providing further confirmation of the validity of the renormalized surface charge Poisson-Boltzmann theory for monovalent ions. Moreover, the dispersion corrections f′ and f″ for Cs+ around the L3 resonance, revealing the local environment of a Cs+ ion in the solution at the interface, were extracted simultaneously with the ion distributions.
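
    For a planar charged surface in a 1:1 electrolyte, the Poisson-Boltzmann counterion profile has the closed-form Gouy-Chapman solution; a sketch evaluating it for 10⁻³ M CsI, where the (renormalized) surface potential is an assumed value rather than the paper's fitted one:

```python
import numpy as np

kB_T_over_e = 0.0257          # thermal voltage at 298 K [V]
eps_w = 78.5 * 8.854e-12      # permittivity of water [F/m]
e = 1.602e-19                 # elementary charge [C]
n0 = 1e-3 * 1000 * 6.022e23   # bulk number density for 10^-3 M [1/m^3]

kappa = np.sqrt(2 * n0 * e**2 / (eps_w * 1.381e-23 * 298))  # inverse Debye length
psi0 = -0.10                  # assumed (renormalized) surface potential [V]

z = np.linspace(0, 30e-9, 200)
gamma = np.tanh(psi0 / (4 * kB_T_over_e))
psi = 2 * kB_T_over_e * np.log((1 + gamma * np.exp(-kappa * z)) /
                               (1 - gamma * np.exp(-kappa * z)))
cs_profile = n0 * np.exp(-psi / kB_T_over_e)   # Cs+ (counterion) enhancement
print(f"Debye length = {1/kappa*1e9:.1f} nm, "
      f"surface enhancement = {cs_profile[0]/n0:.0f}x")
```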

  19. Surface roughness considerations for atmospheric correction of ocean color sensors. I - The Rayleigh-scattering component. II - Error in the retrieved water-leaving radiance

    Science.gov (United States)

    Gordon, Howard R.; Wang, Menghua

    1992-01-01

    The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  20. Inverse estimation for temperatures of outer surface and geometry of inner surface of furnace with two layer walls

    International Nuclear Information System (INIS)

    Chen, C.-K.; Su, C.-R.

    2008-01-01

    This study provides an inverse analysis to estimate the boundary thermal behavior of a furnace with two-layer walls. The unknown temperature distribution of the outer surface and the geometry of the inner surface were estimated from the temperatures of a small number of measured points within the furnace wall. The present approach rearranged the matrix forms of the governing differential equations and then combined the reversed matrix method, the linear least squares error method and the concept of virtual area to determine the unknown boundary conditions of the furnace system. The dimensionless temperature data obtained from the direct problem were used to simulate the temperature measurements. The influence of temperature measurement errors upon the precision of the estimated results was also investigated. The advantage of this approach is that the unknown condition can be solved directly by a single calculation, without initially guessed temperatures, so the iteration process of the traditional method can be avoided in the analysis of the heat transfer. Therefore, the calculation in this work is more rapid and exact than the traditional method. The results showed that the estimation error of the geometry increased with increasing distance between the measured points and the inner surface, with increasing preset measurement error, and with decreasing number of measured points. However, the geometry of the furnace inner surface could be successfully estimated from only the temperatures of a small number of measured points within and near the outer surface, under a reasonable preset error.
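
    A schematic of the one-shot linear least squares step: if the forward conduction problem is discretized so that interior temperatures depend linearly on the unknown boundary values, T_meas = A·q, the boundary can be recovered directly without an initial guess. The sensitivity matrix below is a random stand-in, not a furnace model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(12, 4))    # stand-in sensitivity matrix:
                                           # 12 interior sensors, 4 boundary unknowns
q_true = np.array([900.0, 950.0, 1000.0, 980.0])   # unknown boundary temperatures
T_meas = A @ q_true + rng.normal(0, 1.0, 12)       # simulated noisy measurements

# One direct solve, no initial guess or iteration, as in the approach above.
q_est, *_ = np.linalg.lstsq(A, T_meas, rcond=None)
print(np.round(q_est, 1))
```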

  1. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced up to a factor of 3.4, to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, by using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted and the modelling performance of spatial snow distribution is improved.
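
    A toy sketch of the scaling idea: multiply interpolated snowfall by the measured snow-depth map normalized by its domain mean, so precipitation is redistributed while the domain total is preserved where it snows. The function and array names are illustrative; the actual Alpine3D preprocessing is more involved:

```python
import numpy as np

def scale_snowfall(precip_interp, snow_depth_map, air_temp, t_threshold=1.0):
    """Redistribute interpolated precipitation with a measured snow-depth map.
    Scaling is applied only where the temperature threshold says it is snowing."""
    weights = snow_depth_map / snow_depth_map.mean()   # relative deposition pattern
    snowing = air_temp < t_threshold                   # simple phase threshold [deg C]
    return np.where(snowing, precip_interp * weights, precip_interp)

precip = np.full((4, 4), 5.0)                          # mm, from AWS interpolation
hs = np.array([[0.5, 1.0, 1.5, 2.0]] * 4)              # ADS snow-depth map [m]
temp = np.full((4, 4), -3.0)                           # everywhere below threshold
print(scale_snowfall(precip, hs, temp))                # domain total stays 80 mm
```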

  2. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  3. A Torque Error Compensation Algorithm for Surface Mounted Permanent Magnet Synchronous Machines with Respect to Magnet Temperature Variations

    Directory of Open Access Journals (Sweden)

    Chang-Seok Park

    2017-09-01

    Full Text Available This paper presents a torque error compensation algorithm for a surface mounted permanent magnet synchronous machine (SPMSM) through real-time permanent magnet (PM) flux linkage estimation at various temperature conditions, from medium to rated speed. As is known, the PM flux linkage in SPMSMs varies with the thermal conditions. Since the maximum torque per ampere look-up table, a control method used for copper loss minimization, is developed based on the estimated PM flux linkage, variation of the PM flux linkage results in undesired torque development in SPMSM drives. In this paper, the PM flux linkage is estimated through a stator flux linkage observer and the torque error is compensated in real time using the estimated PM flux linkage. The proposed torque error compensation algorithm is verified in simulation and experiment.
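
    In an SPMSM the electromagnetic torque is T_e = (3/2)·p·ψ_PM·i_q, so any drift in the PM flux linkage ψ_PM maps directly into a torque error. A simplified sketch of the compensation idea, rescaling the q-axis current reference with the observer's flux estimate (the numbers are illustrative):

```python
def torque(p_pairs: int, psi_pm: float, i_q: float) -> float:
    """SPMSM torque: T = 1.5 * pole_pairs * PM_flux_linkage * q-axis current."""
    return 1.5 * p_pairs * psi_pm * i_q

def compensated_iq(t_ref: float, p_pairs: int, psi_est: float) -> float:
    """Rescale the q-axis current reference with the observer's flux estimate."""
    return t_ref / (1.5 * p_pairs * psi_est)

p, psi_cold, psi_hot = 4, 0.100, 0.094   # flux linkage drops as magnets heat up
iq_nominal = compensated_iq(10.0, p, psi_cold)
print(torque(p, psi_hot, iq_nominal))                        # ~9.4 Nm: torque deficit
print(torque(p, psi_hot, compensated_iq(10.0, p, psi_hot)))  # 10.0 Nm restored
```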

  4. MALDI-MS Imaging Analysis of Fungicide Residue Distributions on Wheat Leaf Surfaces.

    Science.gov (United States)

    Annangudi, Suresh P; Myung, Kyung; Avila Adame, Cruz; Gilbert, Jeffrey R

    2015-05-05

    Improved retention and distribution of agrochemicals on plant surfaces are important attributes in the biological activity of pesticides. Although the retention of agrochemicals on plants after spray application can be quantified using traditional analytical techniques, including LC or GC, the spatial distribution of agrochemicals on plant surfaces has received little attention. Matrix-assisted laser desorption/ionization (MALDI) imaging technology has been widely used to determine the distribution of proteins, peptides and metabolites in different tissue sections, but its application to environmental research has been limited. Herein, we probed the potential utility of MALDI imaging in characterizing the distribution of three commercial fungicides on wheat leaf surfaces. Using this MALDI imaging method, we were able to detect 500 ng of epoxiconazole, azoxystrobin, and pyraclostrobin applied in a 1 μL drop on the leaf surfaces. Subsequent dilutions of pyraclostrobin revealed that the compound can be chemically imaged on the leaf surfaces at levels as low as 60 ng total applied in the area of a 1 μL droplet. After application of epoxiconazole, azoxystrobin, and pyraclostrobin at a field rate of 100 g a.i./ha in 200 L water using a track sprayer system, residues of these fungicides on the leaf surfaces were sufficiently visualized. These results suggest that MALDI imaging can be used to monitor the spatial distribution of agrochemicals on leaf samples after pesticide application.

  5. Effect of assembly error of bipolar plate on the contact pressure distribution and stress failure of membrane electrode assembly in proton exchange membrane fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Dong'an; Peng, Linfa; Lai, Xinmin [State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2010-07-01

    In practice, the assembly error of the bipolar plate (BPP) in a PEM fuel cell stack is unavoidable with the current assembly process; however, its effect on the performance of the PEM fuel cell stack has not yet been reported. In this study, a methodology based on an FEA model, 'least squares support vector machine (LS-SVM)' simulation and statistical analysis is developed to investigate the effect of the assembly error of the BPP on the pressure distribution and stress failure of the membrane electrode assembly (MEA). First, a parameterized FEA model of a metallic BPP/MEA assembly is established. Then, the LS-SVM simulation process is conducted based on the FEA model, and datasets for the pressure distribution and Von Mises stress of the MEA are obtained for each assembly error. Finally, the effect of the assembly error is obtained by applying statistical analysis to the LS-SVM results. A regression equation between the stress failure and the assembly error is also built, and the allowed maximum assembly error is calculated based on the equation. The methodology in this study is helpful for understanding the mechanism of the assembly error and can be applied to guide the assembly process for PEM fuel cell stacks. (author)
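
    A sketch of the surrogate-modeling step: kernel ridge regression (closely related to LS-SVM regression, which is used here as a stand-in) fitted on FEA samples relating assembly error to peak MEA stress, then used to locate an allowed maximum error. All numbers, including the failure-stress limit, are synthetic:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
misalign = rng.uniform(0.0, 0.5, 40)[:, None]   # assembly error [mm], FEA inputs
stress = 20 + 35 * misalign.ravel() ** 1.5 + rng.normal(0, 0.5, 40)  # peak stress [MPa]

# Kernel ridge regression stands in for the LS-SVM surrogate of the FEA model.
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=10.0).fit(misalign, stress)

grid = np.linspace(0, 0.5, 501)[:, None]
limit = 30.0                                    # assumed MEA failure stress [MPa]
allowed = grid[model.predict(grid) < limit].max()
print(f"allowed maximum assembly error ~ {allowed:.3f} mm")
```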

  6. Progressive retry for software error recovery in distributed systems

    Science.gov (United States)

    Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.

    1993-01-01

    In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
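
    A schematic of the progressive-retry idea: each attempt escalates the recovery scope, from deterministic replay to message reordering to a wider rollback. The level semantics and names below are illustrative, not the paper's implementation:

```python
def progressive_retry(operation, max_level=3):
    """Retry an operation with progressively larger recovery scope:
    level 0 -> plain replay from the latest checkpoint (deterministic),
    level 1 -> replay with messages reordered (new nondeterminism),
    level 2 -> roll back further and replay a larger message window,
    level 3 -> restart from an earlier checkpoint involving more processes."""
    for level in range(max_level + 1):
        try:
            return operation(level)
        except RuntimeError as err:
            print(f"retry level {level} failed: {err}")
    raise RuntimeError("all retry levels exhausted")

# A fault that recurs under deterministic replay but is bypassed once
# message reordering perturbs the execution.
def flaky_transfer(level):
    if level < 1:
        raise RuntimeError("same interleaving, same software error")
    return "delivered"

print(progressive_retry(flaky_transfer))
```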

  7. The preliminary study of setup errors' impact on dose distribution of image guide radiation therapy for head and neck cancer

    International Nuclear Information System (INIS)

    Xu Luying; Pan Jianji; Wang Xiaoliang; Bai Penggang; Li Qixin; Fei Zhaodong; Chen Chuanben; Ma Liqin; Tang Tianlan

    2011-01-01

    Objective: To measure the set-up errors of patients with head and neck (H and N) cancer during image guided intensity-modulated radiotherapy (IMRT) treatment, to analyze the impact of set-up errors on the dose distribution, and to further investigate the necessity of online adjustment for H and N cancer during IMRT treatment. Methods: Cone-beam CT (CBCT) scans of thirty patients with H and N cancer were acquired once weekly, for a total of 6 times, during IMRT treatment. The CBCT images and the original planning CT images were matched on bony structure to work out the translational errors along the x, y, z axes, as well as the rotational errors. The dose distributions were recalculated based on the data of each set-up error. The doses to the planning target volume (PTV) and organs at risk were calculated in the re-planning and then compared with the original plan by paired t-test. Results: The mean x, y, z axis translational set-up errors were (1.06 ± 0.95) mm, (0.95 ± 0.77) mm and (1.31 ± 1.07) mm, respectively. The rotational errors about the x, y, z axes were (1.04 ± 0.79)°, (1.06 ± 0.89)° and (0.81 ± 0.61)°, respectively. The PTV 95% volume dose (D95) and the PTV minimal dose in the re-planning for the 6 set-ups were lower than in the original plan (6526.6 cGy vs 6630.3 cGy, t = 3.98, P = 0.000 and 5632.6 cGy vs 5792.5 cGy, t = -2.89, P = 0.007). The brain stem volume receiving a 45 Gy dose (V45) and the dose to 1% of the brain stem volume (D01) were higher than in the original plan (3.54% vs 2.75%, t = 3.84, P = 0.001 and 5129.7 cGy vs 4919.3 cGy, t = 4.36, P = 0.000). Conclusions: The set-up errors led to an obviously insufficient PTV D95 and significantly increased the V45 and D01 of the brainstem. Therefore, online adjustment is necessary for H and N cancer during IMRT treatment. (authors)

  8. Cloud Masking and Surface Temperature Distribution in the Polar Regions Using AVHRR and other Satellite Data

    Science.gov (United States)

    Comiso, Joey C.

    1995-01-01

    Surface temperature is one of the key variables associated with weather and climate. Accurate measurements of surface air temperatures are routinely made in meteorological stations around the world, and satellite data have been used to produce synoptic global temperature distributions. However, not much attention has been paid to temperature distributions in the polar regions, where the number of stations is very sparse. Because of adverse weather conditions and general inaccessibility, surface field measurements are also limited. Furthermore, accurate retrievals from satellite data in the region have been difficult to make because of persistent cloudiness and ambiguities in the discrimination of clouds from snow or ice. Surface temperature observations are required in the polar regions for air-sea-ice interaction studies, especially in the calculation of heat, salinity, and humidity fluxes. They are also useful in identifying areas of melt or melt ponding within the sea ice pack and the ice sheets, and in the calculation of emissivities of these surfaces. Moreover, the polar regions are unique in that they are the sites of temperature extremes, the location of which is difficult to identify without a global monitoring system. Furthermore, the regions may provide an early signal of a potential climate change, because such a signal is expected to be amplified in the region due to feedback effects. In cloud-free areas, the thermal channels from infrared systems provide surface temperatures at relatively good accuracies. Previous capabilities include the use of the Temperature Humidity Infrared Radiometer (THIR) onboard the Nimbus-7 satellite, which was launched in 1978. Current capabilities include the use of the Advanced Very High Resolution Radiometer (AVHRR) aboard NOAA satellites. Together, these two systems cover a span of 16 years of thermal infrared data. Techniques for retrieving surface temperatures with these sensors in the polar regions have

  9. The Transit-Time Distribution from the Northern Hemisphere Midlatitude Surface

    Science.gov (United States)

    Orbe, Clara; Waugh, Darryn W.; Newman, Paul A.; Strahan, Susan; Steenrod, Stephen

    2015-01-01

    The distribution of transit times from the Northern Hemisphere (NH) midlatitude surface is a fundamental property of tropospheric transport. Here we present an analysis of the transit time distribution (TTD) since air last contacted the northern midlatitude surface layer, as simulated by the NASA Global Modeling Initiative Chemistry Transport Model. We find that throughout the troposphere the TTD is characterized by long flat tails that reflect the recirculation of old air from the Southern Hemisphere and result in mean ages that are significantly larger than the modal age. Key aspects of the TTD -- its mode, mean and spectral width -- are interpreted in terms of tropospheric dynamics, including seasonal shifts in the location and strength of tropical convection and variations in quasi-isentropic transport out of the northern midlatitude surface layer. Our results indicate that current diagnostics of tropospheric transport are insufficient for comparing model transport and that the full distribution of transit times is a more appropriate constraint.
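
    A small numerical illustration of why a long-tailed TTD pushes the mean age well past the mode, using a lognormal as a stand-in age spectrum (the model's actual TTDs are not lognormal; this only shows the moment separation):

```python
import numpy as np

mu, sigma = np.log(0.5), 1.0          # stand-in age-spectrum parameters [years]
mode = np.exp(mu - sigma**2)          # lognormal modal transit time
mean = np.exp(mu + sigma**2 / 2)      # lognormal mean transit time

print(f"modal age = {mode:.2f} yr, mean age = {mean:.2f} yr")
# The flat tail (recirculated old Southern Hemisphere air) makes mean >> mode.
```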

  10. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
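
    A sketch of the validation idea: if the predicted covariance is realistic and the errors are Gaussian, the squared Mahalanobis distances of the position errors should follow a chi-squared distribution with 3 degrees of freedom. The data below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
P = np.diag([100.0, 400.0, 25.0])        # predicted position covariance [m^2]
errors = rng.multivariate_normal(np.zeros(3), P, size=500)  # observed miss vectors

# Squared Mahalanobis distances should be chi-square(3) if P is right-sized.
m2 = np.einsum("ij,jk,ik->i", errors, np.linalg.inv(P), errors)
stat, p_value = stats.kstest(m2, stats.chi2(df=3).cdf)
print(f"KS p-value = {p_value:.2f}  (low p would suggest correcting/scaling P)")
```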

  11. Impact of dose-distribution uncertainties on rectal ntcp modeling I: Uncertainty estimates

    International Nuclear Information System (INIS)

    Fenwick, John D.; Nahum, Alan E.

    2001-01-01

    A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding

  12. Forecast errors in dust vertical distributions over Rome (Italy): Multiple particle size representation and cloud contributions

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Shtivelman, A.; Krichak, S. O.; Joseph, J. H.; Kallos, G.; Katsafados, P.; Spyrou, C.; Gobbi, G. P.; Barnaba, F.; Nickovic, S.; PéRez, C.; Baldasano, J. M.

    2007-08-01

    In this study, forecast errors in dust vertical distributions were analyzed. This was carried out by using quantitative comparisons between dust vertical profiles retrieved from lidar measurements over Rome, Italy, performed from 2001 to 2003, and those predicted by models. Three models were used: the four-particle-size Dust Regional Atmospheric Model (DREAM), the older one-particle-size version of the SKIRON model from the University of Athens (UOA), and the pre-2006 one-particle-size Tel Aviv University (TAU) model. SKIRON and DREAM are initialized on a daily basis using the dust concentration from the previous forecast cycle, while the TAU model initialization is based on the Total Ozone Mapping Spectrometer aerosol index (TOMS AI). The quantitative comparison shows that (1) the use of four-particle-size bins in the dust modeling instead of only one-particle-size bins improves dust forecasts; (2) cloud presence could contribute to noticeable dust forecast errors in SKIRON and DREAM; and (3) as far as the TAU model is concerned, its forecast errors were mainly caused by technical problems with TOMS measurements from the Earth Probe satellite. As a result, dust forecast errors in the TAU model could be significant even under cloudless conditions. The DREAM versus lidar quantitative comparisons at different altitudes show that the model predictions are more accurate in the middle part of dust layers than in the top and bottom parts of dust layers.

  13. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probability distributions.
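
    A quick numerical illustration of these points, assuming a lognormal in place of the Gaussian for an inherently positive parameter and a nonlinear functional relation:

```python
import numpy as np

rng = np.random.default_rng(4)
mean, rel_err = 1.0, 0.6            # a "large error": 60% relative uncertainty

# A Gaussian with this spread assigns unphysical negative values ...
gauss = rng.normal(mean, rel_err * mean, 100_000)
print(f"negative Gaussian samples: {(gauss <= 0).mean():.1%}")

# ... so use a lognormal with the same mean and relative spread instead.
sigma = np.sqrt(np.log(1 + rel_err**2))
logn = rng.lognormal(np.log(mean) - sigma**2 / 2, sigma, 100_000)

# A non-linear relation such as y = x**2 amplifies the relative error
# ("error amplification" under severe conditions).
y = logn**2
print(f"input rel. err. {logn.std()/logn.mean():.2f} -> "
      f"output rel. err. {y.std()/y.mean():.2f}")
```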

  14. Surface modification and particles size distribution control in nano-CdS/polystyrene composite film

    International Nuclear Information System (INIS)

    Min Zhirong; Ming Qiuzhang; Hai Chunliang; Han Minzeng

    2003-01-01

    The preparation of nano-CdS particles with surface thiol modification by the microemulsion method, and the influence of the modification on the particle size distribution in highly filled polystyrene-based composites, were studied. The modified nano-CdS was characterized by X-ray photoelectron spectroscopy (XPS) and light absorption and emission measurements to reveal the morphologies of the surface modifier, which are consistent with the packing calculation of the surface molecules. The morphology of the surface modifier exerted a great influence not only on the optical performance of the particles themselves, but also on the size distribution of the particles in the polystyrene matrix. A monolayer coverage with tightly packed thiol molecules was believed to be most effective in promoting a uniform particle size distribution and eliminating the surface defects that cause radiationless recombination. Control of the particle size distribution in polystyrene can be attained by adjusting the surface coverage status of the thiol molecules, based on the strong interaction between the surface modifier and the matrix.

  15. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to the manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of Supplement 1 of the GUM. (paper)
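
    A minimal sketch of the two evaluations compared above: type B from the manufacturer's limits via the maximum-entropy (uniform) assumption, type A from repeated observations. The numbers are illustrative:

```python
import math
import statistics

# Type B: offset specified as +/- a by the manufacturer; maximum entropy over
# the interval gives a centred uniform distribution with u = a / sqrt(3).
a = 0.5                                   # mV, specification limit
u_type_b = a / math.sqrt(3)

# Type A: repeated offset readings; note the distribution is NOT centred,
# so a correction (the mean) must be applied, as the experiments indicate.
readings = [0.31, 0.29, 0.34, 0.27, 0.33, 0.30, 0.32]   # mV
correction = statistics.mean(readings)
u_type_a = statistics.stdev(readings)

print(f"type B: u = {u_type_b:.3f} mV (no correction)")
print(f"type A: correction = {correction:.3f} mV, u = {u_type_a:.3f} mV")
```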

  16. Study on temperature measurement of gas turbine blade based on analysis of error caused by the reflected radiation and emission angle

    Science.gov (United States)

    Li, Dong; Feng, Chi; Gao, Shan; Chen, Liwei; Daniel, Ketui

    2018-06-01

    Accurate measurement of gas turbine blade temperature is of great significance as far as blade health monitoring is concerned. An important method for measuring this temperature is the use of a radiation pyrometer. In this research, the error of the pyrometer caused by reflected radiation from the surfaces surrounding the target and by the emission angle of the target was analyzed. Important parameters for this analysis were the view factor between interacting surfaces, the spectral directional emissivity, the pyrometer operating wavelength and the surface temperature distribution on the blades and the vanes. The interacting surfaces of the rotor blade and the vane models were discretized using triangular surface elements, from which a contour integral was used to calculate the view factor between the surface elements. Spectral directional emissivities were obtained from an experimental setup of Ni-based alloy samples. A pyrometer operating wavelength of 1.6 μm was chosen. Computational fluid dynamics software was used to simulate the temperature distribution of the rotor blade and the guide vane based on actual gas turbine input parameters. The results of this analysis show that the temperature error introduced by reflected radiation and emission angle ranges from −23 K to 49 K.

  17. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    Naming game simulates the process of naming an object by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study the naming game with communication errors during pair-wise conversations, where errors are represented by error rates drawn from a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed....

  18. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    /backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw..., dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model both in Polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error...

  19. Sensitivity of Distributions of Climate System Properties to Surface Temperature Datasets

    Science.gov (United States)

    Libardoni, A. G.; Forest, C. E.

    2011-12-01

    Predictions of climate change from models depend strongly on the representation of climate system properties emerging from the processes and feedbacks in the models. The quality of any model prediction can be evaluated by determining how well its output reproduces the observed climate system. With this evaluation, the reliability of climate projections derived from the model and provided for policy makers is assessed and quantified. In this study, surface temperature, upper-air temperature, and ocean heat content data are used to constrain the distributions of the parameters that define three climate system properties in the MIT Integrated Global Systems Model: climate sensitivity, the rate of ocean heat uptake into the deep ocean, and net anthropogenic aerosol forcing. In particular, we explore the sensitivity of the distributions to the surface temperature dataset used to estimate the likelihood of the model output given the observed climate records. In total, five different reconstructions of past surface temperatures are used, and the resulting parameter distribution functions differ from each other. Differences in estimates of the climate sensitivity mode and mean are as great as 1 K between the datasets, with an overall range of 1.2 to 5.3 K using the 5-95% confidence intervals. Ocean effective diffusivity is poorly constrained regardless of which dataset is used: all diffusivity distributions are broad, only three show signs of a mode, and when a mode is present it tends to lie at low diffusivity values. Distributions for the net aerosol forcing show similar shapes and cluster into two groups that are shifted by approximately 0.1 watts per square meter. However, the overall spread of forcing values from the 5-95% confidence interval, -0.19 to -0.83 watts per square meter, is small compared to other uncertainties in climate forcings. Transient climate response estimates derived from these distributions range between 0.87 and 2.41 K. Similar to the

  20. INFLUENCE OF RESIDENCE-TIME DISTRIBUTION ON A SURFACE-RENEWAL MODEL OF CONSTANT-PRESSURE CROSS-FLOW MICROFILTRATION

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2015-03-01

    Full Text Available This work examines the influence of the residence-time distribution (RTD) of surface elements on a model of cross-flow microfiltration that has been proposed recently (Hasan et al., 2013). Along with the RTD from the previous work (Case 1), two other RTD functions (Cases 2 and 3) are used to develop theoretical expressions for the permeate-flux decline and cake buildup in the filter as a function of process time. The three different RTDs correspond to three different startup conditions of the filtration process. The analytical expressions for the permeate flux, each of which contains three basic parameters (membrane resistance, specific cake resistance and rate of surface renewal), are fitted to experimental permeate flow rate data from the microfiltration of fermentation broths in laboratory- and pilot-scale units. All three expressions for the permeate flux fit the experimental data fairly well, with average root-mean-square errors of 4.6% for Cases 1 and 2, and 4.2% for Case 3, which points towards the constructive nature of the model - a common feature of theoretical models used in science and engineering.

  1. Two-dimensional potential and charge distributions of positive surface streamer

    International Nuclear Information System (INIS)

    Tanaka, Daiki; Matsuoka, Shigeyasu; Kumada, Akiko; Hidaka, Kunihiko

    2009-01-01

    Information on the potential and field profiles along a surface discharge is required for quantitatively discussing and clarifying the propagation mechanism. A sensing technique with a Pockels crystal has been developed for directly measuring the potential and electric field distributions on a dielectric material. In this paper, the Pockels sensing system consists of a pulse laser and a CCD camera for measuring the instantaneous two-dimensional potential distribution on a 25.4 mm square area with a 50 μm sampling pitch. The temporal resolution is 3.2 ns, which is determined by the pulse width of the laser emission. The transient change in the potential distribution of a positive surface streamer propagating in atmospheric air is measured with this system. The electric field and charge distributions are also calculated from the measured potential profile. The propagating-direction component of the electric field near the tip of the propagating streamer reaches 3 kV mm⁻¹. When the streamer stops, the potential distribution along the streamer forms an almost linear profile with distance from the electrode, and its gradient is about 0.5 kV mm⁻¹.
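
    Given the measured 2-D potential map, the in-plane electric field follows from the negative gradient; a sketch using the paper's 50 μm sampling pitch, with a synthetic potential array standing in for the Pockels measurement:

```python
import numpy as np

pitch = 50e-6                                   # sampling pitch of the Pockels map [m]
x = np.arange(0, 25.4e-3, pitch)
V = np.tile(0.5e6 * x, (len(x), 1))             # synthetic ~0.5 kV/mm linear profile [V]

Ey, Ex = np.gradient(-V, pitch)                 # E = -grad(V); rows = y, cols = x
print(f"|Ex| ~ {abs(Ex).mean()/1e6:.2f} kV/mm") # recovers the imposed gradient
```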

  2. Distribution of ¹³⁷Cs in the Surface Soil of Serpong Nuclear Site

    OpenAIRE

    Lubis, E

    2011-01-01

    The distribution of ¹³⁷Cs in the surface soil layer of Serpong Nuclear Site (SNS) was investigated by field sampling. The objective of the investigation is to find the profile of the ¹³⁷Cs distribution in the surface soil and the Tf value that can be used for estimating the radiation dose from livestock product-man pathways. The results indicate that the ¹³⁷Cs activity in the surface soil of SNS is 0.80 ± 0.29 Bq/kg, much lower than in the Antarctic. The contribution value of ¹³⁷Cs from the operatio...

  3. Spatial distribution of errors associated with multistatic meteor radar

    Science.gov (United States)

    Hocking, W. K.

    2018-06-01

    With the recent increase in the number of small and versatile low-power meteor radars, the opportunity exists to benefit from the simultaneous application of multiple systems spaced by only a few hundred km or less. Transmissions from one site can be recorded at adjacent receiving sites using various degrees of forward scatter, potentially allowing atmospheric conditions in the mesopause regions between stations to be diagnosed. This can allow a better spatial overview of the atmospheric conditions at any time. Such studies have been carried out using a small version of such so-called multistatic meteor radars (MMRs), e.g. Chau et al. (Radio Sci 52:811-828, 2017, https://doi.org/10.1002/2016rs006225 ). These authors were also able to make measurements of vorticity and divergence. However, measurement uncertainties arise which need to be considered in any application of such techniques. Some errors are so severe that they prohibit useful application of the technique in certain locations, particularly for zones at the midpoints between the radar sites. In this paper, software is developed to allow these errors to be determined, and examples of typical errors involved are discussed. The software should be of value to others who wish to optimize their own MMR systems.

  4. Characterization of electrical conductivity of carbon fiber reinforced plastic using surface potential distribution

    Science.gov (United States)

    Kikunaga, Kazuya; Terasaki, Nao

    2018-04-01

    A new method of evaluating electrical conductivity in a structural material such as carbon fiber reinforced plastic (CFRP) using surface potential is proposed. After the CFRP was charged by corona discharge, the surface potential distribution was measured by scanning a vibrating linear array sensor along the object surface with a high spatial resolution over a short duration. A correlation between the weave pattern of the CFRP and the surface potential distribution was observed. This result indicates that it is possible to evaluate the electrical conductivity of a material comprising conducting and insulating regions.

  5. Prediction of residual stress distributions due to surface machining and welding and crack growth simulation under residual stress distribution

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Katsuyama, JInya; Onizawa, Kunio; Hashimoto, Tadafumi; Mikami, Yoshiki; Mochizuki, Masahito

    2011-01-01

    Research highlights: → Residual stress distributions due to welding and machining are evaluated by XRD and FEM. → Residual stress due to machining shows higher tensile stress than welding near the surface. → Crack growth analysis is performed using the calculated residual stress. → Crack growth is affected by machining rather than welding. → Machining is an important factor for crack growth. - Abstract: In nuclear power plants, stress corrosion cracking (SCC) has been observed near the weld zone of the core shroud and primary loop recirculation (PLR) pipes made of low-carbon austenitic stainless steel Type 316L. The joining process of pipes usually includes surface machining and welding. Both processes induce residual stresses, and residual stresses are thus important factors in the occurrence and propagation of SCC. In this study, the finite element method (FEM) was used to estimate the residual stress distributions generated by butt welding and surface machining. A thermoelastic-plastic analysis was performed for the welding simulation, and a thermo-mechanical coupled analysis based on the Johnson-Cook material model was performed for the surface machining simulation. In addition, a crack growth analysis based on the stress intensity factor (SIF) calculation was performed using the calculated residual stress distributions generated by welding and surface machining. The surface machining analysis showed that the tensile residual stress due to surface machining exists only within approximately 0.2 mm of the machined surface, and that the surface residual stress increases with cutting speed. The crack growth analysis showed that the crack depth is affected by both surface machining and welding, and that the crack length is more affected by surface machining than by welding.

  6. Sulcal set optimization for cortical surface registration.

    Science.gov (United States)

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates the registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
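
    Under a joint Gaussian error model, conditioning on the constrained curves leaves the covariance Σ_uu − Σ_uc Σ_cc⁻¹ Σ_cu for the unconstrained ones, and the optimal subset minimizes its total variance. A greedy sketch of this selection on a synthetic covariance (the paper's exhaustive search over all combinations is the exact version):

```python
import numpy as np

def conditional_trace(Sigma: np.ndarray, constrained: list[int]) -> float:
    """Total residual variance of the unconstrained curves given the
    constrained subset, under a joint Gaussian error model."""
    free = [i for i in range(len(Sigma)) if i not in constrained]
    S_uu = Sigma[np.ix_(free, free)]
    S_uc = Sigma[np.ix_(free, constrained)]
    S_cc = Sigma[np.ix_(constrained, constrained)]
    return np.trace(S_uu - S_uc @ np.linalg.solve(S_cc, S_uc.T))

def greedy_subset(Sigma: np.ndarray, n_c: int) -> list[int]:
    """Greedily pick n_c curves that most reduce the conditional variance."""
    chosen: list[int] = []
    for _ in range(n_c):
        best = min((i for i in range(len(Sigma)) if i not in chosen),
                   key=lambda i: conditional_trace(Sigma, chosen + [i]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(5)
A = rng.normal(size=(10, 10))
Sigma = A @ A.T + 10 * np.eye(10)     # synthetic sulcal-error covariance
print(greedy_subset(Sigma, n_c=3))    # indices of 3 most informative curves
```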

  7. Inverse radiation problem of temperature distribution in one-dimensional isotropically scattering participating slab with variable refractive index

    International Nuclear Information System (INIS)

    Namjoo, A.; Sarvari, S.M. Hosseini; Behzadmehr, A.; Mansouri, S.H.

    2009-01-01

    In this paper, an inverse analysis is performed for the estimation of the source term distribution from measured exit radiation intensities at the boundary surfaces in a one-dimensional absorbing, emitting and isotropically scattering medium between two parallel plates with variable refractive index. The variation of the refractive index is assumed to be linear. The radiative transfer equation is solved by the constant quadrature discrete ordinate method. The inverse problem is formulated as an optimization problem for minimizing an objective function which is expressed as the sum of square deviations between measured and estimated exit radiation intensities at the boundary surfaces. The conjugate gradient method is used to solve the inverse problem through an iterative procedure. The effects of various variables on source estimation are investigated, such as the type of source function, errors in the measured data and system parameters, the gradient of the refractive index across the medium, the optical thickness, the single scattering albedo and the boundary emissivities. The results show that in the case of noisy input data, variation of the system parameters may affect the inverse solution, especially at high error values in the measured data. The error in the measured data plays a more important role than the error in the radiative system parameters, except for the refractive index distribution; the accuracy of source estimation is very sensitive to error in the refractive index distribution. Therefore, the refractive index distribution and the measured exit intensities should be measured accurately, with a limited error bound, in order to have an accurate estimation of the source term in a graded index medium.
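
    A schematic of the optimization loop: minimize the sum of squared deviations between measured and estimated exit intensities over the discretized source term, here with SciPy's conjugate-gradient routine and a random linear stand-in for the radiative-transfer forward model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
K = rng.uniform(0, 1, size=(16, 8))        # stand-in forward operator: source -> exit intensities
s_true = np.sin(np.linspace(0, np.pi, 8))  # "unknown" source term distribution
I_meas = K @ s_true + rng.normal(0, 1e-3, 16)

def objective(s):
    """Sum of squared deviations between measured and estimated exit intensities."""
    r = K @ s - I_meas
    return r @ r

res = minimize(objective, x0=np.zeros(8), method="CG",
               jac=lambda s: 2 * K.T @ (K @ s - I_meas))
print(np.round(res.x, 3))                  # recovered source profile
```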

  8. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, henceforth called error, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)

  9. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  10. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-01-01

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  11. Numerical Calculation of Distribution of Induced Charge Density on Planar Confined Surfaces

    International Nuclear Information System (INIS)

    Bolotov, V.; Druzhchenko, R.; Karazin, V.; Lominadze, J.; Kharadze, F.

    2007-01-01

    A method for calculating the distribution of induced charge density on planar surfaces, including fractal structures of the Sierpinski carpet type, is proposed. The calculation scheme is based on the fact that a simply connected conducting surface of arbitrary geometry is an equipotential surface. (author)

  12. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  13. Errors in dual-energy X-ray scanning of the hip because of nonuniform fat distribution.

    Science.gov (United States)

    Tothill, Peter; Weir, Nicholas; Loveland, John

    2014-01-01

    The variable proportion of fat in overlying soft tissue is a potential source of error in dual-energy X-ray absorptiometry (DXA) measurements of bone mineral. The effect on spine scanning has previously been assessed from cadaver studies and from computed tomography (CT) scans of soft tissue distribution. We have now applied the latter technique to DXA hip scanning. The CT scans performed for clinical purposes were used to derive mean adipose tissue thicknesses over bone and background areas for total hip and femoral neck. The former was always lower. More importantly, the fat thickness differences varied among subjects. Errors because of bone marrow fat were deduced from CT measurements of marrow thickness and assumed fat proportions of marrow. The effect of these differences on measured bone mineral density was deduced from phantom measurements of the bone equivalence of fat. Uncertainties of around 0.06 g/cm² are similar to those previously reported for spine scanning and the results from cadaver measurements. They should be considered in assessing the diagnostic accuracy of DXA scanning. Copyright © 2014 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  14. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors.

  15. Single-mode surface plasmon distributed feedback lasers.

    Science.gov (United States)

    Karami Keshmarzi, Elham; Tait, R Niall; Berini, Pierre

    2018-03-29

    Single-mode surface plasmon distributed feedback (DFB) lasers are realized in the near infrared using a two-dimensional non-uniform long-range surface plasmon polariton structure. The surface plasmon mode is excited onto a 20 nm-thick, 1 μm-wide metal stripe (Ag or Au) on a silica substrate, where the stripe is stepped in width periodically, forming a 1st order Bragg grating. Optical gain is provided by optically pumping a 450 nm-thick IR-140 doped PMMA layer as the top cladding, which covers the entire length of the Bragg grating, thus creating a DFB laser. Single-mode lasing peaks of very narrow linewidth were observed for Ag and Au DFBs near 882 nm at room temperature. The narrow linewidths are explained by the low spontaneous emission rate into the surface plasmon lasing mode as well as the high quality factor of the DFB structure. The lasing emission is exclusively TM polarized. Kinks in light-light curves accompanied by spectrum narrowing were observed, from which threshold pump power densities can be clearly identified (0.78 MW cm-2 and 1.04 MW cm-2 for Ag and Au DFB lasers, respectively). The Schawlow-Townes linewidth for our Ag and Au DFB lasers is estimated and very narrow linewidths are predicted for the lasers. The lasers are suitable as inexpensive, recyclable and highly coherent sources of surface plasmons, or for integration with other surface plasmon elements of similar structure.

  16. Distribution of technetium-99 in surface soils

    International Nuclear Information System (INIS)

    Tagami, Keiko; Uchida, Shigeo

    2000-01-01

    Technetium-99 (99Tc) is an important fission product which has been widely distributed in the environment as a result of fallout from nuclear weapons testing. In order to improve our understanding of the behavior of 99Tc in the environment, it is essential that we obtain more reliable information on the levels, distribution and fate of 99Tc in the environment. In this study, the concentrations of global fallout 99Tc in several surface soil samples (0 - 20 cm) collected in Japan were determined by ICP-MS (inductively coupled plasma mass spectrometry). The ranges of 99Tc in rice paddy field, upland field and other soils determined in this study were 0.006 - 0.11, 0.004 - 0.008 and 0.007 - 0.02 Bq kg−1 dry, respectively. 137Cs was used as a comparative indicator for the source of 99Tc, because the fission yields from 235U and 239Pu were about the same (ca. 6%) for the two isotopes, and the behavior and distribution of 137Cs in the environment is reasonably well understood. The 137Cs contents in rice paddy field, upland field and other soils range between 1.7 - 28, 1.4 - 9.2 and … Bq kg−1 dry, respectively. The activity ratios of 99Tc/137Cs in all soil samples were (0.6 - 5.9) x 10−3. Most of the measured ratios were one order of magnitude higher than the theoretical one obtained from fission. However, this ratio in soil presumably depends not only on the characteristics of the radionuclides and the soil, but also on their contents after deposition to the earth's surface. (author)

  17. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
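
    A small simulation (ours, not the paper's correction method) showing why a correction is needed at all: classical measurement error attenuates the apparent AUC of a biomarker. AUC is computed here through its Mann-Whitney rank form.

```python
import numpy as np
from scipy.stats import rankdata

def auc(cases, controls):
    """Mann-Whitney form of AUC: P(case marker > control marker)."""
    ranks = rankdata(np.concatenate([cases, controls]))
    n1, n0 = len(cases), len(controls)
    return (ranks[:n1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(3)
controls = rng.normal(0.0, 1.0, 5000)
cases = rng.normal(1.0, 1.0, 5000)                  # true separation: 1 sd
noisy = lambda x: x + rng.normal(0.0, 1.0, x.size)  # add classical measurement error

print("true AUC    :", round(auc(cases, controls), 3))                # ~0.76
print("observed AUC:", round(auc(noisy(cases), noisy(controls)), 3))  # ~0.69
```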

  18. Emmetropisation and the aetiology of refractive errors

    Science.gov (United States)

    Flitcroft, D I

    2014-01-01

    The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtotic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411

  19. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion. The three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB, respectively. For a GOP size of 2...

  20. Slide-position errors degrade machined optical component quality

    International Nuclear Information System (INIS)

    Arnold, J.B.; Steger, P.J.; Burleson, R.R.

    1975-01-01

    An ultraprecision lathe is being developed at the Oak Ridge Y-12 Plant to fabricate optical components for use in high-energy laser systems. The lathe has the capability to produce virtually any shape mirror which is symmetrical about an axis of revolution. Two basic types of mirrors are fabricated on the lathe, namely: (1) mirrors which are machined using a single slide motion (such as flats and cylinders), and (2) mirrors which are produced by two-coordinated slide motions (such as hyperbolic reflectors; large, true-radius reflectors, and other contoured-surface reflectors). The surface-finish quality of typical mirrors machined by a single axis of motion is better than 13 nm, peak to valley, which is an order of magnitude better than the surface finishes of mirrors produced by two axes of motion. Surface finish refers to short-wavelength-figure errors that are visibly detectable. The primary cause of the inability to produce significantly better surface finishes on contoured mirrors has been determined as positional errors which exist in the slide positioning systems. The correction of these errors must be accomplished before contoured surface finishes comparable to the flat and cylinder can be machined on the lathe

  1. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed, before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
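
    A compact sketch of the shared core of Models 1 and 2, under assumptions of ours (synthetic inflows, an illustrative NQT implementation): transform the series, form forecast errors, fit a first-order autoregressive model, and use it for a one-step forecast interval.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nqt(x):
    """Normal quantile transform: map empirical quantiles onto a standard normal."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

rng = np.random.default_rng(4)
obs = np.exp(rng.normal(3.0, 0.5, 400))        # synthetic inflow series
e = np.zeros(400)                              # autocorrelated forecast noise
for t in range(1, 400):
    e[t] = 0.6 * e[t - 1] + rng.normal(0.0, 0.2)
fcst = obs * np.exp(0.05 + e)                  # biased, autocorrelated forecasts

err = nqt(obs) - nqt(fcst)                     # forecast errors in Gaussian space
phi = np.sum(err[1:] * err[:-1]) / np.sum(err[:-1] ** 2)  # AR(1) coefficient
resid_sd = np.std(err[1:] - phi * err[:-1], ddof=1)
lo, hi = phi * err[-1] - 1.96 * resid_sd, phi * err[-1] + 1.96 * resid_sd
print(f"phi = {phi:.2f}; 95% interval for tomorrow's transformed error: [{lo:.2f}, {hi:.2f}]")
```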

  2. High-resolution pattern of mangrove species distribution is controlled by surface elevation

    Science.gov (United States)

    Leong, Rick C.; Friess, Daniel A.; Crase, Beth; Lee, Wei Kit; Webb, Edward L.

    2018-03-01

    Mangrove vegetation species respond to multiple environmental gradients, and an enhanced understanding of how mangrove species are distributed across these gradients will facilitate conservation and management. Many environmental gradients correlate with tidal inundation; however small-scale inundation patterns resulting from microtopographical changes are difficult to capture empirically. In contrast, surface elevation is often a suitable, measurable and cost-effective proxy for inundation. This study investigated the relationships between species distribution and surface elevation in a mangrove forest in northwest Singapore. Through high-resolution land surveying, we developed a digital elevation model (DEM) and conducted a comprehensive survey of 4380 trees with a stem diameter ≥ 5 cm. A total of 15 species were encountered, and elevation envelopes were generated for 12. Species envelopes were distributed along an elevation continuum, with most species overlapping within the continuum. Spatial autocorrelation (SAC) was present for nine of the 15 species, and when taken into account, species ordering was modified across the elevation continuum. The presence of SAC strongly reinforces the need for research to control for SAC: classical spatial description of mangrove species distribution should be revised to account for ecological factors. This study suggests that (1) surface elevation applies strong controls on species distribution and (2) most mangroves at our study site have similar physiological tolerances.

  3. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often approaching 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
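
    A much smaller Monte Carlo than the paper's 9600 runs, with parameters chosen by us, reproduces the basic effect in linear regression: when exposure and outcome are linked only through a continuous confounder, adjusting for a median-split version of it leaves residual confounding, so the exposure coefficient tests "significant" far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n, reps, hits = 500, 2000, 0
for _ in range(reps):
    c = rng.normal(size=n)                      # continuous confounder
    x = 0.7 * c + rng.normal(size=n)            # exposure, driven by the confounder
    y = 0.7 * c + rng.normal(size=n)            # outcome, driven by the confounder only
    c_cat = (c > np.median(c)).astype(float)    # median split instead of c itself
    X = np.column_stack([np.ones(n), x, c_cat])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    se = np.sqrt(rss[0] / (n - 3) * np.linalg.inv(X.T @ X)[1, 1])
    p = 2 * stats.t.sf(abs(beta[1]) / se, df=n - 3)
    hits += p < 0.05
print("empirical Type-I error:", hits / reps)   # far above the nominal 0.05
```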

  4. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  5. Errors in estimating neutron quality factor using lineal energy distributions measured in tissue-equivalent proportional counters

    International Nuclear Information System (INIS)

    Borak, T.B.; Stinchcomb, T.G.

    1982-01-01

    Neutron dose equivalent is obtained from quality factors which are defined in terms of LET. It is possible to estimate the dose-averaged quality factor, Q̄, directly from distributions in lineal energy, y, that are measured in tissue-equivalent proportional counters. This eliminates a mathematical transformation of the absorbed dose from D(y) to D(L). We evaluate the inherent error in computing Q̄ from D(y) rather than D(L) for neutron spectra below 4 MeV. The effects of neutron energy and simulated tissue diameters within a gas cavity are examined in detail. (author)
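
    The approximation being evaluated can be written in a few lines. The sketch below (our construction) computes a dose-averaged quality factor directly from a dose distribution in lineal energy, d(y), by treating y as if it were LET; the Q(L) relation used is the later ICRP-60 form, purely for illustration, since the 1982 paper predates it.

```python
import numpy as np

def q_of_L(L):
    """ICRP-60 quality factor as a function of LET (keV/um), for illustration."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 10, 1.0,
           np.where(L <= 100, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

y = np.logspace(-1, 3, 500)           # lineal energy grid, keV/um
d_y = y * np.exp(-y / 30.0)           # toy dose distribution d(y), ours
d_y /= np.trapz(d_y, y)               # normalize to unit dose

q_bar = np.trapz(q_of_L(y) * d_y, y)  # Q-bar under the L ~ y substitution
print(f"dose-averaged quality factor ~ {q_bar:.2f}")
```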

  6. Bandwagon effects and error bars in particle physics

    Science.gov (United States)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
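
    The tail comparison is easy to reproduce on any compilation of repeated measurements. In this sketch (ours), differences between successive reported values are scaled by their combined error bars and the empirical tail is compared against normal and exponential references; the toy input is consistent by construction, so it matches the normal column, whereas the historical data in the paper follow the exponential one.

```python
import numpy as np
from scipy import stats

def tail_comparison(values, sigmas):
    """Survival function of |difference| / (combined error bar) vs. references."""
    z = np.abs(np.diff(values)) / np.hypot(sigmas[1:], sigmas[:-1])
    for k in (1, 2, 3, 5):
        print(f"|z| > {k}: observed {np.mean(z > k):.4f}, "
              f"normal {2 * stats.norm.sf(k):.4f}, exponential {np.exp(-k):.4f}")

# toy input: i.i.d. unit-normal "measurements" with honest unit error bars
rng = np.random.default_rng(5)
tail_comparison(rng.standard_normal(10000), np.ones(10000))
```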

  7. Bandwagon effects and error bars in particle physics

    International Nuclear Information System (INIS)

    Jeng, Monwhea

    2007-01-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit 'bandwagon effects': reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations

  8. Surface characterization by energy distribution measurements of secondary electrons and of ion-induced electrons

    International Nuclear Information System (INIS)

    Bauer, H.E.; Seiler, H.

    1988-01-01

    Instruments for surface microanalysis (e.g. scanning electron or ion microprobes, emission electron or ion microscopes) use the current of emitted secondary electrons or of emitted ion-induced electrons for imaging of the analysed surface. These currents, integrating over all energies of the emitted low-energy electrons, are, however, not well suited to surface-analytical purposes. In contrast, the energy distribution of these electrons is extremely surface-sensitive with respect to shape, size, width, most probable energy, and cut-off energy. The energy distribution measurements were performed with a cylindrical mirror analyser and converted into N(E) where necessary. Presented are energy spectra of electrons released by electrons and argon ions from some contaminated and sputter-cleaned metals, the change of the secondary electron energy distribution from oxidized aluminium to clean aluminium, and the change of the cut-off energy due to the work function change of oxidized aluminium and of a silver layer on a platinum sample. The energy distribution of the secondary electrons often shows detailed structures, probably due to low-energy Auger electrons, and is broader than the energy distribution of ion-induced electrons from the same object point. (author)

  9. Comparison Spatial Pattern of Land Surface Temperature with Mono Window Algorithm and Split Window Algorithm: A Case Study in South Tangerang, Indonesia

    Science.gov (United States)

    Bunai, Tasya; Rokhmatuloh; Wibowo, Adi

    2018-05-01

    In this paper, two methods to retrieve the Land Surface Temperature (LST) from the thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to present the spatial distribution of land surface temperature, as well as to determine the more accurate algorithm for retrieving land surface temperature by calculating the root mean square error (RMSE). Finally, we present a comparison of the spatial distribution of land surface temperature from both algorithms; the more accurate algorithm is the split window algorithm, with an RMSE of 7.69 °C.

  10. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P …). In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Calibration of GOES-derived solar radiation data using a distributed network of surface measurements in Florida, USA

    Science.gov (United States)

    Sumner, David M.; Pathak, Chandra S.; Mecikalski, John R.; Paech, Simon J.; Wu, Qinglong; Sangoyomi, Taiye; Babcock, Roger W.; Walton, Raymond

    2008-01-01

    Solar radiation data are critically important for the estimation of evapotranspiration. Analysis of visible-channel data derived from Geostationary Operational Environmental Satellites (GOES) using radiative transfer modeling has been used to produce spatially and temporally distributed datasets of solar radiation. An extensive network of (pyranometer) surface measurements of solar radiation in the State of Florida has allowed refined calibration of a GOES-derived daily integrated radiation data product. This refinement of radiation data allowed for corrections of satellite sensor drift, satellite generational change, and consideration of the highly variable cloudy conditions that are typical of Florida. To aid in calibration of a GOES-derived radiation product, solar radiation data for the period 1995–2004 from 58 field stations that are located throughout the State were compiled. The GOES radiation product was calibrated by way of a three-step process: 1) comparing with ground-based pyranometer measurements on clear reference days, 2) correcting for a bias related to cloud cover, and 3) deriving month-by-month bias correction factors. Pre-calibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m–2 day–1 (13 percent). Calibration reduced errors to 1.7 MJ m–2 day–1 (10 percent) and also removed time- and cloudiness-related biases. The final dataset has been used to produce Statewide evapotranspiration estimates.
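
    A sketch of step 3 of the calibration, with all data and details assumed by us: month-by-month bias correction factors are derived from coincident satellite estimates and ground pyranometer measurements, then applied back to the satellite product.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
dates = pd.date_range("2001-01-01", "2002-12-31", freq="D")
season = 18 + 8 * np.sin(2 * np.pi * dates.dayofyear.to_numpy() / 365)
ground = season + rng.normal(0, 2, len(dates))       # pyranometer, MJ m-2 day-1
goes = 1.08 * ground + rng.normal(0, 2, len(dates))  # satellite product with a bias

df = pd.DataFrame({"goes": goes, "ground": ground}, index=dates)
monthly = df.groupby(df.index.month).mean()
factors = monthly["ground"] / monthly["goes"]        # month-by-month correction factors
df["goes_cal"] = df["goes"] * np.asarray(df.index.month.map(factors))

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("RMSE before:", rmse(df["goes"], df["ground"]))
print("RMSE after :", rmse(df["goes_cal"], df["ground"]))
```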

  12. Methods for reconstruction of the density distribution of nuclear power

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2015-01-01

    Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly less in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values, namely the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes at the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes at the corners of the node an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities at the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, the form functions of power are used. The results show that the methods have good accuracy when compared with reference values and

  13. Surface behaviour of the phase-space distribution for heavy nuclei

    International Nuclear Information System (INIS)

    Durand, M.

    1987-06-01

    A part of the oscillations of the phase space distribution function is shown to be a surface effect. A series expansion for this function is given, which takes partially into account this oscillatory structure

  14. Mean Bias in Seasonal Forecast Model and ENSO Prediction Error.

    Science.gov (United States)

    Kim, Seon Tae; Jeong, Hye-In; Jin, Fei-Fei

    2017-07-20

    This study uses retrospective forecasts made using an APEC Climate Center seasonal forecast model to investigate the cause of errors in predicting the amplitude of El Niño Southern Oscillation (ENSO)-driven sea surface temperature variability. When utilizing Bjerknes coupled stability (BJ) index analysis, enhanced errors in ENSO amplitude with forecast lead times are found to be well represented by those in the growth rate estimated by the BJ index. ENSO amplitude forecast errors are most strongly associated with the errors in both the thermocline slope response and surface wind response to forcing over the tropical Pacific, leading to errors in thermocline feedback. This study concludes that upper ocean temperature bias in the equatorial Pacific, which becomes more intense with increasing lead times, is a possible cause of forecast errors in the thermocline feedback and thus in ENSO amplitude.

  15. Charge-state distribution of MeV He ions scattered from the surface atoms

    International Nuclear Information System (INIS)

    Kimura, Kenji; Ohtsuka, Hisashi; Mannami, Michihiko

    1993-01-01

    The charge-state distribution of 500-keV He ions scattered from a SnTe (001) surface has been investigated using a new technique of high-resolution high-energy ion scattering spectroscopy. The observed charge-state distribution of ions scattered from the topmost atomic layer coincides with that of ions scattered from the subsurface region and does not depend on the incident charge state but depends on the exit angle. The observed exit-angle dependence is explained by a model which includes the charge-exchange process with the valence electrons in the tail of the electron distribution at the surface. (author)

  16. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    Science.gov (United States)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from a sphere, are widely used in various optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages, such as improving image quality, correcting aberration, expanding the field of view, increasing the effective distance and making the optical system compact and lightweight. Especially with the rapid development of space optics, space sensors are required to have higher resolution and larger viewing angles, so aspheric surfaces have become essential components in such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. In order to achieve the final requirement on surface accuracy, the aspheric surface must be modified quickly, and high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2] (geometric ray detection, Hartmann testing, the Ronchi test, the knife-edge method, direct profile testing and interferometry), and all of them have disadvantages [6]. In recent years, the measurement of aspheric surfaces has become one of the important factors restricting progress in aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection error and low repeatability in the measurement of aspheric surfaces during coarse grinding, which seriously affect the convergence efficiency of aspherical mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position and other effective means of error control, calibration and removal, together with probe correction and a measurement-point distribution scheme developed through selection of the measurement mode. Verified by real engineering examples, this method increases the original industrial

  17. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  18. The influence of spherical cavity surface charge distribution on the sequence of partial discharge events

    International Nuclear Information System (INIS)

    Illias, Hazlee A; Chen, George; Lewin, Paul L

    2011-01-01

    In this work, a model representing partial discharge (PD) behaviour of a spherical cavity within a homogeneous dielectric material has been developed to study the influence of cavity surface charge distribution on the electric field distribution in both the cavity and the material itself. The charge accumulation on the cavity surface after a PD event and charge movement along the cavity wall under the influence of electric field magnitude and direction has been found to affect the electric field distribution in the whole cavity and in the material. This in turn affects the likelihood of any subsequent PD activity in the cavity and the whole sequence of PD events. The model parameters influencing cavity surface charge distribution can be readily identified; they are the cavity surface conductivity, the inception field and the extinction field. Comparison of measurement and simulation results has been undertaken to validate the model.

  19. The influence of spherical cavity surface charge distribution on the sequence of partial discharge events

    Energy Technology Data Exchange (ETDEWEB)

    Illias, Hazlee A [Department of Electrical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Chen, George; Lewin, Paul L, E-mail: h.illias@um.edu.my [Tony Davies High Voltage Laboratory, School of Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ (United Kingdom)

    2011-06-22

    In this work, a model representing partial discharge (PD) behaviour of a spherical cavity within a homogeneous dielectric material has been developed to study the influence of cavity surface charge distribution on the electric field distribution in both the cavity and the material itself. The charge accumulation on the cavity surface after a PD event and charge movement along the cavity wall under the influence of electric field magnitude and direction has been found to affect the electric field distribution in the whole cavity and in the material. This in turn affects the likelihood of any subsequent PD activity in the cavity and the whole sequence of PD events. The model parameters influencing cavity surface charge distribution can be readily identified; they are the cavity surface conductivity, the inception field and the extinction field. Comparison of measurement and simulation results has been undertaken to validate the model.

  20. The global distribution and dynamics of surface soil moisture

    Science.gov (United States)

    McColl, Kaighin A.; Alemohammad, Seyed Hamed; Akbar, Ruzbeh; Konings, Alexandra G.; Yueh, Simon; Entekhabi, Dara

    2017-01-01

    Surface soil moisture has a direct impact on food security, human health and ecosystem function. It also plays a key role in the climate system, and in the development and persistence of extreme weather events such as droughts, floods and heatwaves. However, sparse and uneven observations have made it difficult to quantify the global distribution and dynamics of surface soil moisture. Here we introduce a metric of soil moisture memory and use a full year of global observations from NASA's Soil Moisture Active Passive mission to show that surface soil moisture, a storage believed to make up less than 0.001% of the global freshwater budget by volume and equivalent on average to an 8-mm-thin layer of water covering all land surfaces, plays a significant role in the water cycle. Specifically, we find that surface soil moisture retains a median 14% of precipitation falling on land after three days. Furthermore, the retained fraction of the surface soil moisture storage after three days is highest over arid regions, and in regions where drainage to groundwater storage is lowest. We conclude that lower groundwater storage in these regions is due not only to lower precipitation, but also to the complex partitioning of the water cycle by the surface soil moisture storage layer at the land surface.

  1. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins.

    Science.gov (United States)

    Nguyen, Bao Linh; Pettitt, B Montgomery

    2015-04-14

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations.
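
    A minimal construction of a proximal distribution (ours; real work would use actual coordinates and the paper's atom typing): bin water molecules by distance to their nearest protein atom, and normalize by the same histogram computed for uniformly distributed control points, which plays the role of the proximal shell volume.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_hist(ref_xyz, probe_xyz, r_max=8.0, dr=0.2):
    """Histogram of probe points by distance to the closest reference atom."""
    d, _ = cKDTree(ref_xyz).query(probe_xyz)
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(d, bins=edges)
    return edges[:-1] + dr / 2, counts

rng = np.random.default_rng(7)
protein = rng.uniform(0, 30, (500, 3))      # stand-in heavy-atom coordinates
water = rng.uniform(-5, 35, (20000, 3))     # stand-in water oxygens
control = rng.uniform(-5, 35, (200000, 3))  # uniform control for normalization

r, n_w = nearest_hist(protein, water)
_, n_c = nearest_hist(protein, control)
with np.errstate(invalid="ignore", divide="ignore"):
    prdf = (n_w / len(water)) / (n_c / len(control))  # ~1 here: the toy water is uniform
```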

  2. SU-G-JeP3-02: Comparison of Magnitude and Frequency of Patient Positioning Errors in Breast Irradiation Using AlignRT 3D Optical Surface Imaging and Skin Mark Techniques

    International Nuclear Information System (INIS)

    Yao, R; Chisela, W; Dorbu, G

    2016-01-01

    Purpose: To evaluate clinical usefulness of AlignRT (Vision RT Ltd., London, UK) in reducing patient positioning errors in breast irradiation. Methods: 60 patients undergoing whole breast irradiation were selected for this study. Patients were treated to the left or right breast lying on Qfix Access breast board (Qfix, Avondale, PA) in supine position for 28 fractions using tangential fields. 30 patients were aligned using AlignRT by aligning a breast surface region of interest (ROI) to the same area from a reference surface image extracted from planning CT. When the patient’s surface image deviated from the reference by more than 3mm on one or more translational and rotational directions, a new reference was acquired using AlignRT in-room cameras. The other 30 patients were aligned to the skin marks with room lasers. On-Board MV portal images of medial field were taken daily and matched to the DRRs. The magnitude and frequency of positioning errors were determined from measured translational shifts. Kolmogorov-Smirnov test was used to evaluate statistical differences of positional accuracy and precision between AlignRT and non-AlignRT patients. Results: The percentage of port images with no shift required was 46.5% and 27.0% in vertical, 49.8% and 25.8% in longitudinal, 47.6% and 28.5% in lateral for AlignRT and non-AlignRT patients, respectively. The percentage of port images requiring more than 3mm shifts was 18.1% and 35.1% in vertical, 28.6% and 50.8% in longitudinal, 11.3% and 24.2% in lateral for AlignRT and non-AlignRT patients, respectively. Kolmogorov-Smirnov test showed that there were significant differences between the frequency distributions of AlignRT and non-AlignRT in vertical, longitudinal, and lateral shifts. Conclusion: As confirmed by port images, AlignRT-assisted patient positioning can significantly reduce the frequency and magnitude of patient setup errors in breast irradiation compared to the use of lasers and skin marks.

  3. Inference on rare errors using asymptotic expansions and bootstrap calibration

    NARCIS (Netherlands)

    R. Helmers (Roelof)

    1998-01-01

    The number of items in error in an audit population is usually quite small, whereas the error distribution is typically highly skewed to the right. For applications in statistical auditing, where line item sampling is appropriate, a new upper confidence limit for the total error amount

  4. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the phases of the laboratory testing cycle and according to their implications for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, the test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are consistent with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  5. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems

  6. Satellite-based Calibration of Heat Flux at the Ocean Surface

    Science.gov (United States)

    Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.

    2016-02-01

    Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite-calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger

  7. CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals

    Science.gov (United States)

    Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen

    A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, MDEs have, however, mainly been designed to support a single “smart room”, and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to support activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals, and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES, and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent contradiction between maintaining privacy of medical data in a public display environment can be mitigated by the use of CLINICAL SURFACES.

  8. Surface drift prediction in the Adriatic Sea using hyper-ensemble statistics on atmospheric, ocean and wave models: Uncertainties and probability distribution areas

    Science.gov (United States)

    Rixen, M.; Ferreira-Coelho, E.; Signell, R.

    2008-01-01

    Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
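
    A toy version (ours) of the hyper-ensemble idea: over a training window, find the local least-squares combination of member forecasts that best fits the observed drift, then apply those weights to the forecast period; the combination typically beats each member because it absorbs local bias and scaling errors.

```python
import numpy as np

rng = np.random.default_rng(8)
truth = np.cumsum(rng.normal(0, 1, 120))                   # observed drift component
members = np.stack([a * truth + b + rng.normal(0, s, 120)  # imperfect model members
                    for a, b, s in [(0.8, 1.0, 0.5), (1.1, -0.5, 0.8), (0.6, 0.2, 0.3)]])

train, test = slice(0, 96), slice(96, 120)
X = np.vstack([members[:, train], np.ones(96)]).T          # member forecasts + bias term
w, *_ = np.linalg.lstsq(X, truth[train], rcond=None)
hyper = np.vstack([members[:, test], np.ones(24)]).T @ w

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("member RMSEs  :", [round(rmse(m[test], truth[test]), 2) for m in members])
print("hyper-ensemble:", round(rmse(hyper, truth[test]), 2))
```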

  9. Spatial and temporal distributions of surface mass balance between Concordia and Vostok stations, Antarctica, from combined radar and ice core data: first results and detailed error analysis

    Directory of Open Access Journals (Sweden)

    E. Le Meur

    2018-05-01

    Full Text Available Results from ground-penetrating radar (GPR) measurements and shallow ice cores carried out during a scientific traverse between Dome Concordia (DC) and Vostok stations are presented in order to infer both spatial and temporal characteristics of snow accumulation over the East Antarctic Plateau. Spatially continuous accumulation rates along the traverse are computed from the identification of three equally spaced radar reflections spanning about the last 600 years. Accurate dating of these internal reflection horizons (IRHs) is obtained from a depth–age relationship derived from volcanic horizons and bomb testing fallouts on a DC ice core and shows a very good consistency when tested against extra ice cores drilled along the radar profile. Accumulation rates are then inferred by accounting for density profiles down to each IRH. For the latter purpose, a careful error analysis showed that using a single and more accurate density profile along a DC core provided more reliable results than trying to include the potential spatial variability in density from extra (but less accurate) ice cores distributed along the profile. The most striking feature is an accumulation pattern that remains constant through time with persistent gradients such as a marked decrease from 26 mm w.e. yr−1 at DC to 20 mm w.e. yr−1 at the south-west end of the profile over the last 234 years on average (with a similar decrease from 25 to 19 mm w.e. yr−1 over the last 592 years). As for the time dependency, despite an overall consistency with similar measurements carried out along the main East Antarctic divides, interpreting possible trends remains difficult. Indeed, error bars in our measurements are still too large to unambiguously infer an apparent time increase in accumulation rate. For the proposed absolute values, maximum margins of error are in the range 4 mm w.e. yr−1 (last 234 years) to 2 mm w.e. yr−1 (last 592 years), a

  10. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for the estimation of the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed, allowing accurate uncertainty quantification of the time-mean and statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as the time-averaged u′u′. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and the covariance can be found. Applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
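
    The claim that random errors always elevate the measured variance follows from var(u + ε) = var(u) + σ_ε² for independent noise; a quick Monte-Carlo check with assumed numbers:

        import numpy as np

        rng = np.random.default_rng(2)
        true_var = 4.0                 # variance of the 'true' fluctuations
        sigma_noise = 1.5              # std of the instantaneous random error

        u = rng.normal(0.0, np.sqrt(true_var), 100_000)        # true signal
        u_meas = u + rng.normal(0.0, sigma_noise, u.size)      # noisy measurement

        # the measured variance is elevated by the noise variance:
        print(u_meas.var())            # ~ true_var + sigma_noise**2 = 6.25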

  11. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: Increasing numbers of reports on medication errors and the subsequent damages, especially in medical centers, have become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 provided complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages with regard to medication errors. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  12. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x, y, z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface, it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  13. Distribution of 137Cs in the Surface Soil of Serpong Nuclear Site

    International Nuclear Information System (INIS)

    Lubis, E.

    2011-01-01

    The distribution of 137Cs in the surface soil layer of Serpong Nuclear Site (SNS) was investigated by field sampling. The objective of the investigation is to find the profile of 137Cs distribution in the surface soil and the Tf value that can be used for estimating the radiation dose from livestock product-man pathways. The results indicate that the 137Cs activity in surface soil of SNS is 0.80 ± 0.29 Bq/kg, much lower than in the Antarctic. The contribution of 137Cs from the operation of the G.A. Siwabessy Reactor is so far undetectable. The Tf of 137Cs from surface soil to Panisetum Purpureum, Setaria Spha Celata and Imperata Cylindrica grasses was 0.71 ± 0.14, 0.84 ± 0.27 and 0.81 ± 0.11 respectively. The results show that the value of the transfer factor of 137Cs varies between cultivated and uncultivated soil and also with soils with thick humus. (author)

  14. Distribution of 137Cs In the Surface Soil of Serpong Nuclear Site

    Directory of Open Access Journals (Sweden)

    E. Lubis

    2011-08-01

    Full Text Available The distribution of 137Cs in the surface soil layer of Serpong Nuclear Site (SNS) was investigated by field sampling. The objective of the investigation is to find the profile of 137Cs distribution in the surface soil and the Tf value that can be used for estimating the radiation dose from livestock product-man pathways. The results indicate that the 137Cs activity in surface soil of SNS is 0.80 ± 0.29 Bq/kg, much lower than in the Antarctic. The contribution of 137Cs from the operation of the G.A. Siwabessy Reactor is so far undetectable. The Tf of 137Cs from surface soil to Panisetum Purpureum, Setaria Spha Celata and Imperata Cylindrica grasses was 0.71 ± 0.14, 0.84 ± 0.27 and 0.81 ± 0.11 respectively. The results show that the value of the transfer factor of 137Cs varies between cultivated and uncultivated soil and also with soils with thick humus.

  15. Evaluation of errors for mass-spectrometric analysis with surface-ionization type mass-spectrometer (statistical evaluation of mass-discrimination effect)

    International Nuclear Information System (INIS)

    Wada, Y.

    1981-01-01

    The surface-ionization type mass-spectrometer is widely used as an apparatus for quality assurance, accountability and safeguarding of nuclear materials, and for this analysis it has become important to statistically evaluate the analytical error, which consists of a random error and a systematic error. The major factor in this systematic error is the mass-discrimination effect. In this paper, various assays for evaluating the factors of variation in the mass-discrimination effect were studied and the data obtained were statistically evaluated. As a result of these analyses, it was proved that the variation in the mass-discrimination effect is not attributable to the acid concentration of the sample, the sample size on the filament or the voltage supplied to the multiplier, but mainly to the filament temperature during the mass-spectrometric analysis. The mass-discrimination effect values β, usually calculated from measured data of uranium, plutonium or boron isotopic standard samples, did not depend significantly on the U-235, Pu-239 or B-10 isotopic abundance. Furthermore, in the case of U and Pu, the measurement conditions and the mass range of the isotopes were almost the same, and the values β did not differ statistically between U and Pu. On the other hand, the value β for boron was about a third of the value β for U or Pu; however, the coefficients of the correction for the mass-discrimination effect per mass-number difference ΔM were almost the same among U, Pu and B. As for the isotopic analysis error of U, Pu, Nd and B, it was proved that the isotopic abundance of these elements and the isotopic analysis error follow quadratic curves on a log-log scale

  16. At least some errors are randomly generated (Freud was wrong)

    Science.gov (United States)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three-digit numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness-of-fit tests for a Poisson distribution of the number of errors per 50-trial interval and for an exponential distribution of the lengths of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task-driven factors producing exogenous errors. Some errors, at least, are the result of constant-probability generating mechanisms with error rate idiosyncratically determined for each subject.
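
    The two goodness-of-fit tests described can be reproduced with standard routines; a sketch with synthetic stand-in data (the original trial records are not available here):

        import numpy as np
        from scipy import stats

        # synthetic stand-in: trial indices (out of 1000) at which errors occurred
        rng = np.random.default_rng(3)
        error_trials = np.sort(rng.choice(1000, size=40, replace=False))

        # (a) number of errors per 50-trial interval vs. a Poisson distribution
        counts = np.histogram(error_trials, bins=np.arange(0, 1001, 50))[0]
        lam = counts.mean()
        k = np.arange(counts.max() + 1)
        expected = len(counts) * stats.poisson.pmf(k, lam)
        observed = np.bincount(counts, minlength=k.size)
        print(stats.chisquare(observed, expected * observed.sum() / expected.sum()))

        # (b) intervals between successive errors vs. an exponential distribution
        gaps = np.diff(error_trials)
        print(stats.kstest(gaps, 'expon', args=(0.0, gaps.mean())))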

  17. Using semi-variogram analysis for providing spatially distributed information on soil surface condition for land surface modeling

    Science.gov (United States)

    Croft, Holly; Anderson, Karen; Kuhn, Nikolaus J.

    2010-05-01

    The ability to quantitatively and spatially assess soil surface roughness is important in geomorphology and land degradation studies. Soils can experience rapid structural degradation in response to land cover changes, resulting in increased susceptibility to erosion and a loss of Soil Organic Matter (SOM). Changes in soil surface condition can also alter sediment detachment, transport and deposition processes, infiltration rates and surface runoff characteristics. Deriving spatially distributed quantitative information on soil surface condition for inclusion in hydrological and soil erosion models is therefore paramount. However, due to the time and resources involved in using traditional field sampling techniques, there is a lack of spatially distributed information on soil surface condition. Laser techniques can provide data for a rapid three dimensional representation of the soil surface at a fine spatial resolution. This provides the ability to capture changes at the soil surface associated with aggregate breakdown, flow routing, erosion and sediment re-distribution. Semi-variogram analysis of the laser data can be used to represent spatial dependence within the dataset; providing information about the spatial character of soil surface structure. This experiment details the ability of semi-variogram analysis to spatially describe changes in soil surface condition. Soil for three soil types (silt, silt loam and silty clay) was sieved to produce aggregates between 1 mm and 16 mm in size and placed evenly in sample trays (25 x 20 x 2 cm). Soil samples for each soil type were exposed to five different durations of artificial rainfall, to produce progressively structurally degraded soil states. A calibrated laser profiling instrument was used to measure surface roughness over a central 10 x 10 cm plot of each soil state, at 2 mm sample spacing. The laser data were analysed within a geostatistical framework, where semi-variogram analysis quantitatively represented
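
    The semi-variogram underlying this analysis is γ(h) = ½·E[(z(x+h) − z(x))²]; a minimal 1-D sketch for a height transect at the study's 2 mm sample spacing (heights are synthetic):

        import numpy as np

        def semivariogram(z, dx, max_lag):
            """Empirical semi-variogram of a 1-D surface-height transect.

            z: heights sampled every dx (e.g. 2 mm laser spacing); returns
            lags and gamma(h) = 0.5 * mean squared height difference per lag.
            """
            lags, gamma = [], []
            for k in range(1, int(max_lag / dx) + 1):
                d = z[k:] - z[:-k]
                lags.append(k * dx)
                gamma.append(0.5 * np.mean(d ** 2))
            return np.array(lags), np.array(gamma)

        # synthetic 10 cm transect at 2 mm spacing (correlated roughness)
        rng = np.random.default_rng(4)
        z = 0.05 * np.cumsum(rng.standard_normal(50))
        lags, gamma = semivariogram(z, dx=2.0, max_lag=40.0)   # lags in mm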

  18. Nonlinear error-field penetration in low density ohmically heated tokamak plasmas

    International Nuclear Information System (INIS)

    Fitzpatrick, R

    2012-01-01

    A theory is developed to predict the error-field penetration threshold in low density, ohmically heated, tokamak plasmas. The novel feature of the theory is that the response of the plasma in the vicinity of the resonant surface to the applied error-field is calculated from nonlinear drift-MHD (magnetohydrodynamical) magnetic island theory, rather than linear layer theory. Error-field penetration, and subsequent locked mode formation, is triggered once the destabilizing effect of the resonant harmonic of the error-field overcomes the stabilizing effect of the ion polarization current (caused by the propagation of the error-field-induced island chain in the local ion fluid frame). The predicted scaling of the error-field penetration threshold with engineering parameters is (b_r/B_T)_crit ∼ n_e B_T^−1.8 R_0^−0.25, where b_r is the resonant harmonic of the vacuum radial error-field at the resonant surface, B_T the toroidal magnetic field-strength, n_e the electron number density at the resonant surface and R_0 the major radius of the plasma. This scaling—in particular, the linear dependence of the threshold on density—is consistent with experimental observations. When the scaling is used to extrapolate from JET to ITER, the predicted ITER error-field penetration threshold is (b_r/B_T)_crit ∼ 5 × 10^−5, which just lies within the expected capabilities of the ITER error-field correction system. (paper)
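
    The quoted scaling becomes a back-of-envelope extrapolation tool once the proportionality constant is fixed on one machine; the constant and machine parameters below are illustrative assumptions only, not the paper's values:

        def penetration_threshold(n_e, B_T, R_0, C=1.0):
            """(b_r/B_T)_crit ~ C * n_e * B_T**-1.8 * R_0**-0.25."""
            return C * n_e * B_T ** -1.8 * R_0 ** -0.25

        # fix the constant C on a JET-like point (illustrative numbers only) ...
        C = 2.0e-4 / penetration_threshold(n_e=1.0, B_T=3.4, R_0=3.0)
        # ... then extrapolate to ITER-like parameters (same density units)
        print(penetration_threshold(n_e=1.0, B_T=5.3, R_0=6.2, C=C))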

  19. Effects of averaging over motion and the resulting systematic errors in radiation therapy

    International Nuclear Information System (INIS)

    Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

    2006-01-01

    The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)
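
    The note's point, that planning from one or a few samples of the motion distribution leaves an offset which averaging over 30 fractions cannot remove, can be checked with a small Monte-Carlo (the motion spread is an assumed number):

        import numpy as np

        rng = np.random.default_rng(5)
        sigma = 5.0          # mm, spread of the breathing-motion distribution
        n_trials = 10_000

        # plan from a single snapshot of the motion vs. the 30-fraction average
        plan_sample = rng.normal(0.0, sigma, n_trials)               # one planning scan
        treat_mean = rng.normal(0.0, sigma / np.sqrt(30), n_trials)  # delivered mean

        systematic = plan_sample - treat_mean
        print(systematic.std())   # ~ sigma*sqrt(1 + 1/30): planning sample dominates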

  20. Optimal threshold of error decision related to non-uniform phase distribution QAM signals generated from MZM based on OCS

    Science.gov (United States)

    Han, Xifeng; Zhou, Wen

    2018-03-01

    Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary at the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal threshold of error decision for this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter wave signal at 36 GHz is improved from 1 × 10^−3 to 1 × 10^−4 at −4.6 dBm input power into the photodiode.

  1. Detecting surface runoff location in a small catchment using distributed and simple observation method

    Science.gov (United States)

    Dehotin, Judicaël; Breil, Pascal; Braud, Isabelle; de Lavenne, Alban; Lagouy, Mickaël; Sarrazin, Benoît

    2015-06-01

    Surface runoff is one of the hydrological processes involved in floods, pollution transfer, soil erosion and mudslides. Many models allow the simulation and mapping of surface runoff and erosion hazards. Field observations of this hydrological process are not common, although they are crucial to evaluate surface runoff models and to investigate or assess different kinds of hazards linked to this process. In this study, a simple field monitoring network is implemented to assess the relevance of a surface runoff susceptibility mapping method. The network is based on spatially distributed observations (nine different locations in the catchment) of soil water content and rainfall events. These data are analyzed to determine if surface runoff occurs. Two surface runoff mechanisms are considered: surface runoff by saturation of the soil surface horizon and surface runoff by infiltration excess (also called Hortonian runoff). The monitoring strategy includes continuous records of soil surface water content and rainfall with a 5 min time step. Soil infiltration capacity time series are calculated using field soil water content and in situ measurements of soil hydraulic conductivity. Comparison of soil infiltration capacity and rainfall intensity time series allows detecting the occurrence of surface runoff by infiltration excess. Comparison of surface soil water content with saturated water content values allows detecting the occurrence of surface runoff by saturation of the soil surface horizon. Automatic records were complemented with direct field observations of surface runoff in the experimental catchment after each significant rainfall event. The presented observation method allows the identification of fast and short-lived surface runoff processes at a small spatial and temporal resolution in natural conditions. The results also highlight the relationship between surface runoff and factors usually integrated in surface runoff mapping such as topography, rainfall
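
    The two detection rules reduce to element-wise comparisons of the 5 min series; a sketch with hypothetical numbers (thresholds and values are assumptions, not the study's data):

        import numpy as np

        def detect_runoff(rain, infil_capacity, theta, theta_sat, tol=0.02):
            """Flag runoff occurrence in matching 5-min series at one station.

            rain and infil_capacity in mm/h; theta is surface water content.
            """
            hortonian = rain > infil_capacity         # infiltration-excess runoff
            saturation = theta >= (theta_sat - tol)   # saturation-excess runoff
            return hortonian, saturation

        # hypothetical event: an intensity spike exceeding a decaying capacity
        rain = np.array([2.0, 10.0, 40.0, 25.0, 5.0])
        cap = np.array([30.0, 28.0, 26.0, 24.0, 22.0])
        theta = np.array([0.30, 0.33, 0.38, 0.40, 0.41])
        print(detect_runoff(rain, cap, theta, theta_sat=0.42))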

  2. The quantic distribution of mobile carriers in a surface charge coupled device

    International Nuclear Information System (INIS)

    Ionescu, M.

    1977-01-01

    The quantic distribution of the electrons in a surface charge coupled device (CCD), for a MIS structure with a real insulator (finite energy difference between the conduction bands of the insulator and of the semiconductor), is presented. A fundamental limitation of the charge transfer in a surface CCD is obtained. (author)

  3. Notes on power of normality tests of error terms in regression models

    International Nuclear Information System (INIS)

    Střelec, Luboš

    2015-01-01

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models

  4. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  5. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using a denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  6. Semiparametric Bernstein–von Mises for the error standard deviation

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a

  7. Semiparametric Bernstein-von Mises for the error standard deviation

    NARCIS (Netherlands)

    de Jonge, R.; van Zanten, H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein-von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a

  8. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Visualization of the distribution of surface-active block copolymers in PDMS-based coatings

    DEFF Research Database (Denmark)

    Noguer, A. Camós; Latipov, R.; Madsen, F. B.

    2018-01-01

    … results in non-specific protein adsorption and wettability issues. Poly(ethylene glycol)-based surface-active block copolymers and surfactants have been added to PDMS coatings and films to impart biofouling resistance and hydrophilicity to the PDMS surface with successful results. Information regarding the distribution and release of these block copolymers from PDMS-based coatings has been previously reported. However, the distribution and behaviour of these compounds in the bulk of the PDMS coating are not fully understood. A novel fluorescent-labelled triblock PEG-b-PDMS-b-PEG copolymer was synthesized…

  10. Temperature error in digital bathythermograph data

    Digital Repository Service at National Institute of Oceanography (India)

    Pankajakshan, T.; Reddy, G.V.; Ratnakaran, L.; Sarupria, J.S.; RameshBabu, V.

    Mean difference between DBT and Nansen temperature (hereafter referred to as 'error') from surface to 800 m depth and for the two cruises is given in Fig. 3. Error bars are provided...

  11. Effect of processing conditions on residual stress distributions by bead-on-plate welding after surface machining

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Mochizuki, Masahito

    2014-01-01

    Residual stress is an important factor in stress corrosion cracking (SCC), which has been observed near the welded zone in nuclear power plants. Surface residual stress in particular is significant for SCC initiation. In the joining of pipes, butt welding is conducted after surface machining. Residual stress is generated by both processes, and the residual stress distribution due to surface machining is altered by the subsequent butt welding. In a previous paper, the authors reported that the residual stress distribution generated by bead-on-plate welding after surface machining has a local maximum near the weld metal. This local maximum residual stress reaches approximately 900 MPa, which exceeds the stress threshold for SCC initiation. Therefore, for the safety improvement of nuclear power plants, a study of the local maximum residual stress is important. In this study, the effect of surface machining and welding conditions on the residual stress distribution generated by welding after surface machining was investigated. Surface machining using a lathe and bead-on-plate welding with a tungsten inert gas (TIG) arc under various conditions were conducted on plate specimens made of SUS316L. Residual stress distributions were then measured by the X-ray diffraction method (XRD). As a result, the residual stress distributions have a local maximum near the weld metal in all specimens, and the values of the local maxima are almost the same. The location of the local maximum residual stress varies with welding condition. It can be considered that the local maximum residual stress is generated by the same mechanism as the welding residual stress, acting in a surface-machined layer that has a high yield stress. (author)

  12. Surface water assessment on the influence of space distribution on ...

    African Journals Online (AJOL)

    In this work, the influence of space distribution on physico-chemical parameters of refinery effluent discharge has been studied, using treated effluent water discharged from the Port Harcourt Refinery Company (PHRC) into the Ekerekana Creek in Okrika as reference. Samples were collected at surface level from the ...

  13. Albedo distribution in Lutzow-Holm Bay and its neighborhood

    Directory of Open Access Journals (Sweden)

    Kiyotaka Nakagawa

    1997-03-01

    Full Text Available A method has been developed for estimating the filtered narrow band surface albedo with NOAA/AVHRR data, and has been applied to analysis of the surface albedo distribution in Lutzow-Holm Bay and its neighborhood, Antarctica, in 1990. As a result, 16 maps of the surface albedo distribution have been drawn. From a comparison of the albedos inferred from satellite data with those actually observed in Ongul Strait, it is clear that the satellite-inferred, filtered narrow band albedos agree well with the daily means of ground-observed, unfiltered broad band albedo, despite systematic errors of about -4%. It is also clear that there is a characteristic pattern of surface albedo distribution in this area; the open sea has very low albedo of less than 5%, whereas most of the compact pack ice and fast ice has a high albedo of more than 60%. The albedo is lower in the eastern part of Lutzow-Holm Bay than in the western part; especially off the Soya Coast it is less than 40%. The ice sheet of Antarctica has a remarkably high albedo of more than 80%.

  14. Surface characterization protocol for precision aspheric optics

    Science.gov (United States)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In Advanced Optical Instrumentation, aspherics provide an effective performance alternative. The aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for Precision Aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate actual surface error and to aim at the delivery of aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their Form, Figure and Finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during the aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles. This effort aims to identify sources of error and to optimize the metrology process. The sources of error during profilometry may be due to: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear aperture - diameter analysis, computational limitations of the profiler, software issues etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of errors during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides an insight into aspheric surface characterization and helps in optimal aspheric surface production methodology.

  15. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  16. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.

  17. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. In the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
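
    Models 1 and 2 share a common skeleton: transform the inflows, then fit a first-order auto-regressive model to the errors. A minimal sketch of the Box-Cox variant with synthetic inflows (the fixed λ is an assumption, and the paper's conditioning on weather classes is omitted here):

        import numpy as np

        def boxcox(x, lam=0.3):   # lambda is an assumed value, not the paper's
            return (x ** lam - 1.0) / lam if lam != 0 else np.log(x)

        # synthetic past inflows (observed and forecasted), strictly positive
        rng = np.random.default_rng(6)
        obs = 50.0 + 10.0 * rng.random(200)
        fc = obs * (1.0 + 0.1 * rng.standard_normal(200))

        e = boxcox(obs) - boxcox(fc)               # transformed forecast errors
        phi = e[1:] @ e[:-1] / (e[:-1] @ e[:-1])   # AR(1) coefficient
        resid_std = np.std(e[1:] - phi * e[:-1])   # sets forecast-interval width

        e_next = phi * e[-1]   # predicted (median) error of the next forecast
        print(phi, resid_std, e_next)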

  18. Conjugate descent formulation of backpropagation error in feedforward neural networks

    Directory of Open Access Journals (Sweden)

    NK Sharma

    2009-06-01

    Full Text Available The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output pattern. It employs an iterative procedure to minimise the error function for a given set of patterns, by adjusting the weights of the network. The first derivatives of the error with respect to the weights identify the local error surface in the descent direction. Hence the network exhibits a different local error surface for every different pattern presented to it, and weights are iteratively modified in order to minimise the current local error. The determination of an optimal weight vector is possible only when the total minimum error (the mean of the minimum local errors over all patterns in the training set) can be minimised. In this paper, we present a general mathematical formulation for the second derivative of the error function with respect to the weights (which represents a conjugate descent) for arbitrary feedforward neural network topologies, and we use this derivative information to obtain the optimal weight vector. The local error is backpropagated among the units of hidden layers via the second-order derivative of the error with respect to the weights of the hidden and output layers, independently and also in combination. The new total minimum error point may be evaluated with the help of the current total minimum error and the current minimised local error. The weight modification process is performed twice: once with respect to the present local error and once more with respect to the current total or mean error. We present some numerical evidence that our proposed method yields better network weights than those determined via a conventional gradient descent approach.

  19. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Full Text Available Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to the medicines prescribed were compared to the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items, corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluent and dosing were the most frequent sources of errors. The therapeutic classes most involved in errors were antimicrobial agents and drugs that act on the nervous and cardiovascular systems.

  20. Parallax error in the monocular head-mounted eye trackers

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2012-01-01

    … each parameter affects the error. The optimum distribution of the error (magnitude and direction) in the field of view varies for different applications. However, the results can be used for finding the optimum parameters that are needed for designing a head-mounted gaze tracker. It has been shown

  1. Pesticides distribution in surface waters and sediments of lotic and ...

    African Journals Online (AJOL)

    An investigation on the availability and distribution of Lindane (HCHs) and Total organochlorine phosphate (TOCP) in the surface waters and sediments of selected water bodies in Agbede wetlands was carried out from December, 2012 to May, 2014 in order to cover seasonal trends in both matrixes. A Gas Chromatograph ...

  2. Semiparametric Bernstein–von Mises for the error standard deviation

    OpenAIRE

    Jonge, de, R.; Zanten, van, J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  3. Error sources in the real-time NLDAS incident surface solar radiation and an evaluation against field observations and the NARR

    Science.gov (United States)

    Park, G.; Gao, X.; Sorooshian, S.

    2005-12-01

    The atmospheric model is sensitive to the land surface interactions and its coupling with Land surface Models (LSMs) leads to a better ability to forecast weather under extreme climate conditions, such as droughts and floods (Atlas et al. 1993; Beljaars et al. 1996). However, it is still questionable how accurately the surface exchanges can be simulated using LSMs, since terrestrial properties and processes have high variability and heterogeneity. Examinations with long-term and multi-site surface observations including both remotely sensed and ground observations are highly needed to make an objective evaluation on the effectiveness and uncertainty of LSMs at different circumstances. Among several atmospheric forcing required for the offline simulation of LSMs, incident surface solar radiation is one of the most significant components, since it plays a major role in total incoming energy into the land surface. The North American Land Data Assimilation System (NLDAS) and North American Regional Reanalysis (NARR) are two important data sources providing high-resolution surface solar radiation data for the use of research communities. In this study, these data are evaluated against field observations (AmeriFlux) to identify their advantages, deficiencies and sources of errors. The NLDAS incident solar radiation shows a pretty good agreement in monthly mean prior to the summer of 2001, while it overestimates after the summer of 2001 and its bias is pretty close to the EDAS. Two main error sources are identified: 1) GOES solar radiation was not used in the NLDAS for several months in 2001 and 2003, and 2) GOES incident solar radiation when available, was positively biased in year 2002. The known snow detection problem is sometimes identified in the NLDAS, since it is inherited from GOES incident solar radiation. The NARR consistently overestimates incident surface solar radiation, which might produce erroneous outputs if used in the LSMs. Further attention is given to

  4. Distribution of local magnetic field of vortex lattice near anisotropic superconductor surface in inclined external fields

    International Nuclear Information System (INIS)

    Efremova, S.A.; Tsarevskij, S.L.

    1997-01-01

    The magnetic field distribution in a unit cell of the Abrikosov vortex lattice near the surface of monoaxial anisotropic type-II superconductors in an inclined external magnetic field has been found in the framework of the London model for the cases when the symmetry axis is perpendicular or parallel to the superconductor surface. The distribution of the local magnetic field as a function of the distance from the superconductor surface and of the external field inclination angle has been obtained. Using the high-Tc superconductor Y-Ba-Cu-O as an example, it has been shown that studying the local magnetic field distribution function, as it depends on the inclination of the external magnetic field towards the superconductor symmetry axis and towards the superconductor surface, can provide important data on the anisotropic properties of the superconductor

  5. Imaging performance of annular apertures. VI - Limitations by optical surface deviations

    Science.gov (United States)

    Tschunko, Hubert F. A.

    1987-01-01

    The performance of optical systems is limited by imperfect optical surfaces that degrade the images below the level set by wave-theoretical limits. The central irradiance functions are derived for slit and circular apertures with five distributions of wavefront errors and for a range of maximal wavefront deviations. For practical frequency-of-occurrence distributions of wavefront deviations, the point spread and the image energy integral functions are determined. Practical performances of optical systems are derived and performance limits discussed.

  6. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  7. Approaching Error-Free Customer Satisfaction through Process Change and Feedback Systems

    Science.gov (United States)

    Berglund, Kristin M.; Ludwig, Timothy D.

    2009-01-01

    Employee-based errors result in quality defects that can often impact customer satisfaction. This study examined the effects of a process change and feedback system intervention on error rates of 3 teams of retail furniture distribution warehouse workers. Archival records of error codes were analyzed and aggregated as the measure of quality. The…

  8. Effect of attenuation correction on surface amplitude distribution of wind waves

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.

    Some selected wave profiles recorded using a ship borne wave recorder are analysed to study the effect of attenuation correction on the distribution of the surface amplitudes. A new spectral width parameter is defined to account for wide band...

  9. Calibration of a distributed hydrology and land surface model using energy flux measurements

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.

    2016-01-01

    In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measure...

  10. Fast, efficient error reconciliation for quantum cryptography

    International Nuclear Information System (INIS)

    Buttler, W.T.; Lamoreaux, S.K.; Torgerson, J.R.; Nickel, G.H.; Donahue, C.H.; Peterson, C.G.

    2003-01-01

    We describe an error-reconciliation protocol, which we call Winnow, based on the exchange of parity and Hamming's 'syndrome' for N-bit subunits of a large dataset. The Winnow protocol was developed in the context of quantum-key distribution and offers significant advantages and net higher efficiency compared to other widely used protocols within the quantum cryptography community. A detailed mathematical analysis of the Winnow protocol is presented in the context of practical implementations of quantum-key distribution; in particular, the information overhead required for secure implementation is one of the most important criteria in the evaluation of a particular error-reconciliation protocol. The increase in efficiency for the Winnow protocol is largely due to the reduction in authenticated public communication required for its implementation
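
    The parity-plus-syndrome exchange at the heart of Winnow can be sketched for 7-bit blocks with a Hamming code; this toy version corrects one flipped bit per block and omits the privacy-maintenance bit discarding of the real protocol:

        import numpy as np

        # parity-check matrix: column j (j = 1..7) is the binary representation of j
        H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

        def winnow_step(alice, bob):
            """One Winnow pass over 7-bit blocks (privacy bit-discarding omitted).

            Blocks whose parities differ exchange Hamming syndromes; a single
            flipped bit per block is located and corrected in Bob's string.
            """
            a = alice.reshape(-1, 7)
            b = bob.reshape(-1, 7).copy()
            for i in range(a.shape[0]):
                if a[i].sum() % 2 != b[i].sum() % 2:      # parities disagree
                    s = (H @ ((a[i] + b[i]) % 2)) % 2     # syndrome of the error
                    pos = s[0] + 2 * s[1] + 4 * s[2]      # error position 1..7
                    if pos:
                        b[i, pos - 1] ^= 1                # flip the bad bit
            return b.ravel()

        rng = np.random.default_rng(7)
        alice = rng.integers(0, 2, 28)
        bob = alice.copy()
        bob[[3, 10]] ^= 1                      # two single-bit channel errors
        print(np.array_equal(alice, winnow_step(alice, bob)))   # True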

  11. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained with these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
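
    One natural form for such a model is a Poisson rate per file radiated, fitted by maximum likelihood; the sketch below uses made-up weekly data and is only one candidate model form, not necessarily the paper's:

        import numpy as np
        from scipy.optimize import minimize

        # made-up per-week data: files radiated, workload score, observed errors
        files = np.array([12.0, 30.0, 25.0, 40.0, 18.0, 33.0])
        workload = np.array([0.2, 0.8, 0.5, 0.9, 0.3, 0.7])
        errors = np.array([0.0, 3.0, 1.0, 4.0, 0.0, 2.0])

        def neg_loglik(beta):
            # Poisson rate per file: lambda_i = files_i * exp(b0 + b1*workload_i)
            lam = files * np.exp(beta[0] + beta[1] * workload)
            return np.sum(lam - errors * np.log(lam))   # up to a constant

        fit = minimize(neg_loglik, x0=np.zeros(2), method='Nelder-Mead')
        print(fit.x)   # exp(fit.x[0]) is the baseline error rate per file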

  12. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noise in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.

  13. Accuracy Enhancement with Processing Error Prediction and Compensation of a CNC Flame Cutting Machine Used in Spatial Surface Operating Conditions

    Directory of Open Access Journals (Sweden)

    Shenghai Hu

    2017-04-01

    This study deals with the precision performance of the CNC flame-cutting machine used in spatial surface operating conditions and presents an accuracy enhancement method based on processing error modeling prediction and real-time compensation. Machining coordinate systems and transformation matrix models were established for the CNC flame processing system considering both geometric errors and thermal deformation effects. Meanwhile, prediction and compensation models were constructed related to the actual cutting situation. Focusing on the thermal deformation elements, finite element analysis was used to measure the testing data of thermal errors, the grey system theory was applied to optimize the key thermal points, and related thermal dynamics models were carried out to achieve high-precision prediction values. Comparison experiments between the proposed method and the teaching method were conducted on the processing system after performing calibration. The results showed that the proposed method is valid and the cutting quality could be improved by more than 30% relative to the teaching method. Furthermore, the proposed method can be used under any working condition by making a few adjustments to the prediction and compensation models.

  14. THE FEATURES OF LASER EMISSION ENERGY DISTRIBUTION AT MATHEMATIC MODELING OF WORKING PROCESS

    Directory of Open Access Journals (Sweden)

    A. M. Avsiyevich

    2013-01-01

    The spatial distribution of laser emission energy from different continuous-wave laser systems depends on many factors, above all on the system design. To describe the intensity distribution of multimode laser emission more accurately, an experimental-theoretical model is proposed that represents the measured emission distribution, to a given accuracy, as a superposition of basis functions. This model yields an approximation error of only 2.2%, compared with 24.6% and 61% for uniform and Gaussian approximations, respectively. Using the proposed model makes it possible to account more accurately for the interaction between the laser emission and the working surface, and increases the accuracy of temperature-field calculations in mathematical modeling of laser treatment processes. A method for experimentally studying the laser emission energy distribution of a given source is described, together with the mathematical apparatus for calculating the emission intensity as a function of radial distance within the surface heating zone.
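
    A numerical toy version of that comparison, assuming Gaussian basis functions and a fabricated two-lobe profile in place of the measured multimode distribution:

    ```python
    import numpy as np

    # Fit a "measured" radial intensity profile as a superposition of Gaussian
    # basis functions and compare the relative fit error with a single Gaussian.
    r = np.linspace(0.0, 3.0, 200)
    measured = 0.6 * np.exp(-(r / 0.5) ** 2) + 0.4 * np.exp(-((r - 1.2) / 0.4) ** 2)

    def rel_error(basis):
        coef, *_ = np.linalg.lstsq(basis, measured, rcond=None)
        return np.linalg.norm(measured - basis @ coef) / np.linalg.norm(measured)

    centers = np.linspace(0.0, 2.4, 7)                  # basis-function centers
    multi = np.exp(-((r[:, None] - centers[None, :]) / 0.45) ** 2)
    single = np.exp(-(r / 1.0) ** 2)[:, None]

    print(f"superposition: {rel_error(multi):.1%}  single Gaussian: {rel_error(single):.1%}")
    ```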

  15. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Optical information media printed on paper use printing materials that absorb visible light. An ordinary 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, a normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, each absorbing a distinct wavelength, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.

  16. Verification of land-atmosphere coupling in forecast models, reanalyses and land surface models using flux site observations.

    Science.gov (United States)

    Dirmeyer, Paul A; Chen, Liang; Wu, Jiexia; Shin, Chul-Su; Huang, Bohua; Cash, Benjamin A; Bosilovich, Michael G; Mahanama, Sarith; Koster, Randal D; Santanello, Joseph A; Ek, Michael B; Balsamo, Gianpaolo; Dutra, Emanuel; Lawrence, D M

    2018-02-01

    We confront four model systems in three configurations (LSM, LSM+GCM, and reanalysis) with global flux tower observations to validate states, surface fluxes, and coupling indices between land and atmosphere. Models clearly under-represent the feedback of surface fluxes on boundary layer properties (the atmospheric leg of land-atmosphere coupling), and may over-represent the connection between soil moisture and surface fluxes (the terrestrial leg). Models generally under-represent spatial and temporal variability relative to observations, which is at least partially an artifact of the differences in spatial scale between model grid boxes and flux tower footprints. All models bias high in near-surface humidity and downward shortwave radiation, struggle to represent precipitation accurately, and show serious problems in reproducing surface albedos. These errors create challenges for models to partition surface energy properly and errors are traceable through the surface energy and water cycles. The spatial distribution of the amplitude and phase of annual cycles (first harmonic) are generally well reproduced, but the biases in means tend to reflect in these amplitudes. Interannual variability is also a challenge for models to reproduce. Our analysis illuminates targets for coupled land-atmosphere model development, as well as the value of long-term globally-distributed observational monitoring.

  17. Distribution and Characteristics of Boulder Halos at High Latitudes on Mars: Ground Ice and Surface Processes Drive Surface Reworking

    Science.gov (United States)

    Levy, J. S.; Fassett, C. I.; Rader, L. X.; King, I. R.; Chaffey, P. M.; Wagoner, C. M.; Hanlon, A. E.; Watters, J. L.; Kreslavsky, M. A.; Holt, J. W.; Russell, A. T.; Dyar, M. D.

    2018-02-01

    Boulder halos are circular arrangements of clasts present at Martian middle to high latitudes. Boulder halos are thought to result from impacts into a boulder-poor surficial unit that is rich in ground ice and/or sediments and that is underlain by a competent substrate. In this model, boulders are excavated by impacts and remain at the surface as the crater degrades. To determine the distribution of boulder halos and to evaluate mechanisms for their formation, we searched for boulder halos over 4,188 High Resolution Imaging Science Experiment images located between 50-80° north and 50-80° south latitude. We evaluate geological and climatological parameters at halo sites. Boulder halos are about three times more common in the northern hemisphere than in the southern hemisphere (19% versus 6% of images) and have size-frequency distributions suggesting recent Amazonian formation (tens to hundreds of millions of years). In the north, boulder halo sites are characterized by abundant shallow subsurface ice and high thermal inertia. Spatial patterns of halo distribution indicate that excavation of boulders from beneath nonboulder-bearing substrates is necessary for the formation of boulder halos, but that alone is not sufficient. Rather, surface processes either promote boulder halo preservation in the north or destroy boulder halos in the south. Notably, boulder halos predate the most recent period of near-surface ice emplacement on Mars and persist at the surface atop mobile regolith. The lifetime of observed boulders at the Martian surface is greater than the lifetime of the craters that excavated them. Finally, larger minimum boulder halo sizes in the north indicate thicker icy soil layers on average throughout climate variations driven by spin/orbit changes during the last tens to hundreds of millions of years.

  18. Analysis of the “naming game” with learning errors in communications

    OpenAIRE

    Yang Lou; Guanrong Chen

    2015-01-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is ...
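
    A minimal sketch of one interaction round under this kind of model; the population size, the error rate, and the way a mislearned name is corrupted are assumptions, since the abstract does not specify them.

    ```python
    import itertools
    import random

    # One speaker-hearer round of a naming game with learning-error rate e:
    # with probability e the hearer stores a corrupted variant of the name.
    def interact(speaker, hearer, e, counter):
        if not speaker:
            speaker.append(f"name{next(counter)}")   # speaker invents a new name
        name = random.choice(speaker)
        if random.random() < e:
            name += "'"                              # learning error: corrupted copy
        if name in hearer:
            speaker[:] = [name]                      # success: both keep only it
            hearer[:] = [name]
        else:
            hearer.append(name)                      # failure: hearer memorizes it

    random.seed(1)
    counter = itertools.count()
    agents = [[] for _ in range(50)]                 # fully connected population
    for _ in range(20000):
        s, h = random.sample(range(len(agents)), 2)
        interact(agents[s], agents[h], e=0.01, counter=counter)

    print("distinct names remaining:", len({n for a in agents for n in a}))
    ```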

  19. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  20. The Impact of Model and Rainfall Forcing Errors on Characterizing Soil Moisture Uncertainty in Land Surface Modeling

    Science.gov (United States)

    Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.

    2013-01-01

    The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
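
    The rank-histogram diagnostic invoked above is compact enough to state in code; a sketch with synthetic ensembles, where the sizes and spreads are invented:

    ```python
    import numpy as np

    # Rank histogram: for each case, count how many ensemble members fall
    # below the verifying value; a U-shaped count vector indicates too little
    # ensemble spread, while a flat one indicates statistical consistency.
    def rank_histogram(ensemble, obs):
        ranks = (ensemble < obs[None, :]).sum(axis=0)        # (n_cases,)
        return np.bincount(ranks, minlength=ensemble.shape[0] + 1)

    rng = np.random.default_rng(0)
    truth = rng.normal(size=2000)
    spread_ok = rng.normal(size=(10, 2000))                  # consistent ensemble
    too_tight = rng.normal(scale=0.5, size=(10, 2000))       # under-dispersive
    print("flat-ish:", rank_histogram(spread_ok, truth))
    print("U-shape :", rank_histogram(too_tight, truth))
    ```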

  1. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

    Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical...... framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics is most developed. The theory is summarized and particulate surface...... contamination, sampled from small areas on a table, have been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...
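
    For orientation, a minimal ordinary-kriging estimate looks like the following; the exponential covariance model, its parameters, and the sample values are all invented for illustration.

    ```python
    import numpy as np

    # Ordinary kriging: estimate contamination at an unmeasured point as a
    # weighted sum of samples, with weights from the spatial covariance model.
    def exp_cov(h, sill=1.0, corr_len=10.0):
        return sill * np.exp(-h / corr_len)

    pts = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [7.0, 7.0]])  # sampled x,y
    vals = np.array([3.2, 2.1, 2.8, 1.4])                             # measurements
    target = np.array([3.0, 3.0])                                     # estimate here

    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))        # kriging system with Lagrange multiplier
    A[:n, :n] = exp_cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(pts - target, axis=1))

    w = np.linalg.solve(A, b)          # weights (last entry: the multiplier)
    print("kriged estimate:", round(float(w[:n] @ vals), 3))
    ```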

  2. Occurrence, distribution and risks of antibiotics in urban surface water in Beijing, China.

    Science.gov (United States)

    Li, Wenhui; Gao, Lihong; Shi, Yali; Liu, Jiemin; Cai, Yaqi

    2015-09-01

    The occurrence and distribution of 22 antibiotics, including eight fluoroquinolones, nine sulfonamides and five macrolides, were investigated in the urban surface waters in Beijing, China. A total of 360 surface water samples were collected from the main rivers and lakes in the urban area of Beijing monthly from July 2013 to June 2014 (except the frozen period). Laboratory analyses revealed that antibiotics were widely used and extensively distributed in the surface water of Beijing, and sulfonamides and fluoroquinolones were the predominant antibiotics with the average concentrations of 136 and 132 ng L⁻¹, respectively. A significant difference of antibiotic concentrations from different sampling sites was observed, and the southern and eastern regions of Beijing showed higher concentrations of antibiotics. Seasonal variation of the antibiotics in the urban surface water was also studied, and the highest level of antibiotics was found in November, which may be due to the low temperature and flow of the rivers during the period of cold weather. Risk assessment showed that several antibiotics might pose high ecological risks to aquatic organisms (algae and plants) in surface water, and more attention should be paid to the risk of antibiotics to the aquatic environment in Beijing.

  3. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    In order to improve the manufacturing precision of the shoe last machine, a new error-computing model is put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution over the whole workspace was discussed, and the maximum-error position of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position, and an accuracy synthesis was conducted using the Monte Carlo method. Taking the error-sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability that the maximal volume error is less than 0.05 mm improved from 0.6592 for the old scheme to 0.7021 for the new scheme; the precision of the system was improved markedly. The model can be used for error analysis and accuracy synthesis of complex multi-branch kinematic chain systems, and to improve the manufacturing precision of such systems.

  4. Effects of errors on the dynamic aperture of the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.; Wirsbinski, J.

    1991-01-01

    The individual tolerance limits for alignment errors and magnet fabrication errors in the 7-GeV Advanced Photon Source storage ring are determined by computer-simulated tracking. Limits are established for dipole strength and roll errors, quadrupole strength and alignment errors, sextupole strength and alignment errors, as well as higher order multipole strengths in dipole and quadrupole magnets. The effects of girder misalignments on the dynamic aperture are also studied. Computer simulations are obtained with the tracking program RACETRACK, with errors introduced from a user-defined Gaussian distribution, truncated at ±5 standard deviation units. For each error, the average and rms spread of the stable amplitudes are determined for ten distinct machines, defined as ten different seeds to the random distribution, and for five distinct initial directions of the tracking particle. 4 refs., 4 figs., 1 tab
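
    The error-assignment scheme described, Gaussian draws truncated at ±5σ with ten independent seeds, is straightforward to reproduce; the σ value below is an assumption for illustration.

    ```python
    import numpy as np

    # Draw magnet errors from a Gaussian truncated at +/- cut*sigma by redrawing
    # any sample that falls outside the cut (rare at 5 sigma, but exact).
    def truncated_gauss(rng, sigma, size, cut=5.0):
        out = rng.normal(0.0, sigma, size)
        bad = np.abs(out) > cut * sigma
        while bad.any():
            out[bad] = rng.normal(0.0, sigma, bad.sum())
            bad = np.abs(out) > cut * sigma
        return out

    for seed in range(10):                       # ten distinct "machines"
        rng = np.random.default_rng(seed)
        quad_roll = truncated_gauss(rng, sigma=2e-4, size=400)   # rad (assumed)
        print(f"machine {seed}: rms roll = {quad_roll.std():.2e} rad")
    ```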

  5. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  6. Distribution of radionuclides in surface seawater obtained by an aerial radiological survey

    International Nuclear Information System (INIS)

    Inomata, Yayoi; Aoyama, Michio; Hirose, Katsumi; Sanada, Yukihisa; Torii, Tatsuo; Tsubono, Takaki; Tsumune, Daisuke; Yamada, Masatoshi

    2014-01-01

    We investigated the distribution in seawater of anthropogenic radionuclides from the Fukushima Daiichi Nuclear Power Plant (FNPP1), as a preliminary attempt, using a rapid aerial radiological survey performed by the U.S. Department of Energy National Nuclear Security Administration on 18 April 2011. We found strong correlations between in-situ activities of ¹³¹I, ¹³⁴Cs, and ¹³⁷Cs measured in surface seawater samples and gamma-ray peak count rates determined by the aerial survey (correlation coefficients were 0.89 for ¹³¹I, 0.96 for ¹³⁴Cs, and 0.92 for ¹³⁷Cs). The offshore area of high radionuclide activity extended south and southeast from the FNPP1. The maximum activities of ¹³¹I, ¹³⁴Cs, and ¹³⁷Cs were 329, 650, and 599 Bq L⁻¹, respectively. The ¹³¹I/¹³⁷Cs ratio in surface water of the high-activity area ranged from 0.6 to 0.7. Considering the radioactive decay of ¹³¹I (half-life 8.02 d), we determined that the radionuclides in this area were directly released from FNPP1 to the ocean. We confirm that aerial radiological surveys can be effective for investigating the surface distribution of anthropogenic radionuclides in seawater. Our model reproduced the distribution pattern of radionuclides derived from the FNPP1, although the activities simulated by a regional ocean model were underestimated. (author)

  7. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(d^n−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
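
    The scaling claim is easy to see in miniature for a single qubit: a systematic rotation adds amplitudes while the Pauli-twirled version adds probabilities, so the coherent error overtakes the Pauli prediction as cycles accumulate. A sketch with an illustrative angle:

    ```python
    import numpy as np

    # Single-qubit illustration: a systematic rotation by a small angle eps per
    # cycle gives error ~ (n*eps)^2 after n cycles (amplitudes add), whereas the
    # Pauli-twirled channel gives error ~ n*eps^2 (probabilities add), so the
    # Pauli model increasingly underestimates failure.
    eps = 0.01                                  # illustrative rotation angle (rad)
    p = np.sin(eps) ** 2                        # per-cycle flip probability

    for n in (10, 50, 100):
        coherent = np.sin(n * eps) ** 2                 # amplitudes add
        pauli = 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)      # probabilities add
        print(f"n={n:4d}  coherent={coherent:.5f}  pauli={pauli:.5f}")
    ```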

  8. Surface-deposition and distribution of the radon-decay products indoors

    International Nuclear Information System (INIS)

    Espinosa, G.; Tommasino, L.

    2015-01-01

    The exposure to radon-decay products is of great concern both in dwellings and workplaces. The model used to estimate the lung dose refers to the deposition mechanisms and particle sizes. Unfortunately, most of the available dose data are based on measurements of the radon concentration and the concentration of radon decay products. These combined measurements are widely used in spite of the fact that accurate dose assessments require information on the particle deposition mechanisms and the spatial distribution of radon decay products indoors. Most of the airborne particles and/or radon decay products are deposited onto indoor surfaces, and this deposition makes the radon decay products unavailable for inhalation. These deposition processes, if properly known, could be successfully exploited to reduce the exposure to radon decay products. In spite of the importance of the surface deposition of the radon decay products, both for the correct evaluation of the dose and for reducing the exposure, little or no effort has been made to investigate these deposition processes. Recently, two parallel investigations have been carried out in Rome and at the Universidad Nacional Autónoma de México (UNAM) in Mexico City, respectively, which address the issue of surface-deposited radon decay products. Even though these investigations have been carried out independently, they complement one another. It is with these considerations in mind that it was decided to report both investigations in the same paper. - Highlights: • Distribution of indoor radon and thoron decay products. • Complexity of indoor radon measurements. • Short- and long-term measurements of surface-deposited radon and thoron decay products. • Room with controlled microclimate conditions. • Nuclear track detectors

  9. Homogeneous near surface activity distribution by double energy activation for TLA

    International Nuclear Information System (INIS)

    Takacs, S.; Ditroi, F.; Tarkanyi, F.

    2007-01-01

    Thin layer activation (TLA) is a versatile tool for activating thin surface layers in order to study, in real time, the surface loss by wear, corrosion or erosion of the activated parts, without disassembling or stopping running mechanical structures or equipment. The research problem is the determination of the irradiation parameters that produce an optimal point-like or large-area activity-depth distribution in the sample. Different activity-depth profiles can be produced depending on the type of the investigated material and the nuclear reaction used. An activity that is independent of depth, down to a certain depth, is desirable when the material removed from the surface by wear, corrosion or erosion can be collected completely. By applying dual-energy irradiation, the thickness of this quasi-constant activity layer can be increased, or the deviation of the activity distribution from a constant value can be minimized. In the main, parts made of metals and alloys are suitable for direct activation, but by using secondary particle implantation the wear of other materials can also be studied in a surface range a few micrometers thick. In most practical cases activation of a point-like spot (several mm²) is enough to monitor the wear, corrosion or erosion, but for special problems relatively large surface areas of complicated spatial geometry need to be activated uniformly. Two ways are available for fulfilling this task: (1) production of a large-area beam spot, or scanning the beam over the surface in question, from the accelerator side; or (2) a programmed 3D movement of the sample from the target side. Taking into account the large variability of tasks occurring in practice, the latter method was chosen as the routine solution in our cyclotron laboratory.

  10. Kalman filtering and smoothing for linear wave equations with model error

    International Nuclear Information System (INIS)

    Lee, Wonjung; McDougall, D; Stuart, A M

    2011-01-01

    Filtering is a widely used methodology for the incorporation of observed data into time-evolving systems. It provides an online approach to state estimation inverse problems when data are acquired sequentially. The Kalman filter plays a central role in many applications because it is exact for linear systems subject to Gaussian noise, and because it forms the basis for many approximate filters which are used in high-dimensional systems. The aim of this paper is to study the effect of model error on the Kalman filter, in the context of linear wave propagation problems. A consistency result is proved when no model error is present, showing recovery of the true signal in the large data limit. This result, however, is not robust: it is also proved that arbitrarily small model error can lead to inconsistent recovery of the signal in the large data limit. If the model error is in the form of a constant shift to the velocity, the filtering and smoothing distributions only recover a partial Fourier expansion, a phenomenon related to aliasing. On the other hand, for a class of wave velocity model errors which are time dependent, it is possible to recover the filtering distribution exactly, but not the smoothing distribution. Numerical results are presented which corroborate the theory, and also propose a computational approach which overcomes the inconsistency in the presence of model error, by relaxing the model
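
    The flavor of the model-error effect can be seen in a scalar Kalman filter whose assumed velocity is deliberately shifted; this is a caricature of the paper's wave-equation setting, with invented numbers standing in for the constant velocity shift.

    ```python
    import numpy as np

    # Truth advects with velocity c_true; the filter's model assumes c_model.
    rng = np.random.default_rng(0)
    c_true, c_model, dt = 1.0, 1.05, 0.1
    obs_var, proc_var = 0.25, 0.01

    x_true, x_est, p_est = 0.0, 0.0, 1.0
    errs = []
    for _ in range(500):
        x_true += c_true * dt + rng.normal(0.0, proc_var ** 0.5)
        y = x_true + rng.normal(0.0, obs_var ** 0.5)     # noisy observation

        x_pred = x_est + c_model * dt                    # predict (wrong model)
        p_pred = p_est + proc_var

        k = p_pred / (p_pred + obs_var)                  # Kalman gain
        x_est = x_pred + k * (y - x_pred)                # update
        p_est = (1.0 - k) * p_pred
        errs.append(x_est - x_true)

    print(f"mean estimation error (bias from model error): {np.mean(errs):+.3f}")
    ```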

  11. Performance-based gear metrology kinematic, transmission, error computation and diagnosis

    CERN Document Server

    Mark, William D

    2012-01-01

    A mathematically rigorous explanation of how manufacturing deviations and damage on the working surfaces of gear teeth cause transmission-error contributions to vibration excitations. Some gear-tooth working-surface manufacturing deviations of significant amplitude cause negligible vibration excitation and noise, yet others of minuscule amplitude are a source of significant vibration excitation and noise. Presently available computer-numerically-controlled dedicated gear metrology equipment can measure such error patterns on a gear in a few hours in sufficient detail to enable

  12. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol

    International Nuclear Information System (INIS)

    Horoshko, D B

    2007-01-01

    The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth Seminar in Memory of D.N. Klyshko)

  14. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service(ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  16. Distribution of ¹³⁷Cs in the Surface Soil of Serpong Nuclear Site

    Energy Technology Data Exchange (ETDEWEB)

    Lubis, E., E-mail: erlub@batan.go.id [Center for Radioactive Waste Technology, National Nuclear Energy Agency, Serpong (Indonesia)

    2011-08-15

    The distribution of ¹³⁷Cs in the surface soil layer of Serpong Nuclear Site (SNS) was investigated by field sampling. The objective of the investigation is to find the profile of the ¹³⁷Cs distribution in the surface soil and the transfer factor (T_f) values that can be used for estimating the radiation dose from livestock product-man pathways. The results indicate that the ¹³⁷Cs activity in the surface soil of SNS is 0.80 ± 0.29 Bq/kg, much lower than in the Antarctic. The contribution of ¹³⁷Cs from the operation of the G.A. Siwabessy reactor is so far undetectable. The T_f values of ¹³⁷Cs from surface soil to Pennisetum purpureum, Setaria sphacelata and Imperata cylindrica grasses were 0.71 ± 0.14, 0.84 ± 0.27 and 0.81 ± 0.11, respectively. The results show that the value of the transfer factor of ¹³⁷Cs varies between cultivated and uncultivated soil, and also for soils with thick humus. (author)

  17. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy ...

    Indian Academy of Sciences (India)

    2007), we find that 3.796% of the data are outliers beyond 2.6σ, based on the average total observational error of the distance modulus of SNIa, 0.31 mag. Obviously, the distance modulus error deviates seriously from a Gaussian distribution, and it is not suitable to calculate the systematic error σ_sys of SNIa by the χ² test method.

  18. Spatial distribution of heterocyclic organic matter compounds at macropore surfaces in Bt-horizons

    Science.gov (United States)

    Leue, Martin; Eckhardt, Kai-Uwe; Gerke, Horst H.; Ellerbrock, Ruth H.; Leinweber, Peter

    2017-04-01

    The illuvial Bt-horizon of Luvisols is characterized by coatings of clay and organic matter (OM) at the surfaces of cracks, biopores and inter-aggregate spaces. The OM composition of the coatings that originate from preferential transport of suspended matter in macropores determines the physico-chemical properties of the macropore surfaces. The analysis of the spatial distribution of specific OM components such as heterocyclic N-compounds (NCOMP) and benzonitrile and naphthalene (BN+NA) could elucidate the effect of macropore coatings on the transport of colloids and reactive solutes during preferential flow and on OM turnover processes in subsoils. The objective was to characterize the mm-to-cm scale spatial distribution of NCOMP and BN+NA at intact macropore surfaces from the Bt-horizons of two Luvisols developed on loess and glacial till. In material manually separated from macropore surfaces the proportions of NCOMP and BN+NA were determined by pyrolysis-field ionization mass spectrometry (Py-FIMS). These OM compounds, likely originating from combustion residues, were found increased in crack coatings and pinhole fillings but decreased in biopore walls (worm burrows and root channels). The Py-FIMS data were correlated with signals from C=O and C=C groups and with signals from O-H groups of clay minerals as determined by Fourier transform infrared spectroscopy in diffuse reflectance mode (DRIFT). Intensive signals of C15 to C17 alkanes from long-chain alkenes as main components of diesel and diesel exhaust particulates substantiated the assumption that burning residues were prominent in the subsoil OM. The spatial distribution of NCOMP and BN+NA along the macropores was predicted by partial least squares regression (PLSR) using DRIFT mapping spectra from intact surfaces and was found closely related to the distribution of crack coatings and pinholes. The results emphasize the importance of clay coatings in the subsoil to OM sorption and stabilization.

  19. X-ray fractography by using synchrotron radiation source. Residual stress distribution just beneath fatigue fracture surface

    International Nuclear Information System (INIS)

    Akita, Koichi; Yoshioka, Yasuo; Suzuki, Hiroshi; Sasaki, Toshihiko

    2000-01-01

    The residual stress distributions just beneath the fatigue fracture surface were measured using synchrotron radiation with three different wavelengths, i.e., three different penetration depths. The residual stress distributions were estimated from the three kinds of diffraction data by the following process. First, a temporary residual stress distribution in the depth direction is assumed. Theoretical 2θ–sin²ψ diagrams for each wavelength, where each has a different penetration depth, are calculated by the cosψ method developed by one of the authors. The sum total of the differences between the theoretical and experimental values of the diffraction angle in the 2θ–sin²ψ diagrams is calculated. This total is minimized by changing the assumed stress distribution with the quasi-Newton optimization method. Finally, optimized 2θ–sin²ψ diagrams for each penetration depth and a detailed stress distribution are determined. The true surface residual stress is obtained from this stress distribution. No effect of the load ratio R (= P_min/P_max) on the residual stresses of the fatigue fracture surfaces in low-carbon steels was observed when the sin²ψ method was used for stress measurement. However, the residual stresses became higher with increasing R when they were measured by the proposed method. On the basis of this, the stress intensity factor range, ΔK, can be estimated from the residual stress on the fatigue fracture surface. (author)
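
    In outline, the fitting loop is ordinary nonlinear least squares; a toy version with an assumed exponential stress profile and synthetic "measurements", with scipy's quasi-Newton BFGS standing in for the paper's optimizer:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Assume sigma(z) = s0*exp(-z/z0), model the stress sampled at each X-ray
    # penetration depth tau as an attenuation-weighted average over depth,
    # and recover (s0, z0) by quasi-Newton minimization of the misfit.
    z = np.linspace(0.0, 50.0, 501)                  # depth grid (um)
    depths = np.array([2.0, 5.0, 10.0])              # penetration depths (assumed)

    def mean_stress(params, tau):
        s0, z0 = params
        w = np.exp(-z / tau)                         # X-ray attenuation weighting
        sigma = s0 * np.exp(-z / max(z0, 1e-6))
        return float(w @ sigma / w.sum())

    true = (350.0, 8.0)                              # hidden profile for fake data
    rng = np.random.default_rng(1)
    data = np.array([mean_stress(true, t) for t in depths]) + rng.normal(0, 5, 3)

    def misfit(params):
        return sum((mean_stress(params, t) - data[i]) ** 2
                   for i, t in enumerate(depths))

    fit = minimize(misfit, x0=np.array([100.0, 5.0]), method="BFGS")
    print("recovered s0 [MPa], z0 [um]:", np.round(fit.x, 1))
    ```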

  20. Surface ozone in China: present-day distribution and long-term changes

    Science.gov (United States)

    Xu, X.; Lin, W.; Xu, W.

    2017-12-01

    Reliable knowledge of spatio-temporal variations of surface ozone is highly needed to assess the impacts of ozone on human health, ecosystems and climate. Although regional distributions and trends of surface ozone in European and North American countries have been well characterized, little is known about the variability of surface ozone in many other countries, including China, where emissions of ozone precursors have been changing rapidly in recent decades. Here we present the first comprehensive description of the present-day (2013-2017) distribution and long-term changes of surface ozone in mainland China. Recent ozone measurements from China's air quality monitoring network (AQMN) are analyzed to show present-day distributions of a few ozone exposure metrics for the urban environment. Long-term measurements of ozone at six background sites, a rural site, and an urban site are used to study the trends of ozone in background, rural and urban air, respectively. The average levels of ozone at the AQMN sites (mainly urban) are close to those found at many European and North American sites. However, ozone at most of the sites shows very large diurnal and seasonal variations, so that ozone nonattainment can occur in many cities, particularly those in the North China Plain (NCP), the south of Northeast China (NEC), the Yangtze River Delta (YRD), the Pearl River Delta (PRD), and the Sichuan Basin-Chongqing region (SCB). In all these regions, particularly in the NCP, the maximum daily 8-h average (MDA8) ozone concentration can significantly exceed the national limit (75 ppb). High values of the annual sum of ozone means over 35 ppb (SOMO35) exist mainly in the NCP, NEC and YRD, with regional averages over 4000 ppb·d. Surface ozone has significantly increased at Waliguan (a baseline site in western China) and Shangdianzi (a background site in the NCP), and decreased in winter and spring at Longfengshan (a background site in Northeast China). No clear trend can be derived from long-term measurements

  1. Surface-water radon-222 distribution along the west-central Florida shelf

    Science.gov (United States)

    Smith, C.G.; Robbins, L.L.

    2012-01-01

    In February 2009 and August 2009, the spatial distribution of radon-222 in surface water was mapped along the west-central Florida shelf as a collaboration between the Response of Florida Shelf Ecosystems to Climate Change project and a U.S. Geological Survey Mendenhall Research Fellowship project. This report summarizes the surface distribution of radon-222 from the two cruises and evaluates potential physical controls on radon-222 fluxes. Radon-222 is an inert gas produced overwhelmingly in sediment and has a short half-life of 3.8 days; activities in surface water ranged between 30 and 170 becquerels per cubic meter. Overall, radon-222 activities were enriched in nearshore surface waters relative to offshore waters. Dilution in offshore waters is expected to be the cause of the low offshore activities. While thermal stratification of the water column during the August survey may explain higher radon-222 activities relative to the February survey, radon-222 activity and integrated surface-water inventories decreased exponentially from the shoreline during both cruises. By estimating radon-222 evasion by wind from nearby buoy data and accounting for internal production from dissolved radium-226, its radiogenic long-lived parent, a simple one-dimensional model was implemented to determine the role that offshore mixing, benthic influx, and decay have on the distribution of excess radon-222 inventories along the west Florida shelf. For multiple statistically based boundary condition scenarios (first quartile, median, third quartile, and maximum radon-222 inshore of 5 kilometers), the cross-shelf mixing rates and average nearshore submarine groundwater discharge (SGD) rates varied from 10^0.38 to 10^-3.4 square kilometers per day and 0.00 to 1.70 centimeters per day, respectively. This dataset and modeling provide the first attempt to assess cross-shelf mixing and SGD on such a large spatial scale. Such estimates help scale up SGD rates that are often made at 1- to 10-meter
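
    The one-dimensional reading of those numbers can be sketched as follows: under a steady diffusion-decay balance, the excess radon-222 inventory falls off as exp(-x/L), so a log-linear fit of inventory against offshore distance gives a length scale and hence a mixing rate. All transect values below are invented for illustration.

    ```python
    import numpy as np

    # Balance K * d2C/dx2 = lambda * C gives C ~ exp(-x/L) with L = sqrt(K/lambda),
    # so a fitted e-folding length L yields the cross-shelf mixing rate K.
    lam = np.log(2.0) / 3.8                          # 222Rn decay constant, 1/day
    x_km = np.array([1.0, 2.0, 5.0, 10.0, 20.0])     # distance offshore
    inventory = np.array([580.0, 350.0, 78.0, 6.4, 0.04])   # Bq/m^2 (assumed)

    slope, _ = np.polyfit(x_km, np.log(inventory), 1)
    L = -1.0 / slope                                 # e-folding length, km
    K = lam * L ** 2                                 # mixing rate, km^2/day
    print(f"L = {L:.1f} km  ->  K = {K:.2f} km^2/day (10^{np.log10(K):.2f})")
    ```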

  2. The three-dimensional elemental distribution based on the surface topography by confocal 3D-XRF analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Longtao; Qin, Min; Wang, Kai; Peng, Shiqi; Sun, Tianxi; Liu, Zhiguo [Beijing Normal University, College of Nuclear Science and Technology, Beijing (China); Lin, Xue [Northwest University, School of Cultural Heritage, Xi' an (China)

    2016-09-15

    Confocal three-dimensional micro-X-ray fluorescence (3D-XRF) is a good surface analysis technology widely used to analyse elements and elemental distributions. However, it has rarely been applied to analyse surface topography and 3D elemental mapping in surface morphology. In this study, a surface adaptive algorithm using the progressive approximation method was designed to obtain surface topography. A series of 3D elemental mapping analyses in surface morphology were performed in laboratories to analyse painted pottery fragments from the Majiayao Culture (3300-2900 BC). To the best of our knowledge, for the first time, sample surface topography and 3D elemental mapping were simultaneously obtained. Besides, component and depth analyses were also performed using synchrotron radiation confocal 3D-XRF and tabletop confocal 3D-XRF, respectively. The depth profiles showed that the sample has a layered structure. The 3D elemental mapping showed that the red pigment, black pigment, and pottery coat contain a large amount of Fe, Mn, and Ca, respectively. From the 3D elemental mapping analyses at different depths, a 3D rendering was obtained, clearly showing the 3D distributions of the red pigment, black pigment, and pottery coat. Compared with conventional 3D scanning, this method is time-efficient for analysing 3D elemental distributions and hence especially suitable for samples with non-flat surfaces. (orig.)

  4. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
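
    The headline point, that correlated errors partially cancel in a difference, is one line of variance algebra; a sketch with an assumed per-measurement error:

    ```python
    import numpy as np

    # Growth is the difference of two sizing measurements, g = d2 - d1, so
    # var(g) = var(d1) + var(d2) - 2*cov(d1, d2). Errors common to both
    # measurements cancel in the difference; treating them as independent
    # (rho = 0) overstates the growth uncertainty. Sigma is an assumed value.
    sigma = 0.10                                 # per-measurement sizing error
    for rho in (0.0, 0.5, 0.9):                  # inter-measurement correlation
        growth_sigma = np.sqrt(2.0 * sigma**2 * (1.0 - rho))
        print(f"rho = {rho:.1f}: growth error = {growth_sigma:.3f}")
    ```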

  5. Technology and medication errors: impact in nursing homes.

    Science.gov (United States)

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed by: total errors; medication error type; severity; and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technologies such as electronic prescribers or bar-code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  6. Ultrafast all-optical switching and error-free 10 Gbit/s wavelength conversion in hybrid InP-silicon on insulator nanocavities using surface quantum wells

    Energy Technology Data Exchange (ETDEWEB)

    Bazin, Alexandre; Monnier, Paul; Beaudoin, Grégoire; Sagnes, Isabelle; Raj, Rama [Laboratoire de Photonique et de Nanostructures (CNRS UPR20), Route de Nozay, Marcoussis 91460 (France); Lenglé, Kevin; Gay, Mathilde; Bramerie, Laurent [Université Européenne de Bretagne (UEB), 5 Boulevard Laënnec, 35000 Rennes (France); CNRS-Foton Laboratory (UMR 6082), Enssat, BP 80518, 22305 Lannion Cedex (France); Braive, Rémy; Raineri, Fabrice, E-mail: fabrice.raineri@lpn.cnrs.fr [Laboratoire de Photonique et de Nanostructures (CNRS UPR20), Route de Nozay, Marcoussis 91460 (France); Université Paris Diderot, Sorbonne Paris Cité, 75207 Paris Cedex 13 (France)

    2014-01-06

    Ultrafast switching with low energies is demonstrated using InP photonic crystal nanocavities embedding InGaAs surface quantum wells, heterogeneously integrated with silicon-on-insulator waveguide circuitry. Thanks to the engineered enhancement of surface non-radiative recombination of carriers, a switching time as fast as 10 ps is obtained. These hybrid nanostructures are shown to be capable of systems-level performance by demonstrating error-free wavelength conversion at 10 Gbit/s with 6 mW switching powers.

  7. Method of forecasting power distribution

    International Nuclear Information System (INIS)

    Kaneto, Kunikazu.

    1981-01-01

    Purpose: To obtain highly accurate forecasting results by reflecting the signals from neutron detectors disposed in the reactor core in the forecast results. Method: An on-line computer transfers to a simulator process data, such as coolant temperature and flow rate in each section, and various measuring signals, such as control rod positions, from the nuclear reactor. The simulator calculates the present power distribution before the control operation. The signals of the neutron detectors at each position in the reactor core are estimated from this power distribution, and errors are determined from the estimated and measured values to give a smooth error distribution in the axial direction. Then, the input conditions at the time to be forecast are set by a data setter. The simulator calculates the forecast power distribution after the control operation based on the set conditions. The forecast power distribution is corrected using the error distribution. (Yoshino, Y.)
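
    A minimal numerical sketch of the correction step; the node count, signal shapes, and bias below are invented, and the real system works on full power distributions rather than a single axial profile.

    ```python
    import numpy as np

    # Compare measured in-core detector signals with the simulator's estimates,
    # smooth the multiplicative error axially, and scale the forecast power
    # distribution by the smoothed error.
    z = np.arange(24)                                     # axial detector nodes
    estimated = 1.0 + 0.2 * np.sin(np.pi * z / 23)        # simulator estimate
    measured = estimated * (1.03 + 0.01 * np.cos(z / 4))  # stand-in for readings

    error = measured / estimated
    kernel = np.ones(5) / 5.0                             # simple moving average
    smooth = np.convolve(np.pad(error, 2, mode="edge"), kernel, mode="valid")

    forecast = 1.0 + 0.25 * np.sin(np.pi * z / 23)        # post-operation forecast
    corrected = forecast * smooth                         # reflect detector evidence
    print(np.round(corrected[:6], 3))
    ```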

  8. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  9. Patterns of distribution of phosphomono-esterases on surfaces of demineralized bone

    DEFF Research Database (Denmark)

    Kirkeby, S; Vilmann, H

    1979-01-01

    Decalcification over short periods (5 days) with MnNa2 EDTA, MgNa2 EDTA and EGTA according to a method described in the present paper, creates sections of high quality with simultaneous good preservation of phosphomonoesterases on bone surfaces. In fact, the enzyme distribution seems to be compar...

  10. Controlling charges distribution at the surface of a single GaN nanowire by in-situ strain

    Directory of Open Access Journals (Sweden)

    Xiao Chen

    2017-08-01

    The effect of strain on the charge distribution at the surface of a GaN semiconductor nanowire (NW) has been investigated inside a transmission electron microscope (TEM) by in-situ off-axis electron holography. The outer and inner surfaces of the NW, bent axially under compression between two Au electrodes, were strained differently, resulting in a difference of their Fermi levels. Consequently, free electrons flow from the high Fermi level to the low one until the two Fermi levels align. The potential distributions induced by the charge redistribution on the two vacuum sides of the bent NW were examined, and the opposite nature of the bound charges on the outer and inner surfaces of the bent NW was identified. The results provide experimental evidence that the charge distribution at the surface of a single GaN NW can be controlled by the strain applied along the NW.

  11. Radioactivity distribution measurement of various natural material surfaces with imaging plate

    International Nuclear Information System (INIS)

    Mori, C.; Suzuki, T.; Koido, S.; Uritani, A.; Yanagida, K.; Wu, Y.; Nishizawa, K.

    1996-01-01

    Distribution images of natural radioactivity in natural materials such as vegetables were obtained by using an Imaging Plate. In such cases, it is necessary to reduce the background radiation intensity by one order of magnitude or more. Graded shielding is very important; especially, the innermost surface of the shielding box should be covered with an acrylic resin plate. We obtained natural radioactivity distribution images of vegetables, seafood, meat, etc. Most of the β-rays emitted from ⁴⁰K print the radioactivity distribution image. Comparison between the γ-ray intensity of a KCl solution measured with an HPGe detector and that of the natural material specimens gave radioactivities of around 0.06-0.04 Bq/g, depending on the kind and the part of the specimen. (author). 6 refs., 5 figs., 1 tab

  12. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributable to human error ranged widely, from 20∼85%, and was 35% on average. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which has not varied much. Therefore, the proportion attributable to human error has tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of the research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  13. Tuning Nanocrystal Surface Depletion by Controlling Dopant Distribution as a Route Toward Enhanced Film Conductivity

    Science.gov (United States)

    Staller, Corey M.; Robinson, Zachary L.; Agrawal, Ankit; Gibbs, Stephen L.; Greenberg, Benjamin L.; Lounis, Sebastien D.; Kortshagen, Uwe R.; Milliron, Delia J.

    2018-05-01

    Electron conduction through bare metal oxide nanocrystal (NC) films is hindered by surface depletion regions resulting from the presence of surface states. We control the radial dopant distribution in tin-doped indium oxide (ITO) NCs as a means to manipulate the NC depletion width. We find in films of ITO NCs of equal overall dopant concentration that those with dopant-enriched surfaces show decreased depletion width and increased conductivity. Variable-temperature conductivity data show that the electron localization length increases and the associated depletion width decreases monotonically with increasing dopant density near the NC surface. We calculate band profiles for NCs of differing radial dopant distributions and, in agreement with variable-temperature conductivity fits, find NCs with dopant-enriched surfaces have narrower depletion widths and longer localization lengths than those with dopant-enriched cores. Following amelioration of NC surface depletion by atomic layer deposition of alumina, all films of equal overall dopant concentration have similar conductivity. Variable-temperature conductivity measurements on alumina-capped films indicate all films behave as granular metals. Herein, we conclude that dopant-enriched surfaces decrease the near-surface depletion region, which directly increases the electron localization length and conductivity of NC films.

  14. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who wish to be less dependent on spectacles or contact lenses. Until recently, radial keratotomy (RK) was the most commonly performed refractive procedure for nearsighted patients.Conclusions: The introduction of the excimer laser in refractive surgery has provided new opportunities for remodelling the cornea. The laser energy can be delivered onto the stromal surface, as in PRK, or deeper into the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  15. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  16. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have

  17. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
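
The sparse-regression step described in the three records above lends itself to a compact illustration. The sketch below, under stated assumptions, approximates a target point cloud as a sparse linear combination of training clouds via an L1-penalized fit; the synthetic data, the Lasso penalty weight, and the pre-built correspondences are all illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical data: K training point clouds and one target cloud, each
# with N corresponding points in 3D (correspondences assumed already
# built, e.g. by ICP as in the abstracts above).
rng = np.random.default_rng(0)
K, N = 20, 500
training = rng.normal(size=(K, N, 3))            # stand-in training set
target = 0.6 * training[3] + 0.4 * training[7]   # sparse mix of two clouds
target += rng.normal(scale=0.01, size=(N, 3))    # Gaussian residuals

# Flatten each cloud into a column and solve target ~ A @ w with sparse w.
A = training.reshape(K, -1).T                    # (3N, K) dictionary
b = target.reshape(-1)
model = Lasso(alpha=1e-3, fit_intercept=False)
model.fit(A, b)
w = model.coef_
reconstruction = (A @ w).reshape(N, 3)
rmse = np.sqrt(np.mean((reconstruction - target) ** 2))
print("nonzero weights:", np.flatnonzero(w), "RMSE:", rmse)
```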

  18. Error Mitigation for Short-Depth Quantum Circuits

    Science.gov (United States)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant to current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
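
As a concrete illustration of the first scheme, the sketch below applies Richardson-style extrapolation to a toy noisy expectation value; the noise-amplification factors and the toy bias model are assumptions for illustration, not the authors' experimental protocol.

```python
import numpy as np

def zero_noise_extrapolate(expectation_fn, scales=(1.0, 2.0, 3.0)):
    # expectation_fn(c) returns the noisy expectation value measured with
    # the base error rate amplified by the factor c (e.g. via pulse
    # stretching). Fitting a polynomial in c and evaluating it at c = 0
    # cancels the leading powers of the noise perturbation.
    values = [expectation_fn(c) for c in scales]
    coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
    return np.polyval(coeffs, 0.0)

# Toy model: true value 1.0, noise adds linear and quadratic bias.
noisy = lambda c: 1.0 - 0.08 * c + 0.002 * c**2
print(zero_noise_extrapolate(noisy))  # ~1.0
```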

  19. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  20. Transmission comb of a distributed Bragg reflector with two surface dielectric gratings

    KAUST Repository

    Zhao, Xiaobo; Zhang, Yongyou; Zhang, Qingyun; Zou, Bingsuo; Schwingenschlögl, Udo

    2016-01-01

    The transmission behaviour of a distributed Bragg reflector (DBR) with surface dielectric gratings on top and bottom is studied. The transmission shows a comb-like spectrum in the DBR band gap, which is explained in the Fano picture. The number

  1. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  2. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    Science.gov (United States)

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of definite and indefinite article errors in writing…

  3. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-01-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for σ = Σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) D98 (the dose received by 98% of the volume), clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error
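
The simulation steps described above are easy to sketch. Assuming a Gaussian setup-error density on a 1 mm fluence grid, the fragment below blurs a toy fluence map for the random component and rigidly shifts it for a sampled systematic component; the map, grid spacing, and σ values are illustrative, not the study's clinical data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical 2D fluence map (arbitrary units) on a 1 mm grid.
fluence = np.zeros((101, 101))
fluence[30:70, 30:70] = 1.0      # simple open-field region

# Random setup error: convolve the fluence with the setup-error
# probability density (here an isotropic Gaussian, sigma = 3 mm).
sigma_mm, grid_mm = 3.0, 1.0
blurred = gaussian_filter(fluence, sigma=sigma_mm / grid_mm)

# Systematic setup error: rigidly shift the map by a draw from N(0, Sigma).
rng = np.random.default_rng(1)
shift_mm = rng.normal(0.0, 3.0, size=2)
shifted = np.roll(fluence, tuple(shift_mm.round().astype(int)), axis=(0, 1))
```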

  4. Refractive error magnitude and variability: Relation to age.

    Science.gov (United States)

    Irving, Elizabeth L; Machan, Carolyn M; Lam, Sharon; Hrynchak, Patricia K; Lillakas, Linda

    2018-03-19

    To investigate mean ocular refraction (MOR) and astigmatism over the human age range, and to compare the severity of refractive error to earlier studies from clinical populations having large age ranges. For this descriptive study, patient age, refractive error and history of surgery affecting refraction were abstracted from the Waterloo Eye Study database (WatES). Average MOR, standard deviation of MOR and astigmatism were assessed in relation to age. Refractive distributions for developmental age groups were determined. MOR standard deviation relative to average MOR was evaluated. Data from earlier clinically based studies with similar age ranges were compared to WatES. Right eye refractive errors were available for 5933 patients with no history of surgery affecting refraction. Average MOR varied with age. Children <1 yr of age were the most hyperopic (+1.79D) and the highest magnitude of myopia was found at 27 yrs (-2.86D). MOR distributions were leptokurtic and negatively skewed. The mode varied with age group. MOR variability increased with increasing myopia. Average astigmatism increased gradually to age 60, after which it increased at a faster rate. By 85+ years it was 1.25D. The J0 power vector became increasingly negative with age. J45 power vector values remained close to zero, but variability increased at approximately 70 years. In relation to comparable earlier studies, WatES data were the most myopic. Mean ocular refraction and refractive error distribution vary with age. The highest magnitude of myopia is found in young adults. Similar to prevalence, the severity of myopia also appears to have increased since 1931. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  5. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
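
A Monte Carlo sketch of the Beckmann pointing-error model is given below: the radial beam displacement is the norm of two independent, generally nonzero-mean Gaussian jitters. All parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Beckmann pointing error: r = sqrt(x^2 + y^2), where the horizontal and
# vertical jitters x, y are independent Gaussians with (generally
# nonzero) means and distinct standard deviations.
rng = np.random.default_rng(42)
mu_x, mu_y, s_x, s_y = 0.5, 0.2, 1.0, 0.7   # displacement at receiver (m)
x = rng.normal(mu_x, s_x, size=1_000_000)
y = rng.normal(mu_y, s_y, size=1_000_000)
r = np.hypot(x, y)                          # Beckmann-distributed radius
print("mean radial displacement:", r.mean())
print("95th percentile:", np.quantile(r, 0.95))
```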

  6. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by the combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth (TCCON) data, the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  7. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...
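
A minimal sketch of the ensemble idea, assuming a toy scalar model fitted by least squares: parameter sets are sampled with probability proportional to exp(-cost/T) via a simple Metropolis walk, and the spread of ensemble predictions supplies the error bar. The model, temperature, and proposal widths are illustrative, not the interatomic-potential setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, x.size)   # synthetic data

def cost(theta):
    a, b = theta
    return np.sum((a * x + b - y) ** 2)           # least-squares cost

# Metropolis sampling of models with probability ~ exp(-cost/T).
T, theta, samples = 0.02, np.array([1.0, 0.0]), []
c = cost(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    cp = cost(prop)
    if rng.random() < np.exp(-(cp - c) / T):
        theta, c = prop, cp
    samples.append(theta.copy())
ensemble = np.array(samples[5000:])               # discard burn-in

# Error bar on a prediction = spread of the ensemble predictions.
preds = ensemble[:, 0] * 0.5 + ensemble[:, 1]     # prediction at x = 0.5
print("prediction:", preds.mean(), "+/-", preds.std())
```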

  8. Multiresolution molecular mechanics: Surface effects in nanoscale materials

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu

    2017-05-01

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang and To (2015)) is applied to capture surface effects for nanosized structures by designing a surface summation rule SR^S within the framework of MMM. Combined with the previously proposed bulk summation rule SR^B, the MMM summation rule SR^MMM is completed. SR^S and SR^B are consistently formed within SR^MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SR^MMM lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface area differs from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SR^S and SR^B are employed for the surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR^MMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR^MMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced with SR^MMM, analogous to the numerical integration error with quadrature rules in FEM, is very small.

  9. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Janicki, Ryan; Carroll, Raymond J.

    2010-01-01

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal

  10. Equating error in observed-score equating

    NARCIS (Netherlands)

    van der Linden, Willem J.

    2006-01-01

    Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and population of test takers. But it is argued that if the goal of equating is to adjust the scores of

  11. Software errors and complexity: An empirical investigation

    Science.gov (United States)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  12. ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM

    Directory of Open Access Journals (Sweden)

    Vika Agustina

    2015-05-01

    Full Text Available This study was conducted to identify the kinds of errors in the surface strategy taxonomy and to find the dominant type of errors made by fifth-semester students of the English Department of one State University in Malang, Indonesia, in producing their travel writing. The type of research of this study is document analysis, since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students based on surface strategy taxonomy theory consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent misformation errors occur in the use of tense forms. The second most frequent are omissions of noun/verb inflections. Next, many clauses contain unnecessarily added phrases.

  13. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  14. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
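
For the diagonal (variance) terms, the confidence-interval construction mentioned above can be sketched with the standard chi-square interval for a sample variance; the sample size and sample variance below are hypothetical.

```python
from scipy import stats

# Confidence interval for a variance (a diagonal term of the empirical
# error covariance matrix) via the chi-square distribution.
n, s2, alpha = 50, 4.0, 0.05
lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
print(f"95% CI for the variance: [{lower:.2f}, {upper:.2f}]")
```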

  15. Bio-Optical Data Assimilation With Observational Error Covariance Derived From an Ensemble of Satellite Images

    Science.gov (United States)

    Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter

    2018-03-01

    An ensemble-based approach to specifying the observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from the statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in the Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has a higher RMSE than the data assimilation run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but a smaller or comparable RMSE than the data assimilation run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms to total phytoplankton is reduced by a factor of two in comparison to the nonassimilative run.
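
A minimal sketch of an Optimal Interpolation analysis step with an ensemble-derived observational error covariance is given below. The state size, observation operator, forecast covariance, and ensemble are illustrative stand-ins; in the paper the forecast covariance lives in a multivariate EOF subspace rather than the scaled identity used here.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 100, 20                        # state size, number of observations
H = np.zeros((m, n))
H[np.arange(m), np.arange(m) * 5] = 1.0   # toy observation operator

ensemble = rng.normal(size=(30, m))   # stand-in ensemble of satellite images
R = np.cov(ensemble, rowvar=False)    # observational error covariance
B = 0.5 * np.eye(n)                   # forecast error covariance (identity
                                      # here; an EOF subspace in the paper)
xb = rng.normal(size=n)               # background (forecast) state
yo = H @ xb + rng.multivariate_normal(np.zeros(m), R)   # synthetic obs

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal-interpolation gain
xa = xb + K @ (yo - H @ xb)                    # analysis state
```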

  16. Codon Distribution in Error-Detecting Circular Codes

    Directory of Open Access Journals (Sweden)

    Elena Fimmel

    2016-03-01

    Full Text Available In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes.

  17. Codon Distribution in Error-Detecting Circular Codes.

    Science.gov (United States)

    Fimmel, Elena; Strüngmann, Lutz

    2016-03-15

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick's hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes.
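
The comma-free property at the root of this line of work is simple to test by machine. The sketch below checks Crick's condition, that no codon of the code may appear straddling the boundary of two concatenated codons; the example codes are toy inputs, not codes from the paper.

```python
from itertools import product

def is_comma_free(code):
    # Crick's comma-free condition: no codon of the code may appear
    # straddling the boundary of any two concatenated codons.
    code = set(code)
    for a, b in product(code, repeat=2):
        w = a + b
        if w[1:4] in code or w[2:5] in code:   # shifted reading frames
            return False
    return True

print(is_comma_free({"ACG"}))   # True: "CGA", "GAC" are not in the code
print(is_comma_free({"AAA"}))   # False: "AAA" reappears out of frame
```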

  18. Nuclear momentum distribution and potential energy surface in hexagonal ice

    Science.gov (United States)

    Lin, Lin; Morrone, Joseph; Car, Roberto; Parrinello, Michele

    2011-03-01

    The proton momentum distribution in ice Ih has recently been measured by deep inelastic neutron scattering and calculated from open-path-integral Car-Parrinello simulation. Here we report a detailed investigation of the relation between the momentum distribution and the potential energy surface based on both experimental and simulation results. The potential experienced by the proton is largely harmonic and characterized by three principal frequencies, which can be associated with weighted averages of phonon frequencies via lattice dynamics calculations. This approach also allows us to examine the importance of quantum effects on the dynamics of the oxygen nuclei close to the melting temperature. Finally we quantify the anharmonicity present in the potential acting on the protons. This work is supported by NSF and by DOE.

  19. Ride control of surface effect ships using distributed control

    Directory of Open Access Journals (Sweden)

    Asgeir J. Sørensen

    1994-04-01

    Full Text Available A ride control system for active damping of heave and pitch accelerations of Surface Effect Ships (SES) is presented. It is demonstrated that distributed effects due to a spatially varying pressure in the air cushion result in significant vertical vibrations in low and moderate sea states. In order to achieve high-quality human comfort and crew workability it is necessary to reduce these vibrations using a control system which accounts for distributed effects due to spatial pressure variations in the air cushion. A mathematical model of the process is presented, and collocated sensor and actuator pairs are used. The process stability is ensured using a controller with appropriate passivity properties. Sensor and actuator location is also discussed. The performance of the ride control system is shown by power spectra of the vertical accelerations obtained from full-scale experiments with a 35 m SES.

  20. Mirror surface metrology and polishing for AXAF/TMA

    International Nuclear Information System (INIS)

    Slomba, A.; Babish, R.; Glenn, P.

    1985-01-01

    The achievement of the derived goals for mirror surface quality on the Advanced X-ray Astrophysics Facility (AXAF), Technology Mirror Assembly (TMA) required a combination of state-of-the-art metrology and polishing techniques. In this paper, the authors summarize the derived goals and cover the main facets of the various metrology instruments employed, as well as the philosophy and technique used in the polishing work. In addition, they show how progress was measured against the goals, using the detailed error budget for surface errors and a mathematical model for performance prediction. The metrology instruments represented a considerable advance on the state-of-the-art and fully satisfied the error budget goals for the various surface errors. They were capable of measuring the surface errors over a large range of spatial periods, from low-frequency figure errors to microroughness. The polishing was accomplished with a computer-controlled process, guided by the combined data from various metrology instruments. This process was also tailored to reduce the surface errors over the full range of spatial periods

  1. Wind speed errors for LIDARs and SODARs in complex terrain

    International Nuclear Information System (INIS)

    Bradley, S

    2008-01-01

    All commercial LIDARs and SODARs are monostatic and hence sample distributed volumes to construct wind vector components. We use an analytic potential flow model to estimate the errors arising for a range of LIDAR and SODAR configurations on hills and escarpments. Wind speed errors peak at a height relevant to wind turbines and can typically reach 20%

  2. Wind speed errors for LIDARs and SODARs in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, S [Physics Department, The University of Auckland, Private Bag 92019, Auckland (New Zealand) and School of Computing, Science and Engineering, University of Salford, M5 4WT (United Kingdom)], E-mail: s.bradley@auckland.ac.nz

    2008-05-01

    All commercial LIDARs and SODARs are monostatic and hence sample distributed volumes to construct wind vector components. We use an analytic potential flow model to estimate the errors arising for a range of LIDAR and SODAR configurations on hills and escarpments. Wind speed errors peak at a height relevant to wind turbines and can typically reach 20%.

  3. Error bounds for molecular Hamiltonians inverted from experimental data

    International Nuclear Information System (INIS)

    Geremia, J.M.; Rabitz, Herschel

    2003-01-01

    Inverting experimental data provides a powerful technique for obtaining information about molecular Hamiltonians. However, rigorously quantifying how laboratory error propagates through the inversion algorithm has always presented a challenge. In this paper, we develop an inversion algorithm that realistically treats experimental error. It propagates the distribution of observed laboratory measurements into a family of Hamiltonians that are statistically consistent with the distribution of the data. This algorithm is built upon the formalism of map-facilitated inversion to alleviate computational expense and permit the use of powerful nonlinear optimization algorithms. Its capabilities are demonstrated by identifying inversion families for the X¹Σg⁺ and a³Σu⁺ states of Na₂ that are consistent with the laboratory data

  4. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides for the reconstruction method the effective multiplication factor of the problem and the four surface average fluxes in homogeneous nodes with the size of a fuel assembly (FA). The reconstruction process combines the discretized 2D diffusion equation by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three nodes and two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
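
The fourth-order expansion step described above can be sketched directly: five coefficients are fixed by three nodal average fluxes and two corner fluxes. The node width and flux values below are hypothetical, and the condition layout is a plausible reading of the abstract rather than the authors' exact system.

```python
import numpy as np

h = 20.0                      # node width (cm), hypothetical
avg = [1.00, 1.10, 0.95]      # average fluxes on three consecutive nodes
corner = [1.06, 1.02]         # corner fluxes bracketing the middle node

# Unknowns c0..c4 of phi(x) = sum_k c_k x^k on the middle node [0, h].
# Conditions: averages over [-h, 0], [0, h], [h, 2h], values at 0 and h.
def avg_row(a, b):
    # row of the linear system: average of x^k over [a, b] for k = 0..4
    return [(b**(k + 1) - a**(k + 1)) / ((k + 1) * (b - a)) for k in range(5)]

A = np.array([avg_row(-h, 0), avg_row(0, h), avg_row(h, 2 * h),
              [1, 0, 0, 0, 0],                 # phi(0) = first corner flux
              [1, h, h**2, h**3, h**4]])       # phi(h) = second corner flux
b = np.array(avg + corner)
coeffs = np.linalg.solve(A, b)                 # c0..c4
print("flux at node mid-plane:", np.polyval(coeffs[::-1], h / 2))
```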

  5. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.

  6. Evaluation of protein adsorption onto a polyurethane nanofiber surface having different segment distributions

    Energy Technology Data Exchange (ETDEWEB)

    Morita, Yuko; Koizumi, Gaku [Frontier Fiber Technology and Science, Graduate School of Engineering, University of Fukui (Japan); Sakamoto, Hiroaki, E-mail: hi-saka@u-fukui.ac.jp [Tenure-Track Program for Innovative Research, University of Fukui (Japan); Suye, Shin-ichiro [Frontier Fiber Technology and Science, Graduate School of Engineering, University of Fukui (Japan)

    2017-02-01

    Electrospinning is well known to be an effective method for fabricating polymeric nanofibers with a diameter of several hundred nanometers. Recently, the molecular-level orientation within nanofibers has attracted particular attention. Previously, we used atomic force microscopy to visualize the phase separation between soft and hard segments of a polyurethane (PU) nanofiber surface prepared by electrospinning. The unstretched PU nanofibers exhibited irregularly distributed hard segments, whereas hard segments of stretched nanofibers prepared with a high-speed collector exhibited periodic structures along the long-axis direction. PU was originally used to inhibit protein adsorption, but because the surface segment distribution was changed in the stretched nanofiber, here, we hypothesized that the protein adsorption property on the stretched nanofiber might be affected. We investigated protein adsorption onto PU nanofibers to elucidate the effects of segment distribution on the surface properties of PU nanofibers. The amount of adsorbed protein on stretched PU nanofibers was increased compared with that of unstretched nanofibers. These results indicate that the hard segment alignment on stretched PU nanofibers mediated protein adsorption. It is therefore expected that the amount of protein adsorption can be controlled by rotation of the collector. - Highlights: • The hard segments of stretched PU nanofibers exhibit periodic structures. • The adsorbed protein on stretched PU nanofibers was increased compared with PU film. • The hard segment alignment on stretched PU nanofibers mediated protein adsorption.

  7. Heartbeat-based error diagnosis framework for distributed embedded systems

    Science.gov (United States)

    Mishra, Swagat; Khilar, Pabitra Mohan

    2012-01-01

    Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.

  8. A Study of Land Surface Temperature Retrieval and Thermal Environment Distribution Based on Landsat-8 in Jinan City

    Science.gov (United States)

    Dong, Fang; Chen, Jian; Yang, Fan

    2018-01-01

    Based on the medium-resolution Landsat 8 OLI/TIRS imagery, the temperature distribution in the four seasons in the urban area of Jinan City was obtained by using an atmospheric correction method for the retrieval of land surface temperature. The spatio-temporal distribution characteristics and development trend of the urban thermal environment, the seasonal variation, and the relationship between land surface temperature and the normalized difference vegetation index (NDVI) were analyzed quantitatively. The results show that the distribution of high-temperature areas is concentrated in the urban area of Jinan, with a tendency to expand from east to west, and reveal a negative correlation between the land surface temperature distribution and NDVI. The study thereby provides theoretical references and a scientific basis for improving the ecological environment of Jinan City, strengthening scientific planning, and making overall plans addressing climate change.
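
A common Landsat 8 TIRS band-10 retrieval chain (digital number to radiance to brightness temperature to emissivity-corrected LST) can be sketched as below. The calibration constants are the published band-10 values; the single-channel emissivity correction is a simplified stand-in for the paper's atmospheric correction method, and the pixel values and emissivity are hypothetical.

```python
import numpy as np

ML, AL = 3.342e-4, 0.1             # band-10 radiance rescaling factors
K1, K2 = 774.8853, 1321.0789       # band-10 thermal conversion constants

dn = np.array([[26000.0, 27500.0], [25000.0, 28000.0]])  # hypothetical DNs
radiance = ML * dn + AL                                   # TOA radiance
bt = K2 / np.log(K1 / radiance + 1.0)                     # brightness T (K)

lam = 10.895e-6                    # band-10 effective wavelength (m)
rho = 1.438e-2                     # h*c/k_B (m K)
emissivity = 0.98                  # e.g. a well-vegetated pixel
lst = bt / (1.0 + (lam * bt / rho) * np.log(emissivity))  # corrected LST
print(lst - 273.15)                # degrees Celsius
```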

  9. Effects of variable transformations on errors in FORM results

    International Nuclear Information System (INIS)

    Qin Quan; Lin Daojin; Mei Gang; Chen Hao

    2006-01-01

    On the basis of studies of the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results and shows that the signs and magnitudes of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the design point locations in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables could generate -31% errors

  10. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
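
The modification described above can be sketched with a toy Metropolis sampler in which the mean and standard deviation of a Gaussian input-data error are sampled alongside a model parameter. The one-parameter "model", the likelihood, and the proposal widths are illustrative assumptions, not the SRH-1D coupling of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 20)
obs = 2.0 * t + rng.normal(0, 0.2, t.size)       # synthetic observations

def log_post(theta, mu_in, sigma_in):
    # theta: model parameter; mu_in, sigma_in: mean and standard
    # deviation of the Gaussian error in the input forcing.
    if sigma_in <= 0:
        return -np.inf
    pred = theta * (t + mu_in)                   # toy "model" run with
                                                 # biased input forcing
    var = 0.2**2 + (theta * sigma_in)**2         # obs + propagated input error
    return -0.5 * np.sum((obs - pred)**2 / var + np.log(var))

params = np.array([1.0, 0.0, 0.1])
lp = log_post(*params)
chain = []
for _ in range(20000):
    prop = params + rng.normal(0, [0.05, 0.02, 0.02])
    lpp = log_post(*prop)
    if np.log(rng.random()) < lpp - lp:          # Metropolis acceptance
        params, lp = prop, lpp
    chain.append(params.copy())
chain = np.array(chain[5000:])                   # discard burn-in
print("posterior mean input bias:", chain[:, 1].mean())
```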

  11. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  12. Analyzing Software Errors in Safety-Critical Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1994-01-01

    This paper analyzes the root causes of safety-related software faults and finds that faults identified as potentially hazardous to the system are distributed somewhat differently over the set of possible error causes than non-safety-related software faults.

  13. Analysis and improvement of gas turbine blade temperature measurement error

    International Nuclear Information System (INIS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-01-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed. (paper)

  14. Analysis and improvement of gas turbine blade temperature measurement error

    Science.gov (United States)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
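
The iterative correction described in the two records above can be sketched for a single-wavelength pyrometer: the measured radiance mixes attenuated blade emission with reflected environmental radiation, and the blade temperature is recovered by repeatedly correcting and inverting Planck's law. The emissivity model, gas transmittance, wavelength, and temperatures are illustrative assumptions, not the paper's values.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
lam = 1.6e-6                       # pyrometer wavelength (m), illustrative

def planck(T):
    # blackbody spectral radiance at wavelength lam
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

def invert_planck(L):
    # temperature of a blackbody emitting spectral radiance L at lam
    return (h * c / (lam * k)) / np.log1p(2 * h * c**2 / (lam**5 * L))

eps = lambda T: 0.80 + 5e-5 * T    # toy temperature-dependent emissivity
tau, T_env, T_true = 0.95, 1500.0, 1100.0   # transmittance, env. temp (K)

# Detector signal: attenuated blade emission plus reflected environment.
L_meas = tau * (eps(T_true) * planck(T_true)
                + (1 - eps(T_true)) * planck(T_env))

# Iteratively correct for reflection/transmittance, then invert Planck.
T = invert_planck(L_meas / tau)    # uncorrected first guess
for _ in range(50):
    e = eps(T)
    L_blade = (L_meas / tau - (1 - e) * planck(T_env)) / e
    T = invert_planck(L_blade)
print(T)                           # converges to ~1100 K
```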

  15. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ+γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
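
The error-propagation exercise above can be mimicked with finite-difference sensitivities on a simple textbook net-radiation model (shortwave balance minus a linearized longwave loss); the model form and the perturbation sizes echo the abstract but are not the paper's exact formulation.

```python
import numpy as np

def net_radiation(Rs, T_air, rh, albedo=0.23):
    # simple textbook form: net shortwave minus net longwave loss
    sigma = 5.67e-8
    Rns = (1.0 - albedo) * Rs                                   # shortwave
    ea = rh * 0.6108 * np.exp(17.27 * T_air / (T_air + 237.3))  # kPa
    net_emissivity = 0.34 - 0.14 * np.sqrt(ea)
    Rnl = net_emissivity * sigma * (T_air + 273.16)**4          # longwave
    return Rns - Rnl

base = net_radiation(Rs=1000.0, T_air=25.0, rh=0.5)
perturbations = {
    "Rs +4%":  dict(Rs=1040.0, T_air=25.0, rh=0.5),
    "T +5 C":  dict(Rs=1000.0, T_air=30.0, rh=0.5),
    "rh +0.1": dict(Rs=1000.0, T_air=25.0, rh=0.6),
}
for name, kwargs in perturbations.items():
    print(name, round(net_radiation(**kwargs) - base, 1), "W m-2")
```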

  16. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  18. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated, i.e. all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  19. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy.

    Science.gov (United States)

    Cohen, E A K; Ober, R J

    2013-12-15

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed form solutions to the estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
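
    The scalar-covariance case above admits a simple closed-form estimator. Below is a minimal sketch, assuming scalar per-point variances and, as a simplification, weighting only the target-image coordinates (ordinary weighted least squares rather than the full errors-in-variables generalized least squares treatment of the paper); the data and names are invented for illustration.

    ```python
    import numpy as np

    def fit_affine_wls(x, y, sigma2):
        """Estimate an affine transform y ~ A @ x + t by weighted least squares.

        x, y   : (n, 2) control point coordinates in the two images.
        sigma2 : (n,) per-point localization variances (covariance matrices
                 assumed to be multiples of the identity).
        """
        w = 1.0 / np.asarray(sigma2)                 # precision weights
        X = np.hstack([x, np.ones((len(x), 1))])     # design matrix [x | 1]
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # (3, 2) parameters
        return beta[:2].T, beta[2]                   # A (2x2), t (2,)

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 100.0, (50, 2))
    A_true = np.array([[1.01, 0.02], [-0.02, 0.99]])
    t_true = np.array([5.0, -3.0])
    sigma2 = rng.uniform(0.01, 0.25, 50)             # heteroscedastic noise
    y = x @ A_true.T + t_true + rng.normal(0.0, np.sqrt(sigma2)[:, None], (50, 2))
    A_hat, t_hat = fit_affine_wls(x, y, sigma2)
    print(A_hat.round(3), t_hat.round(2))
    ```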

  20. A lattice Boltzmann simulation of coalescence-induced droplet jumping on superhydrophobic surfaces with randomly distributed structures

    Science.gov (United States)

    Zhang, Li-Zhi; Yuan, Wu-Zhi

    2018-04-01

    The motion of coalescence-induced condensate droplets on superhydrophobic surfaces (SHS) has attracted increasing attention in energy-related applications. Previous research focused on regularly rough surfaces. Here a new approach, a mesoscale lattice Boltzmann method (LBM), is proposed and used to model the dynamic behavior of coalescence-induced droplet jumping on SHS with randomly distributed rough structures. A Fast Fourier Transformation (FFT) method is used to generate non-Gaussian randomly distributed rough surfaces with the skewness (Sk), kurtosis (K) and root mean square roughness (Rq) obtained from real surfaces. Three typical spreading states of coalesced droplets are observed through LBM modeling on various rough surfaces, and these states are found to significantly influence the jumping ability of the coalesced droplet. Coalesced droplets spreading in the Cassie state or in a composite state will jump off the rough surfaces, while those spreading in the Wenzel state eventually remain on the rough surfaces. It is demonstrated that rough surfaces with smaller Sk, larger Rq and K near 3.0 are beneficial to coalescence-induced droplet jumping. The new approach gives more detailed insights into the design of SHS.
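
    A common way to realize the FFT step described above is spectral filtering of white noise. The sketch below generates a correlated Gaussian height field scaled to a target Rq; matching a prescribed non-Gaussian skewness and kurtosis, as done in the paper, requires an additional transformation (e.g. a Johnson translator system) that is omitted here, and the filter and parameters are illustrative assumptions.

    ```python
    import numpy as np

    def fft_rough_surface(n=256, dx=1.0, corr_len=8.0, rq=1.0, seed=0):
        """Correlated Gaussian rough surface via spectral filtering of noise."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((n, n))
        f = np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(f, f, indexing="ij")
        # Gaussian transfer function sets the lateral correlation length.
        h = np.exp(-(np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2) / 2.0)
        z = np.real(np.fft.ifft2(np.fft.fft2(noise) * h))
        z -= z.mean()
        z *= rq / z.std()          # enforce the target RMS roughness Rq
        return z

    z = fft_rough_surface()
    print(z.shape, round(float(z.std()), 3))   # (256, 256) 1.0
    ```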

  1. A climatology of visible surface reflectance spectra

    International Nuclear Information System (INIS)

    Zoogman, Peter; Liu, Xiong; Chance, Kelly; Sun, Qingsong; Schaaf, Crystal; Mahr, Tobias; Wagner, Thomas

    2016-01-01

    We present a high spectral resolution climatology of visible surface reflectance as a function of wavelength for use in satellite measurements of ozone and other atmospheric species. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) instrument is planned to measure backscattered solar radiation in the 290–740 nm range, including the ultraviolet and visible Chappuis ozone bands. Observation in the weak Chappuis band takes advantage of the relative transparency of the atmosphere in the visible to achieve sensitivity to near-surface ozone. However, due to the weakness of the ozone absorption features, this measurement is more sensitive to errors in visible surface reflectance, which is highly variable. We utilize reflectance measurements of individual plant, man-made, and other surface types to calculate the primary modes of variability of visible surface reflectance at a high spectral resolution, comparable to that of TEMPO (0.6 nm). Using the Moderate-resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF)/albedo product and our derived primary modes we construct a high spatial resolution climatology of wavelength-dependent surface reflectance over all viewing scenes and geometries. The Global Ozone Monitoring Experiment–2 (GOME-2) Lambertian Equivalent Reflectance (LER) product provides complementary information over water and snow scenes. Preliminary results using this approach in multispectral ultraviolet+visible ozone retrievals from the GOME-2 instrument show significant improvement in the fitting residuals over vegetated scenes. - Highlights: • Our goal was visible surface reflectance for satellite trace gas measurements. • Captured the range of surface reflectance spectra through EOF analysis. • Used satellite surface reflectance products for each given scene to anchor EOFs. • Generated a climatology of time/geometry dependent surface reflectance spectra. • Demonstrated potential to
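
    The "primary modes of variability" above are empirical orthogonal functions (EOFs). A minimal sketch of extracting such modes from a spectral library with a singular value decomposition follows; the spectra here are synthetic placeholders, not the plant and man-made library used in the paper.

    ```python
    import numpy as np

    # Synthetic stand-in for a library of reflectance spectra: rows are
    # surface types, columns are wavelength samples (0.6 nm resolution).
    rng = np.random.default_rng(1)
    wavelengths = np.linspace(400.0, 740.0, 567)
    spectra = 0.2 + 0.1 * np.sin(wavelengths[None, :] / 50.0
                                 + rng.uniform(0.0, 3.0, (40, 1)))

    mean_spectrum = spectra.mean(axis=0)
    anomalies = spectra - mean_spectrum
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)        # variance fraction per mode
    print("first 3 modes explain:", explained[:3].round(3))

    # A scene spectrum is then the mean plus a few mode coefficients, which
    # can be anchored by a broadband product such as the MODIS BRDF/albedo.
    approx = mean_spectrum + (anomalies[0] @ vt[:3].T) @ vt[:3]
    ```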

  2. On the sensitivity of numerical weather prediction to remotely sensed marine surface wind data - A simulation study

    Science.gov (United States)

    Cane, M. A.; Cardone, V. J.; Halem, M.; Halberstam, I.

    1981-01-01

    The reported investigation assesses the potential impact of remotely sensed marine surface wind data on numerical weather prediction (NWP). Other investigations conducted with similar objectives have been unsatisfactory because they used procedures that produced an unrealistic distribution of initial errors. In the current study, care has been taken to duplicate the actual distribution of information in the conventional observing system, thus shifting the emphasis from the accuracy of the data to the data coverage. This is an important consideration in assessing satellite observing systems, since experience with sounder data has shown that improvement in forecasts due to satellite-derived information stems less from a general error reduction than from the ability to fill data-sparse regions. The reported study concentrates on the evaluation of the observing system simulation experimental design and on the assessment of the potential of remotely sensed marine surface wind data.

  3. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
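
    For readers unfamiliar with the model, the sketch below implements a minimal naming game with a tunable learning error rate. It is a simplification: the interaction graph is complete rather than the random-graph, small-world or scale-free topologies studied in the paper, and the error rule (a transmitted word mutates into a brand-new word with probability p_err) is only one plausible reading of the NGLE model.

    ```python
    import random

    def naming_game_with_errors(n_agents=200, p_err=0.05, games=200_000, seed=0):
        """Minimal naming game on a complete graph with learning errors."""
        rng = random.Random(seed)
        lexicons = [set() for _ in range(n_agents)]
        next_word = 0
        for _ in range(games):
            speaker, hearer = rng.sample(range(n_agents), 2)
            if not lexicons[speaker]:
                lexicons[speaker].add(next_word)   # speaker invents a word
                next_word += 1
            word = rng.choice(tuple(lexicons[speaker]))
            if rng.random() < p_err:               # learning error: mutation
                word = next_word
                next_word += 1
            if word in lexicons[hearer]:           # success: collapse lexicons
                lexicons[speaker] = {word}
                lexicons[hearer] = {word}
            else:                                  # failure: hearer memorizes
                lexicons[hearer].add(word)
        return lexicons

    lex = naming_game_with_errors()
    print("distinct words still held:", len(set().union(*lex)))
    ```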

  4. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    Science.gov (United States)

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull in only three out of 15 cases with statistical significance. The results are far more complex than what had heretofore been published, and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons based on their skewness and kurtosis values, although without statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
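
    A minimal sketch of the goodness-of-fit procedure described above, using scipy; the ACOF sample is simulated rather than measured, and the distribution parameters are fitted by maximum likelihood.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    acof = rng.lognormal(mean=np.log(0.45), sigma=0.15, size=100)  # fake data

    for name, dist in [("normal", stats.norm),
                       ("log-normal", stats.lognorm),
                       ("Weibull", stats.weibull_min)]:
        params = dist.fit(acof)                       # ML parameter estimates
        d, p = stats.kstest(acof, dist.cdf, args=params)
        print(f"{name:10s} KS D = {d:.3f}, p = {p:.3f}")
    ```

    Note that using parameters estimated from the same sample makes the nominal Kolmogorov-Smirnov p-values optimistic; a Lilliefors-type correction or a parametric bootstrap tightens them.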

  5. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  6. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and with proposed spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposing the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  7. Evaluation of SCS-CN method using a fully distributed physically based coupled surface-subsurface flow model

    Science.gov (United States)

    Shokri, Ali

    2017-04-01

    The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of natural connections between surface water and groundwater, historically, these processes have been studied separately. The current trend in hydrological distributed physically based model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about the ambiguity of the SCS-CN method in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalence of the SCS-CN solution to the DrainFlow model, which is a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable solution to runoff prediction through SCS-CN and DrainFlow, the variance between runoff predictions by the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued as a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation to CN and Ia.
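
    For reference, the lumped relation that the paper maps onto physically based parameters is the standard curve number runoff equation. A minimal sketch, with depths in mm and the conventional default Ia = 0.2S:

    ```python
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        """Direct runoff depth Q (mm) from storm rainfall P (mm) via SCS-CN.

        S  = 25400/CN - 254              potential maximum retention (mm)
        Ia = ia_ratio * S                initial abstraction (mm)
        Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
        """
        s = 25400.0 / cn - 254.0
        ia = ia_ratio * s
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # Example: a 60 mm storm on a catchment with CN = 75 yields ~14.5 mm
    # of direct runoff.
    print(round(scs_cn_runoff(60.0, 75), 1))
    ```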

  8. Calculation of Pressure Distribution at Rotary Body Surface with the Vortex Element Method

    Directory of Open Access Journals (Sweden)

    S. A. Dergachev

    2014-01-01

    Full Text Available The vortex element method allows the simulation of unsteady hydrodynamic processes in an incompressible medium, taking into account the evolution of the vortex sheet, including the deformation or motion of the body or parts of the structure. The software package MVE3D, based on the vortex element method, was developed for calculating such hydrodynamic characteristics. The vortex element (VE) in the program is a symmetrical vorton-cut, and a closed frame of vortons is used to satisfy the boundary conditions at the surface. With this software system, incompressible flow around a cylindrical body of elongation L/D = 13 with a front spherical blunting was modeled at an angle of attack of 10°. The distribution of the pressure coefficient along the upper and lower generatrices of the body surface was analyzed, and the calculated results were compared with known experimental results. Design schemes with different numbers of vorton frameworks were considered, and the VE radius was also varied. The calculations made it possible to establish the degree of surface discretization needed to produce results close to experiment. It was shown that adequate reproduction of the pressure distribution in the transition region between the spherical and cylindrical surfaces on the windward side requires a high degree of discretization. Based on these results, it may be necessary to improve the design scheme of the body surface, allowing a more accurate description of the flow vorticity in areas with abrupt changes in the geometry of the streamlined body.

  9. Classification error of the thresholded independence rule

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Fenger-Grøn, Morten; Jensen, Jens Ledet

    We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables we consider the thresholded independence rule. An upper bound on the classification error is established which is tailored...

  10. Surface complexation of selenite on goethite: MO/DFT geometry and charge distribution

    NARCIS (Netherlands)

    Hiemstra, T.; Rietra, R.P.J.J.; Riemsdijk, van W.H.

    2007-01-01

    The adsorption of selenite on goethite (alpha-FeOOH) has been analyzed with the charge distribution (CD) and the multi-site surface complexation (MUSIC) model being combined with an extended Stern (ES) layer model option. The geometry of a set of different types of hydrated iron-selenite complexes

  11. Estimation of the measurement error of eccentrically installed orifice plates

    Energy Technology Data Exchange (ETDEWEB)

    Barton, Neil; Hodgkinson, Edwin; Reader-Harris, Michael

    2005-07-01

    The presentation discusses methods for simulation and estimation of flow measurement errors. The main conclusions are: Computational Fluid Dynamics (CFD) simulation methods and published test measurements have been used to estimate the error of a metering system over a period when its orifice plates were eccentric and when leaking O-rings allowed some gas to bypass the meter. It was found that plate eccentricity effects would result in errors of between -2% and -3% for individual meters. Validation against test data suggests that these estimates of error should be within 1% of the actual error, but it is unclear whether the simulations over-estimate or under-estimate the error. Simulations were also run to assess how leakage at the periphery affects the metering error. Various alternative leakage scenarios were modelled and it was found that the leakage rate has an effect on the error, but that the leakage distribution does not. Correction factors, based on the CFD results, were then used to predict the system's mis-measurement over a three-year period.

  12. Edge maps: Representing flow with bounded error

    KAUST Repository

    Bhatia, Harsh

    2011-03-01

    Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Many analysis techniques rely on computing streamlines, a task often hampered by numerical instabilities. Approaches that ignore the resulting errors can lead to inconsistencies that may produce unreliable visualizations and ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with linear maps defined on its boundary. This representation, called edge maps, is equivalent to computing all possible streamlines at a user defined error threshold. In spite of this error, all the streamlines computed using edge maps will be pairwise disjoint. Furthermore, our representation stores the error explicitly, and thus can be used to produce more informative visualizations. Given a piecewise-linear interpolated vector field, a recent result [15] shows that there are only 23 possible map classes for a triangle, permitting a concise description of flow behaviors. This work describes the details of computing edge maps, provides techniques to quantify and refine edge map error, and gives qualitative and visual comparisons to more traditional techniques. © 2011 IEEE.

  13. Surface topography effects on energy-resolved polar angular distributions of electrons induced in heavy ion-Al collisions: experiments and models

    International Nuclear Information System (INIS)

    Mischler, J.; Banouni, M.; Banazeth, C.; Negre, M.; Benazeth, N.

    1986-01-01

    The influence of the surface topography on the polar angular distributions of secondary electrons emitted in Ar+ (and Xe+)-Al collisions was studied. After each set of experiments, the target surface was examined with a scanning electron microscope. Under normal incidence, the continuum background and Al L23VV Auger electron polar angular distributions were not modified by the topography and closely followed a cosine law. For Al L23MM Auger electrons, the experimental angular distributions as a function of the emission polar angle θ either were near a constant law or followed a decreasing law, depending on the irradiation conditions. The N(θ) curves calculated from the models showed that the isotropic angular distributions obtained for electrons generated outside the crystal from a flat surface could be strongly modified by the surface topography. (author)

  14. Speech error and tip-of-the-tongue diary for mobile devices

    Directory of Open Access Journals (Sweden)

    Michael S Vitevitch

    2015-08-01

    Full Text Available Collections of various types of speech errors have increased our understanding of the acquisition, production, and perception of language. Although such collections of naturally occurring language errors are invaluable for a number of reasons, the process of collecting various types of speech errors presents many challenges to the researcher interested in building such a collection, among them a significant investment of time and effort to obtain a sufficient number of examples to enable statistical analysis. Here we describe a freely accessible website (http://spedi.ku.edu that helps users document slips of the tongue, slips of the ear, and tip of the tongue states that they experience firsthand or observe in others. The documented errors are amassed, and made available for other users to analyze, thereby distributing the time and effort involved in collecting errors across a large number of individuals instead of saddling the lone researcher, and facilitating distribution of the collection to other researchers. This approach also addresses some issues related to data curation that hampered previous error collections, and enables the collection to continue to grow over a longer period of time than previous collections. Finally, this web-based tool creates an opportunity for language scientists to engage in outreach efforts to increase the understanding of language disorders and research in the general public.

  15. Remote Sensing of Atlanta's Urban Sprawl and the Distribution of Land Cover and Surface Temperature

    Science.gov (United States)

    Laymon, Charles A.; Estes, Maurice G., Jr.; Quattrochi, Dale A.; Goodman, H. Michael (Technical Monitor)

    2001-01-01

    Between 1973 and 1992, an average of 20 ha of forest was lost each day to urban expansion of Atlanta, Georgia. Urban surfaces have very different thermal properties than natural surfaces, storing solar energy throughout the day and continuing to release it as sensible heat well after sunset. The resulting heat island effect serves as a catalyst for chemical reactions involving vehicular exhaust and industrial emissions, leading to a deterioration in air quality. In this study, high spatial resolution multispectral remote sensing data has been used to characterize the type, thermal properties, and distribution of land surface materials throughout the Atlanta metropolitan area. Ten-meter data were acquired with the Advanced Thermal and Land Applications Sensor (ATLAS) on May 11 and 12, 1997. ATLAS is a 15-channel multispectral scanner that incorporates the Landsat TM bands with additional bands in the middle reflective infrared and thermal infrared range. The high spatial resolution permitted discrimination of discrete surface types (e.g., concrete, asphalt), individual structures (e.g., buildings, houses) and their associated thermal characteristics. There is a strong temperature contrast between vegetation and anthropomorphic features. Vegetation has a modal temperature at about 20 C, whereas asphalt shingles, pavement, and buildings have a modal temperature of about 39 C. Broad-leaf vegetation classes are indistinguishable on a thermal basis alone. There is slightly more variability (+/-5 C) among the urban surfaces. Grasses, mixed vegetation and mixed urban surfaces are intermediate in temperature and are characterized by broader temperature distributions with modes of about 29 C. Thermal maps serve as a basis for understanding the distribution of "hotspots", i.e., how landscape features and urban fabric contribute the most heat to the lower atmosphere.

  16. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  17. Comparison of two surface temperature measurement using thermocouples and infrared camera

    Directory of Open Access Journals (Sweden)

    Michalski Dariusz

    2017-01-01

    Full Text Available This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were the contact method, in which thermocouples were mounted at several points in one minichannel, and the contactless method applied to the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, taking into account the method error, the method accuracy and the experimental error. The comparative analysis showed that although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.

  18. Fourier and granulometry methods on 3D images of soil surfaces for evaluating soil aggregate size distribution

    DEFF Research Database (Denmark)

    Jensen, T.; Green, O.; Munkholm, Lars Juhl

    2016-01-01

    The goal of this research is to present and compare two methods for evaluating soil aggregate size distribution based on high resolution 3D images of the soil surface. The methods for analyzing the images are discrete Fourier transform and granulometry. The results of these methods correlate...... with a measured weight distribution of the soil aggregates. The results have shown that it is possible to distinguish between the cultivated and the uncultivated soil surface. A sensor system suitable for capturing in-situ high resolution 3D images of the soil surface is also described. This sensor system...

  19. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F*,max_disk ≡ V*,max_disk/V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  20. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross

  1. Recommended values of clean metal surface work functions

    International Nuclear Information System (INIS)

    Derry, Gregory N.; Kern, Megan E.; Worth, Eli H.

    2015-01-01

    A critical review of the experimental literature for measurements of the work functions of clean metal surfaces of single-crystals is presented. The tables presented include all results found for low-index crystal faces except cases that were known to be contaminated surfaces. These results are used to construct a recommended value of the work function for each surface examined, along with an uncertainty estimate for that value. The uncertainties are based in part on the error distribution for all measured work functions in the literature, which is included here. The metals included in this review are silver (Ag), aluminum (Al), gold (Au), copper (Cu), iron (Fe), iridium (Ir), molybdenum (Mo), niobium (Nb), nickel (Ni), palladium (Pd), platinum (Pt), rhodium (Rh), ruthenium (Ru), tantalum (Ta), and tungsten (W)

  2. Recommended values of clean metal surface work functions

    Energy Technology Data Exchange (ETDEWEB)

    Derry, Gregory N., E-mail: gderry@loyola.edu; Kern, Megan E.; Worth, Eli H. [Department of Physics, Loyola University Maryland, 4501 N. Charles St., Baltimore, Maryland 21210 (United States)

    2015-11-15

    A critical review of the experimental literature for measurements of the work functions of clean metal surfaces of single-crystals is presented. The tables presented include all results found for low-index crystal faces except cases that were known to be contaminated surfaces. These results are used to construct a recommended value of the work function for each surface examined, along with an uncertainty estimate for that value. The uncertainties are based in part on the error distribution for all measured work functions in the literature, which is included here. The metals included in this review are silver (Ag), aluminum (Al), gold (Au), copper (Cu), iron (Fe), iridium (Ir), molybdenum (Mo), niobium (Nb), nickel (Ni), palladium (Pd), platinum (Pt), rhodium (Rh), ruthenium (Ru), tantalum (Ta), and tungsten (W)

  3. Modeling of edge effect in subaperture tool influence functions of computer controlled optical surfacing.

    Science.gov (United States)

    Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min

    2016-12-20

    Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpiece, the TIF has a nonlinear removal behavior, which causes a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate for the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.

  4. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of analysis because they utilize approximate real number representations and pseudo random numbers in the analysis and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties, and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  5. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    Science.gov (United States)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H.-Y.; Ahlgrimm, M.; Bazile, E.; Berg, L. K.; Cheng, A.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Lee, W.-S.; Liu, Y.; Mellul, L.; Merryfield, W. J.; Qian, Y.; Roehrig, R.; Wang, Y.-C.; Xie, S.; Xu, K.-M.; Zhang, C.; Klein, S.; Petch, J.

    2018-03-01

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  6. Consonant acquisition: a first approach to the distribution of errors in four positions in the word

    Directory of Open Access Journals (Sweden)

    Silvia Llach

    2012-12-01

    Full Text Available The goal of this study is to describe the behavior of errors in two types of onsets (initial and intervocalic) and two types of codas (in the middle and at the end of the word) in order to determine if any of these positions are more prone to specific types of errors than the others. We have looked into the errors that are frequently produced in these four contexts during the acquisition of consonant sounds in the Catalan language. The data were taken from a study on the acquisition of consonants in Catalan, carried out on 90 children between the ages of 3 and 5 years from several kindergarten schools. The results show that there are characteristic errors depending on the position within the word.

  7. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  8. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4, respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3% (0 and 8%), respectively, when there is only prior flux error. The estimation errors increase to 36 and 94% (40 and 232%) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85% (29 and 201%). This result indicates that the estimation errors are dominated by the transport model error, can in fact cancel each other, and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion

  9. Tracking and shape errors measurement of concentrating heliostats

    Science.gov (United States)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-09-01

    In solar tower power plants, factors such as tracking accuracy, facet misalignment and surface shape errors of concentrating heliostats are of prime importance for the efficiency of the system. At industrial scale, one critical issue is the time and effort required to adjust the different mirrors of the faceted heliostats, which could take several months using current techniques. Thus, methods enabling quick adjustment of a field with a huge number of heliostats are essential for the rise of solar tower technology. This communication describes a new method for heliostat characterization that makes use of four cameras located near the solar receiver, simultaneously recording images of the sun reflected by the optical surfaces. From knowledge of a measured sun profile, data processing of the acquired images allows the slope and shape errors of the heliostats, including tracking and canting errors, to be reconstructed. The mathematical basis of this shape reconstruction process is explained comprehensively. Numerical simulations demonstrate that the measurement accuracy of this "backward-gazing method" is compliant with the requirements of solar concentrating optics. Finally, we present our first experimental results obtained at the THEMIS experimental solar tower plant in Targasonne, France.

  10. Study of temperature distribution of pipes heated by moving rectangular gauss distribution heat source. Development of pipe outer surface irradiated laser stress improvement process (L-SIP)

    International Nuclear Information System (INIS)

    Ohta, Takahiro; Kamo, Kazuhiko; Asada, Seiji; Terasaki, Toshio

    2009-01-01

    The new process called L-SIP (outer surface irradiated Laser Stress Improvement Process) has been developed to turn the tensile residual stress on the inner surface near the butt-welded joints of pipes into compressive stress. A temperature gradient arises through the pipe thickness when the outer surface is heated rapidly by a laser beam. Because of the difference in thermal expansion between the inner and outer surfaces, compressive stress develops near the inner surface of the pipe. In this paper, the theoretical equation for the temperature distribution of pipes heated by a moving rectangular Gaussian-distribution heat source on the outer surface is derived. The temperature histories of pipes calculated by the theoretical equation agree well with FEM analysis results. According to the theoretical equation, the controlling parameters of the temperature distributions and histories are q/2a_y, vh, a_x/h and a_y/h, where q is the total heat input, a_y is the heat source length in the axial direction, a_x is the Gaussian radius of the heat source in the hoop direction, v is the moving velocity, and h is the thickness of the pipe. The essential variables for L-SIP, which are defined on the basis of the measured temperature histories on the outer surface of the pipe, are Tmax, F_0 = kτ_0/h², vh, W_Q and L_Q, where Tmax is the maximum temperature at the monitoring point on the outer surface, k is the thermal diffusivity coefficient, τ_0 is the temperature rise time from 100°C to the maximum temperature at the monitoring point, W_Q is τ_0 × v, and L_Q is the uniform temperature length in the axial direction. It is verified that the essential variables for L-SIP match the controlling parameters given by the theoretical equation. (author)

  11. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    Science.gov (United States)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.

  12. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and with proposed spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposing the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  13. SOERP, Statistics and 2. Order Error Propagation for Function of Random Variables

    International Nuclear Information System (INIS)

    Cox, N. D.; Miller, C. F.

    1985-01-01

    1 - Description of problem or function: SOERP computes second-order error propagation equations for the first four moments of a function of independently distributed random variables. SOERP was written for a rigorous second-order error propagation of any function which may be expanded in a multivariable Taylor series, the input variables being independently distributed. The required input consists of numbers directly related to the partial derivatives of the function, evaluated at the nominal values of the input variables and the central moments of the input variables from the second through the eighth. 2 - Method of solution: The development of equations for computing the propagation of errors begins by expressing the function of random variables in a multivariable Taylor series expansion. The Taylor series expansion is then truncated, and statistical operations are applied to the series in order to obtain equations for the moments (about the origin) of the distribution of the computed value. If the Taylor series is truncated after powers of two, the procedure produces second-order error propagation equations. 3 - Restrictions on the complexity of the problem: The maximum number of component variables allowed is 30. The IBM version will only process one set of input data per run
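
    To make the second-order idea concrete, the single-variable case can be written out directly: truncating f(X) = f0 + a(X-μ) + b(X-μ)² with a = f'(μ) and b = f''(μ)/2 and applying expectation operators gives E[f] = f0 + b·m2 and Var[f] = a²m2 + 2ab·m3 + b²(m4 - m2²), where m2..m4 are the central moments of X. The sketch below is only this one-variable special case, not the multivariable, four-moment machinery of SOERP itself.

    ```python
    def second_order_moments(f0, fp, fpp, m2, m3, m4):
        """Mean and variance of f(X) from a second-order Taylor expansion.

        f0, fp, fpp : f and its first two derivatives at the mean of X.
        m2, m3, m4  : second through fourth central moments of X.
        """
        a, b = fp, fpp / 2.0
        mean = f0 + b * m2
        var = a * a * m2 + 2.0 * a * b * m3 + b * b * (m4 - m2 * m2)
        return mean, var

    # Example: f(x) = x**2 with X ~ Normal(3, 0.5**2); the expansion is exact
    # for a quadratic, so the results match E = 9.25, Var = 9.125 exactly.
    mu, s2 = 3.0, 0.25
    m2, m3, m4 = s2, 0.0, 3.0 * s2 * s2      # Gaussian central moments
    print(second_order_moments(mu**2, 2.0 * mu, 2.0, m2, m3, m4))
    ```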

  14. Detecting the Water-soluble Chloride Distribution of Cement Paste in a High-precision Way.

    Science.gov (United States)

    Chang, Honglei; Mu, Song

    2017-11-21

    To improve the accuracy of the chloride distribution along the depth of cement paste under cyclic wet-dry conditions, a new method is proposed to obtain a high-precision chloride profile. First, paste specimens are molded, cured, and exposed to cyclic wet-dry conditions. Then, powder samples at different specimen depths are ground when the exposure age is reached. Finally, the water-soluble chloride content is detected using a silver nitrate titration method, and chloride profiles are plotted. The key to improving the accuracy of the chloride distribution along the depth is to exclude the error in the powderization, which is the most critical step for testing the distribution of chloride. Based on the above concept, the grinding method in this protocol can be used to grind powder samples automatically layer by layer from the surface inward; a very thin grinding layer (less than 0.5 mm) can be obtained, with a minimum error of less than 0.04 mm. The chloride profile obtained by this method better reflects the chloride distribution in specimens, which helps researchers to capture distribution features that are often overlooked. Furthermore, this method can be applied to studies in the field of cement-based materials that require high chloride distribution accuracy.

  15. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of “Control and Observation” is used. A versatile multi-function laser interferometer serves as the observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error function measurements, and this map forms the basis of the error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
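
    A minimal sketch of the correction step, assuming the measured volumetric error map is available on a regular grid: the error vector at each commanded point is interpolated and subtracted before the point is emitted, in the spirit of a CNC postprocessor. The grid and error values below are invented placeholders, not data from the article.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical gridded error map: (ex, ey, ez) in mm, as measured by a
    # laser interferometer at the nodes of the machine workspace.
    xs = ys = zs = np.linspace(0.0, 500.0, 11)           # mm grid
    rng = np.random.default_rng(3)
    err_map = rng.normal(0.0, 0.01, (11, 11, 11, 3))     # ~10 um errors

    interp = RegularGridInterpolator((xs, ys, zs), err_map)

    def correct_point(p):
        """Compensated coordinate for a commanded point p (mm)."""
        return np.asarray(p) - interp(p)[0]

    print(correct_point([123.4, 250.0, 77.7]))
    ```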

  16. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
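
    A minimal sketch of the kind of error pattern described above: exponentially distributed (Poisson-process) gaps between bursts and Gaussian-distributed burst lengths, applied as a bit-flip mask to a data stream. Parameter values are illustrative, not those used in the project.

    ```python
    import numpy as np

    def burst_error_mask(n_bits, mean_gap=5000.0, burst_mean=8.0,
                         burst_sd=3.0, seed=0):
        """Boolean error mask with Poisson-spaced, Gaussian-length bursts."""
        rng = np.random.default_rng(seed)
        mask = np.zeros(n_bits, dtype=bool)
        pos = 0
        while True:
            pos += int(rng.exponential(mean_gap))    # gap to next burst
            if pos >= n_bits:
                break
            length = max(1, int(round(rng.normal(burst_mean, burst_sd))))
            mask[pos:pos + length] = True            # bits hit by this burst
            pos += length
        return mask

    data = np.random.default_rng(1).integers(0, 2, 1_000_000, dtype=np.uint8)
    mask = burst_error_mask(data.size)
    corrupted = data ^ mask.astype(np.uint8)         # inject the errors
    print("bit errors injected:", int(mask.sum()))
    ```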

  17. Estimation of spin contamination error in dissociative adsorption of Au2 onto MgO(0 0 1) surface: First application of approximate spin projection (AP) method to plane wave basis

    Science.gov (United States)

    Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo

    2018-06-01

    Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
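    For reference, the approximate spin projection (AP) correction of Yamaguchi and co-workers, to which the title refers, is commonly written as follows; the abstract itself does not reproduce the working equations.

```latex
E_{\mathrm{AP}} = E_{\mathrm{BS}} + \alpha \left( E_{\mathrm{BS}} - E_{\mathrm{HS}} \right),
\qquad
\alpha = \frac{\langle \hat{S}^{2} \rangle_{\mathrm{BS}}}
              {\langle \hat{S}^{2} \rangle_{\mathrm{HS}} - \langle \hat{S}^{2} \rangle_{\mathrm{BS}}}
```

    Here BS denotes the spin-contaminated broken-symmetry (low-spin) solution and HS the high-spin solution; the spin contamination error quoted above corresponds to the difference between the corrected and uncorrected energies.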

  18. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. those based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments, where the mechanical response of a material volume (rather than just its surface) is studied and it is vital that no perturbation is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined with high precision, accurately measuring elastic strains and estimating the errors involved remain significant challenges. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.
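    The abstract does not spell out the empirical combination rule; a common choice, assumed in the sketch below, is a root-sum-square combination of independent error contributions.

```python
import numpy as np

def combined_strain_error(contributions):
    """Root-sum-square combination of independent strain-error terms
    (an assumed form of the empirical rule; the paper's rule may differ)."""
    return np.sqrt(np.sum(np.square(contributions)))

# Illustrative strain-error contributions from, e.g., detector-distance,
# detector-tilt, and beam-position uncertainties, each estimated by
# perturbing the simulated experimental geometry:
print(combined_strain_error([1.2e-4, 0.8e-4, 0.5e-4]))  # ~1.5e-4
```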

  19. Remote Sensing of Atlanta's Urban Sprawl and the Distribution of Land Cover and Surface Temperatures

    Science.gov (United States)

    Laymon, Charles A.; Estes, Maurice G., Jr.; Quattrochi, Dale A.; Arnold, James E. (Technical Monitor)

    2001-01-01

    Between 1973 and 1992, an average of 20 ha of forest was lost each day to the urban expansion of Atlanta, Georgia. Urban surfaces have very different thermal properties than natural surfaces, storing solar energy throughout the day and continuing to release it as sensible heat well after sunset. The resulting heat island effect acts as a catalyst for chemical reactions involving vehicular exhaust and industrial emissions, leading to a deterioration in air quality. In this study, high-spatial-resolution multispectral remote sensing data were used to characterize the type, thermal properties, and distribution of land surface materials throughout the Atlanta metropolitan area. Ten-meter data were acquired with the Advanced Thermal and Land Applications Sensor (ATLAS) on May 11 and 12, 1997. ATLAS is a 15-channel multispectral scanner that incorporates the Landsat TM bands with additional bands in the middle reflective infrared and thermal infrared range. The high spatial resolution permitted discrimination of discrete surface types (e.g., concrete, asphalt) and individual structures (e.g., buildings, houses) and their associated thermal characteristics. There is a strong temperature contrast between vegetation and anthropogenic features: vegetation has a modal temperature of about 20 C, whereas asphalt shingles, pavement, and buildings have modal temperatures of about 39 C. Broad-leaf vegetation classes are indistinguishable on a thermal basis alone. There is slightly more variability (plus or minus 5 C) among the urban surfaces. Grasses, mixed vegetation and mixed urban surfaces are intermediate in temperature and are characterized by broader temperature distributions with modes of about 29 C. Thermal maps serve as a basis for understanding the distribution of "hotspots", i.e., how landscape features and the urban fabric contribute the most heat to the lower atmosphere.

  20. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.

  1. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  2. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    OpenAIRE

    Jee-In Hwang, PhD; Jeonghoon Ahn, PhD

    2015-01-01

    Purpose: To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. Methods: The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring…

  3. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, over a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated at the same points and times to make comparisons possible. For a realistic modeling of the world-line uncertainties, the estimated S-errors proved to be smaller than the U-errors, which shows that the approach based on the assumption that Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach, which simplifies numerical calculations, to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time of an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non-symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  4. Bayesian network models for error detection in radiotherapy plans

    International Nuclear Information System (INIS)

    Kalet, Alan M; Ford, Eric C; Phillips, Mark H; Gennari, John H

    2015-01-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors; here we report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters given a set of initial clinical information; a low probability in a propagated network then corresponds to a potential error to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies, and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation-oncology clinical information database system. These data represent 4990 unique prescription cases over a 5-year period. Under test-case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network with human expert performance (AUC of 0.90 ± 0.01) shows that the Bayesian network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures. (paper)
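    A minimal sketch of the flagging idea follows (this is not the authors' Hugin-based implementation): a conditional probability table learned from past plans estimates P(parameter | clinical context), and low-probability combinations are flagged for review. All names and numbers are illustrative.

```python
from collections import Counter, defaultdict

counts = defaultdict(Counter)   # context -> Counter of observed parameters

def learn(cases):
    for context, parameter in cases:          # e.g. ("lung_stage_I", "60Gy/30fx")
        counts[context][parameter] += 1

def flag(context, parameter, threshold=0.01):
    """True when the parameter is improbably rare for this context."""
    total = sum(counts[context].values())
    p = counts[context][parameter] / total if total else 0.0
    return p < threshold

learn([("lung_stage_I", "60Gy/30fx")] * 199 + [("lung_stage_I", "45Gy/3fx")])
print(flag("lung_stage_I", "45Gy/3fx"))       # rare pairing -> True, review it
```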

  5. Decoy-state quantum key distribution with both source errors and statistical fluctuations

    International Nuclear Information System (INIS)

    Wang Xiangbin; Yang Lin; Peng Chengzhi; Pan Jianwei

    2009-01-01

    We show how to faithfully calculate the fraction of single-photon counts in 3-intensity decoy-state quantum cryptography with both statistical fluctuations and source errors. Our results rely only on bounds on a few parameters of the states of the pulses.

  6. State-independent error-disturbance trade-off for measurement operators

    International Nuclear Information System (INIS)

    Zhou, S.S.; Wu, Shengjun; Chau, H.F.

    2016-01-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions – one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.

  7. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
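    The core of such a simulation is compact enough to sketch; the parameter values below are hypothetical, and event overlap is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(obs_len=600.0, interval=10.0, n_events=10, event_dur=15.0):
    """Compare interval-sampling estimates with the true event proportion."""
    starts = rng.uniform(0.0, obs_len - event_dur, n_events)
    edges = np.arange(0.0, obs_len, interval)

    def time_in(t0, t1):   # total event time inside [t0, t1]
        return np.clip(np.minimum(starts + event_dur, t1)
                       - np.maximum(starts, t0), 0.0, None).sum()

    per_interval = np.array([time_in(t, t + interval) for t in edges])
    return {
        "true": n_events * event_dur / obs_len,
        "MTS": np.mean([np.any((starts <= t) & (t <= starts + event_dur))
                        for t in edges]),          # momentary time sampling
        "PIR": np.mean(per_interval > 0.0),        # partial-interval recording
        "WIR": np.mean(per_interval >= interval),  # whole-interval recording
    }

print(simulate())  # PIR tends to overestimate, WIR to underestimate
```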

  8. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and to seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993; all are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison results, and the distributions of random and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results show that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge, with a correlation coefficient of 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge; the precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.

  9. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Forecast combination has proven to be a very important technique for obtaining accurate predictions in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behavior for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as the simple average, least squares regression, or those based on the variance-covariance of the forecasts, may perform very poorly: outliers tend to occur, giving these methods unstable weights and leading to non-robust forecasts. To address this problem, we propose two nonparametric forecast combination methods. One is designed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student's t-distribution; the other is designed for more general situations in which there is a lack of strong or consistent evidence on the tail behavior of the forecast errors, due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds for both methods are developed; they show that the resulting combined forecasts yield near-optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance: they indeed tend to have significantly smaller prediction errors than previous combination methods in the presence of forecast outliers.

  10. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    Science.gov (United States)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometric measurement is becoming easier to apply as low-frequency heterodyne acousto-optic modulators replace complex electro-mechanical scanning devices. Because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer, the optical element surface can be acquired directly by synchronously detecting the received signal phase of each pixel. Instead of the traditional four-step phase-shifting calculation, Fourier spectral analysis is used for phase extraction, which is less sensitive to sources of uncertainty and gives higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described, and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors; frequency mixing error and beat frequency error are two unavoidable kinds. The effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and these errors is calculated. The tolerances of the extinction ratio of the polarization splitting prism and of the signal-to-noise ratio of stray light are given. The phase extraction error caused by beat frequency shifting in the Fourier analysis is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window corrects the heterodyne signal phase extraction. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
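    The per-pixel phase extraction can be sketched as below; the sampling and beat-frequency values are illustrative, not the paper's.

```python
import numpy as np

fs, f_beat, n = 102_400.0, 1_000.0, 4096     # assumed sampling/beat settings
t = np.arange(n) / fs
pixel_signal = np.cos(2 * np.pi * f_beat * t + 0.7)   # true phase: 0.7 rad

# Fourier spectral analysis with a Hanning window: the phase of the
# spectral peak is the interferometric phase of this pixel.
spectrum = np.fft.rfft(pixel_signal * np.hanning(n))
peak = np.argmax(np.abs(spectrum))
print(np.angle(spectrum[peak]))   # ~0.7 rad; a shifted (off-bin) beat
                                  # frequency leaks power and biases this value
```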

  11. Solutions on a high-speed wide-angle zoom lens with aspheric surfaces

    Science.gov (United States)

    Yamanashi, Takanori

    2012-10-01

    Recent developments in CMOS and digital camera technology have accelerated the business and market share of digital cinematography. In terms of optical design, this technology has increased the need to carefully consider the pixel pitch and characteristics of the imager. When the field angle at the wide end, the zoom ratio, and the F-number are specified, choosing an appropriate zoom lens type is crucial; in addition, appropriate power distributions and lens configurations are required. Near the wide end of a zoom lens, an aspheric surface is known to be an effective means of correcting off-axis aberrations. On the other hand, optical designers have to consider the manufacturability of aspheric surfaces and perform the required analysis of the surface shape. Centration errors aside, it is also important to know the sensitivity to aspheric shape errors and their effect on image quality. In this paper, wide-angle cine zoom lens design examples are introduced and their main characteristics are described. Moreover, technical challenges are pointed out and solutions are proposed.

  12. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, a realistic statistical error estimate is available for the geoid component, while the error description of the altimetric component remains an open issue and is, if addressed at all, treated empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also accounting for all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or neglecting covariances on the MDT errors and their correlations. When an MDT is derived, it is spectrally filtered up to a certain maximum degree, usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the filter process must be integrated consistently into the covariance propagation, and its impact quantified. The study is performed for MDT estimates in specific test areas of particular oceanographic interest.
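    If the derived quantities are (locally) linear functionals of the spherical harmonic coefficients, the propagation itself is one line of linear algebra; the sketch below uses stand-in matrices.

```python
import numpy as np

def propagate(A, C):
    """Rigorous covariance propagation C_y = A C A^T for y = A x,
    keeping all correlations of the geoid VCM C (stand-in matrices here)."""
    return A @ C @ A.T

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 10))      # linearized functional (e.g. velocities)
C = np.eye(10) * 1e-6             # placeholder geoid variance-covariance
print(np.sqrt(np.diag(propagate(A, C))))   # 1-sigma errors of the derived y
```

    A spectral filter applied to the MDT corresponds to an extra linear operator F, which enters the propagation the same way: C_y = (F A) C (F A)^T.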

  13. Phase Error Caused by Speed Mismatch Analysis in the Line-Scan Defect Detection by Using Fourier Transform Technique

    Directory of Open Access Journals (Sweden)

    Eryi Hu

    2015-01-01

    The phase error caused by speed mismatch is investigated for line-scan image capture in 3D profile measurement. The experimental system consists of a line-scan CCD camera, an object-moving device, a digital fringe pattern projector, and a personal computer. In the experimental procedure, the detected object moves relative to the image capturing system on a motorized translation stage at a constant velocity. The digital fringe pattern is projected onto the detected object, and the deformed patterns are captured and recorded in the computer; the object surface profile can then be calculated by Fourier transform profilometry. However, a moving-speed mismatch error will remain in most engineering applications even after calibration of the imaging system. When the moving speed of the detected object is faster than expected, the captured image is compressed along the moving direction of the object. In order to overcome this kind of measurement error, an image recovery algorithm is proposed to reconstruct the original, uncompressed image, so that phase values can be extracted much more accurately from the reconstructed images. The phase error distribution caused by the speed mismatch is then analyzed by simulation and experiment.
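    The recovery step amounts to re-stretching the image along the scan direction; a sketch under the assumption that axis 0 is the moving direction:

```python
import numpy as np
from scipy.ndimage import zoom

def recover(image, v_actual, v_expected):
    """Undo the along-track compression caused by a speed mismatch.

    An object moving faster than expected by v_actual / v_expected
    compresses the line-scan image by the same factor along axis 0
    (assumed scan direction); re-stretching restores the sampling
    before the Fourier-transform phase analysis.
    """
    factor = v_actual / v_expected
    return zoom(image, (factor, 1.0), order=3)   # cubic interpolation
```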

  14. Distribution of near-surface permafrost in Alaska: estimates of present and future conditions

    Science.gov (United States)

    Pastick, Neal J.; Jorgenson, M. Torre; Wylie, Bruce K.; Nield, Shawn J.; Johnson, Kristofer D.; Finley, Andrew O.

    2015-01-01

    High-latitude regions are experiencing rapid and extensive changes in ecosystem composition and function as a result of increases in average air temperature. Increasing air temperatures have led to widespread thawing and degradation of permafrost, which in turn has affected ecosystems, socioeconomics, and the carbon cycle of high latitudes. Here we overcome complex interactions among surface and subsurface conditions to map near-surface permafrost through decision and regression tree approaches that statistically and spatially extend field observations using remotely sensed imagery, climatic data, and thematic maps of a wide range of surface and subsurface biophysical characteristics. The data fusion approach generated medium-resolution (30-m pixel) maps of near-surface (within 1 m) permafrost, active-layer thickness, and associated uncertainty estimates throughout mainland Alaska. Our calibrated models (overall test accuracy of ~85%) were used to quantify changes in permafrost distribution under varying future climate scenarios, assuming no other changes in biophysical factors. The models indicate that near-surface permafrost underlies 38% of mainland Alaska and will disappear on 16 to 24% of the landscape by the end of the 21st century. Simulations suggest that near-surface permafrost degradation is more probable in central regions of Alaska than in more northerly regions. Taken together, these results have obvious implications for the potential remobilization of frozen soil carbon pools under warmer temperatures. Additionally, warmer and drier conditions may increase fire activity and severity, which may exacerbate rates of permafrost thaw and carbon remobilization relative to climate alone. The mapping of permafrost distribution across Alaska is important for land-use planning, environmental assessments, and a wide array of geophysical studies.

  15. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    Science.gov (United States)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    Two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor systems were designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays of corresponding wavelengths. We performed operational and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnects. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model with Gaussian-distributed random noise in the transmission, we developed a method to compute the BER directly from the digital eye diagrams. Direct measurements were also taken on a standard BER tester for verification; the results of the two methods agreed within the same order of magnitude and to within 50%. The integrated interconnect was also investigated in an optoelectronic processing architecture, a digital halftoning image processor. Error-diffusion networks implemented through the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
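    Under the Gaussian-noise model named above, the standard eye-diagram estimate uses the Q-factor; the sketch below applies it with illustrative eye levels.

```python
import numpy as np
from scipy.special import erfc

def ber_from_eye(mu1, mu0, sigma1, sigma0):
    """BER from eye-diagram statistics under a Gaussian noise model:
    Q = (mu1 - mu0) / (sigma1 + sigma0),  BER = 0.5 * erfc(Q / sqrt(2))."""
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * erfc(q / np.sqrt(2.0))

print(ber_from_eye(1.0, 0.0, 0.08, 0.06))   # illustrative levels -> ~5e-13
```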

  16. Seasonal and spatial distribution of metals in surface sediment of an urban estuary

    International Nuclear Information System (INIS)

    Buggy, Conor J.; Tobin, John M.

    2008-01-01

    Aquatic pollution by metals is of concern because of their various toxic effects on marine life. The Tolka Estuary, Co. Dublin, Ireland, is a typical Irish urban estuary with a significant metal loading originating from the urban environment. Results of a 25-month analysis of the spatial and temporal distributions of cadmium, copper, lead and zinc over 10 sample locations in this estuary are presented in this paper. Metal concentrations were analysed using differential pulse polarography. Significant seasonal and spatial trends in metal distribution were observed over the 25 months: sediment metal concentrations gradually increased (30-120%) in spring to a maximum at the end of summer, followed by a decrease in the winter months (30-60%). Sediment organic matter (OM) concentrations exhibited similar seasonal trends, and a positive correlation between the OM and metal distributions was observed, implying that OM influenced the metal distributions over time. - Assessment and correlation of the seasonal and spatial distribution of metals and organic matter in surface sediment of an urban estuary

  17. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors made earlier in plant history may also affect the severity of a disturbance, e.g. if they disable safety-related equipment. Common cause and other dependent failures of safety systems in particular may contribute significantly to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors that have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognizable in connection with maintenance were sought by reviewing about 4400 failure and repair reports and some special reports covering two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since they are generally the most serious. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and thoroughly analysing the errors; a simpler treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states, covering inter alia: in which operational state the errors were committed and detected; in which operational and working conditions they were detected; and which component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  18. Space charge and magnet error simulations for the SNS accumulator ring

    International Nuclear Information System (INIS)

    Beebe-Wang, J.; Fedotov, A.V.; Wei, J.; Machida, S.

    2000-01-01

    The effects of space charge forces and magnet errors in the beam of the Spallation Neutron Source (SNS) accumulator ring are investigated. In this paper, the focus is on the emittance growth and halo/tail formation in the beam due to space charge with and without magnet errors. The beam properties of different particle distributions resulting from various injection painting schemes are investigated. Different working points in the design of SNS accumulator ring lattice are compared. The simulations in close-to-resonance condition in the presence of space charge and magnet errors are presented. (author)

  19. Efficacy of surface error corrections to density functional theory calculations of vacancy formation energy in transition metals.

    Science.gov (United States)

    Nandi, Prithwish Kumar; Valsakumar, M C; Chandra, Sharat; Sahu, H K; Sundar, C S

    2010-09-01

    We calculate properties such as the equilibrium lattice parameter, bulk modulus and monovacancy formation energy for nickel (Ni), iron (Fe) and chromium (Cr) using Kohn-Sham density functional theory (DFT). We compare the relative performance of the local density approximation (LDA) and the generalized gradient approximation (GGA) in predicting these physical properties for these metals. We also make a comparative study of two different flavors of GGA exchange-correlation functional, namely PW91 and PBE. These calculations show a discrepancy between the DFT results and experimental data. In order to understand this discrepancy in the vacancy formation energy, we introduce a correction for the intrinsic surface error of the exchange-correlation functional, using the scheme implemented by Mattsson et al (2006 Phys. Rev. B 73 195123), and compare the effectiveness of the correction scheme for Al and the 3d transition metals.

  20. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  1. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  2. Errors due to random noise in velocity measurement using incoherent-scatter radar

    Directory of Open Access Journals (Sweden)

    P. J. S. Williams

    1996-12-01

    The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After corrections for the effects of convolution during transmission and reception and for the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.

  3. Effect of antimony nano-scale surface-structures on a GaSb/AlAsSb distributed Bragg reflector

    International Nuclear Information System (INIS)

    Husaini, S.; Shima, D.; Ahirwar, P.; Rotter, T. J.; Hains, C. P.; Dang, T.; Bedford, R. G.; Balakrishnan, G.

    2013-01-01

    Effects of antimony crystallization on the surface of GaSb during low temperature molecular beam epitaxy growth are investigated. The geometry of these structures is studied via transmission electron and atomic force microscopies, which show the surface metal forms triangular-shaped, elongated nano-wires with a structured orientation composed entirely of crystalline antimony. By depositing antimony on a GaSb/AlAsSb distributed Bragg reflector, the field is localized within the antimony layer. Polarization dependent transmission measurements are carried out on these nano-structures deposited on a GaSb/AlAsSb distributed Bragg reflector. It is shown that the antimony-based structures at the surface favor transmission of light polarized perpendicular to the wires.

  4. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies coincide and (ii) when multiple bit errors occur. Both cases have recently been addressed by the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
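    The basic PC idea, and its failure mode (i), can be sketched as follows; the validity callback is a stand-in for whatever error-detection code the protocol carries, and this is an illustrative sketch rather than Chakraborty's exact algorithm.

```python
from itertools import product

def packet_combine(copy_a, copy_b, is_valid):
    """Locate candidate error bits by XOR-ing two erroneous copies, then
    try all corrections at the differing positions."""
    diff = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
    for choice in product((0, 1), repeat=len(diff)):
        candidate = list(copy_a)
        for take_b, i in zip(choice, diff):
            candidate[i] = copy_b[i] if take_b else copy_a[i]
        if is_valid(candidate):
            return candidate
    return None   # failure mode (i): coinciding errors leave no difference
```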

  5. Spatial distribution of potential near surface moisture flux at Yucca Mountain

    International Nuclear Information System (INIS)

    Flint, A.L.; Flint, L.E.

    1994-01-01

    An estimate of the areal distribution of present-day surface liquid moisture flux at Yucca Mountain was made using field-measured water contents and laboratory-measured rock properties. Using available data for the physical and hydrologic properties (porosity, saturated hydraulic conductivity, moisture retention functions) of the volcanic rocks, surface lithologic units that are hydrologically similar were delineated. Moisture retention and relative permeability functions were assigned to each surface unit based on the similarity of the mean porosity and saturated hydraulic conductivity of the surface unit to laboratory samples of the same lithology. The potential flux into the mountain was estimated for each surface hydrologic unit using the mean saturated hydraulic conductivity for each unit and assuming all matrix flow. Using measured moisture profiles for each of the surface units, estimates were made of the depth at which seasonal fluctuations diminish and steady-state downward flux conditions are likely to exist. The hydrologic properties at that depth were used, with the current relative saturation of the tuff, to estimate the flux as the unsaturated hydraulic conductivity. This method assumes a unit gradient. Estimated fluxes ranged from 0.02 mm/yr for the welded Tiva Canyon to 13.4 mm/yr for the nonwelded Paintbrush Tuff; the areally averaged flux was 1.4 mm/yr. The major zones of high flux occur to the north of the potential repository boundary, where the nonwelded tuffs are exposed in the major drainages

  6. Spatial distribution of potential near surface moisture flux at Yucca Mountain

    International Nuclear Information System (INIS)

    Flint, A.L.; Flint, L.E.

    1994-01-01

    An estimate of the areal distribution of present-day surface liquid moisture flux at Yucca Mountain was made using field-measured water contents and laboratory-measured rock properties. Using available data for the physical and hydrologic properties (porosity, saturated hydraulic conductivity, moisture retention functions) of the volcanic rocks, surface lithologic units that are hydrologically similar were delineated. Moisture retention and relative permeability functions were assigned to each surface unit based on the similarity of the mean porosity and saturated hydraulic conductivity of the surface unit to laboratory samples of the same lithology. The potential flux into the mountain was estimated for each surface hydrologic unit using the mean saturated hydraulic conductivity for each unit and assuming all matrix flow. Using measured moisture profiles for each of the surface units, estimates were made of the depth at which seasonal fluctuations diminish and steady-state downward flux conditions are likely to exist. The hydrologic properties at that depth were used, with the current relative saturation of the tuff, to estimate the flux as the unsaturated hydraulic conductivity. This method assumes a unit gradient. Estimated fluxes ranged from 0.02 mm/yr for the welded Tiva Canyon to 13.4 mm/yr for the nonwelded Paintbrush Tuff; the areally averaged flux was 1.4 mm/yr. The major zones of high flux occur to the north of the potential repository boundary, where the nonwelded tuffs are exposed in the major drainages
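    The unit-gradient assumption reduces Darcy's law for vertical unsaturated flow to a particularly simple flux estimate:

```latex
q = K(\theta)\,\left|\frac{dH}{dz}\right|,
\qquad
\left|\frac{dH}{dz}\right| = 1
\;\Longrightarrow\;
q = K(\theta)
```

    That is, the unsaturated hydraulic conductivity evaluated at the measured water content is itself the downward flux estimate.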

  7. Comparison of the bidirectional reflectance distribution function of various surfaces

    International Nuclear Information System (INIS)

    Fernandez, R.; Seasholtz, R.G.; Oberle, L.G.; Kadambi, J.R.

    1989-01-01

    This paper describes the development and use of a system to measure the bidirectional reflectance distribution function (BRDF) of various surfaces. The BRDF measurements are to be used in the analysis and design of optical measurement systems such as laser anemometers. An Ar-ion laser (514 nm) was the light source. Preliminary results are presented for eight samples: two glossy black paints, two flat black paints, black glass, sand-blasted Al, unworked Al, and a white paint. A BaSO4 white reflectance standard was used as the reference sample throughout the tests. 8 refs

  8. Error analysis of the phase-shifting technique when applied to shadow moire

    International Nuclear Information System (INIS)

    Han, Changwoon; Han Bongtae

    2006-01-01

    An exact solution for the intensity distribution of shadow moire fringes produced by a broad spectrum light is presented. A mathematical study quantifies errors in fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of resolution enhancement offered by the phase-shifting technique

  9. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors during training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation, and researchers have therefore proposed robotic therapy algorithms that amplify movement errors rather than decrease them. To date, however, no study has analyzed with precision which training strategy is the most appropriate for learning an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively, and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) haptic guidance: errors were eliminated by passively moving the limbs; (ii) no guidance: no robot disturbances were presented; (iii) error amplification: existing errors were amplified with repulsive forces; (iv) noise disturbance: errors were evoked intentionally with a randomly varying force disturbance added on top of the no-guidance strategy. Additionally, the activation of four lower-limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limited muscle activation during training and resulted in poor learning gains. Adding random disturbing forces during training seems to increase attention and thereby improve motor learning. Error amplification appears to be the most suitable strategy for initially less skilled subjects, perhaps because these subjects could better detect their errors and correct them.

  10. Analysing the impact of reflectance distributions and well geometries on vertical surface daylight levels in atria for overcast skies

    Energy Technology Data Exchange (ETDEWEB)

    Du, Jiangtao; Sharples, Steve [School of Architecture, University of Sheffield, Crookesmoor Building, Conduit Road, Sheffield S10 1FL (United Kingdom)

    2010-07-15

    This study investigated the impact of different diffuse reflectance distributions and well geometries on vertical daylight factors and vertical internally reflected components in atria. Two patterns of wall-surface reflectance distribution were examined: horizontal and vertical reflectance band variation. The square atrium models studied cover a wide well index (WI) range of 0.25-2.0, representing shallow, medium and high atria. Radiance, a powerful package based on a backward ray-tracing technique, was used to simulate the vertical daylight levels. The results show that different reflectance distributions on square atrium walls do affect the vertical daylight factors and vertical internally reflected components under overcast sky conditions, and that the effect depends on the orientation of the bands of differing reflectance on the wall. Compared with vertical banding, horizontal banding has a much more complicated effect: the horizontal distribution of reflectance significantly affects vertical daylight levels at locations above 30% of the atrium height on the wall, and for an atrium with a height more than half its width the effect tends to increase with increasing well index. The vertical distribution of reflectance, by contrast, has no substantial effect on vertical daylight levels in atria except for some special reflectance distribution patterns. (author)

  11. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In addition, a combination of component failure and human error is often found at spectacular events. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurized water reactors that human error must not be underestimated. Although operator errors, as one form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  12. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two errors common in inverse treatment-planning optimization have been investigated. The first is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of these errors, their importance relative to other errors, and their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, stochastic simulated annealing and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimizations the optimizer convergence errors are rather small, especially compared with other convergence errors, e.g., those due to the inaccuracy of current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors; even in cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and the convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization

  13. Principal Stratification in sample selection problems with non normal error terms

    DEFF Research Database (Denmark)

    Rocci, Roberto; Mellace, Giovanni

    The aim of the paper is to relax the distributional assumptions on the error terms, often imposed in parametric sample selection models to estimate causal effects, when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribution of the error terms, and we illustrate the approach with an application to the Job Corps training program.

  14. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysing the full range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that the errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties, and applied this model to clinical trials. We identified 35 randomized stroke trials that met the inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for the "shift" and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required when classification uncertainty is taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested (e.g. mRS 1: 6.8%±2.89), and rose with any decrease in assessment reliability. The resultant errors need to be considered, since the sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
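    The error calculation described above can be sketched by passing an outcome distribution through an inter-rater confusion ("noise") matrix; all numbers below are illustrative placeholders, not the published distributions.

```python
import numpy as np

p_mrs = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.15, 0.05])   # mRS 0..6
confusion = np.full((7, 7), 0.02) + np.eye(7) * 0.86           # rows sum to 1

joint = p_mrs[:, None] * confusion        # P(true = i, rated = j)
shift_error = 1.0 - np.trace(joint)       # any misclassification hurts "shift"
cut = 1                                   # dichotomize at mRS <= 1
dichot_error = (joint[:cut + 1, cut + 1:].sum()
                + joint[cut + 1:, :cut + 1].sum())   # only cut-crossing errors
print(shift_error, dichot_error)          # 0.12 vs 0.055 in this toy case
```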

  15. Bidirectional reflectance distribution function modeling of one-dimensional rough surface in the microwave band

    International Nuclear Information System (INIS)

    Guo Li-Xin; Gou Xue-Yin; Zhang Lian-Bo

    2014-01-01

    In this study, the bidirectional reflectance distribution functions (BRDF) of a one-dimensional conducting rough surface and of a dielectric rough surface are calculated at different frequencies and roughness values in the microwave band using the method of moments, and the relationship between the bistatic scattering coefficient and the BRDF of a rough surface is expressed. Starting from the theoretical parameterization of the rough-surface BRDF, the parameters are obtained with a genetic algorithm, and the BRDF of the rough surface is recalculated using the fitted parameter values. The fitted values and theoretical calculations of the BRDF are then compared, and the optimization results agree with the theoretical calculations. The proposed method thus provides a reference for BRDF modeling of a Gaussian rough surface in the microwave band.
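    The parameter-retrieval step can be sketched with a global optimizer; here SciPy's differential evolution stands in for the paper's genetic algorithm, and the BRDF model form is illustrative only.

```python
import numpy as np
from scipy.optimize import differential_evolution

angles = np.deg2rad(np.arange(0, 85, 5))
measured = 0.3 + 0.9 * np.exp(-((angles - 0.5) ** 2) / 0.05)   # mock BRDF data

def brdf_model(p, theta):
    kd, ks, mu, w = p          # diffuse level, lobe height, position, width
    return kd + ks * np.exp(-((theta - mu) ** 2) / w)

def cost(p):                   # least-squares misfit to the measured BRDF
    return np.sum((brdf_model(p, angles) - measured) ** 2)

result = differential_evolution(
    cost, bounds=[(0, 1), (0, 2), (0, 1.5), (1e-3, 1)], seed=3)
print(result.x)                # recovers ~(0.3, 0.9, 0.5, 0.05)
```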

  16. Comparing different error conditions in filmdosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2005-01-01

    In evaluating a film used as a personal dosemeter, it may be necessary to flag dosemeters when possible error conditions are recognized. These are errors that might influence the ability to evaluate the dose correctly, and include broken, contaminated or improperly handled dosemeters. In this project we examined how two services (NIRH, GSF) from two different EU countries flag their dosemeters. The services differ greatly in size, customer composition and issuing period, but both use film as their primary dosemeter. The possible error conditions examined here include contaminated dosemeters, dosemeters exposed to moisture or light, and missing filters in the dosemeter badges, among others. The data were collected for the year 2003, in which NIRH evaluated approximately 50 thousand film dosemeters and GSF about one million. For each error condition, the percentage of film dosemeters affected is calculated, as well as the distribution among different employee categories, i.e. industry, medicine, research, veterinary and other. For some error conditions we see a common pattern, while for others there is a large discrepancy between the services. The differences and possible explanations are discussed. The results of the investigation may motivate further comparisons between the different monitoring services in Europe. (author)

  17. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  18. A Universal Isotherm Model to Capture Adsorption Uptake and Energy Distribution of Porous Heterogeneous Surface

    KAUST Repository

    Ng, Kim Choon; Burhan, Muhammad; Shahzad, Muhammad Wakil; Ismail, Azahar Bin

    2017-01-01

    The adsorbate-adsorbent thermodynamics are complex, as they are influenced by the pore size distribution, surface heterogeneity and site energy distribution, as well as the adsorbate properties. Together, these parameters define the adsorbate uptake that forms the state diagrams, known as adsorption isotherms, when the sorption site energies on the pore surfaces are favorable. The available adsorption models for describing vapor uptake or isotherms have hitherto been individually defined to correlate with a certain type of isotherm pattern; there is as yet no universal approach to developing these isotherm models. In this paper, we demonstrate that the characteristics of all sorption isotherm types can be succinctly unified by a revised Langmuir model when merged with the concept of Homotattic Patch Approximation (HPA) and the availability of multiple sets of site energy, each accompanied by its respective fractional probability factor. The total uptake (q/q*) at assorted pressure ratios (P/Ps) is inextricably traced to the manner in which the site energies are spread, either naturally or by engineering, over and across the heterogeneous surfaces. An insight into the porous heterogeneous surface characteristics, in terms of adsorption site availability, is presented, describing the unique behavior of each isotherm type.
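
    The central construction, a probability-weighted sum of Langmuir terms over site-energy patches, can be sketched as below. The patch energies, fractions, and the exponential affinity form are illustrative assumptions rather than the paper's fitted values.

    ```python
    # Sketch of a composite isotherm: total uptake as a probability-weighted
    # sum of Langmuir terms, one per homotattic patch (site-energy group).
    # Energies, fractions, and the affinity form are illustrative only.
    import numpy as np

    R = 8.314    # gas constant, J/(mol K)
    T = 298.0    # temperature, K

    def composite_uptake(p_ratio, energies, fractions):
        """q/q* at relative pressure P/Ps for patches with site energies E_i
        (J/mol) occurring with fractional probabilities f_i (summing to 1)."""
        q = np.zeros_like(p_ratio)
        for E, f in zip(energies, fractions):
            K = np.exp(E / (R * T))                       # patch affinity
            q += f * (K * p_ratio) / (1.0 + K * p_ratio)  # Langmuir term
        return q

    p = np.linspace(1e-4, 1.0, 200)
    # The spread of site energies across patches controls which isotherm
    # type the composite curve mimics.
    q = composite_uptake(p, energies=[5e3, 10e3, 20e3], fractions=[0.5, 0.3, 0.2])
    print("uptake at P/Ps = 0.5:", round(float(np.interp(0.5, p, q)), 3))
    ```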

  20. Modified retrieval algorithm for three types of precipitation distribution using x-band synthetic aperture radar

    Science.gov (United States)

    Xie, Yanan; Zhou, Mingliang; Pan, Dengke

    2017-10-01

    The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm retrieves the near-surface rainfall rate with a modified regression empirical algorithm rather than a linear regression formula. The number of empirical parameters in the weighted-integration step is halved, and a smaller average relative error is achieved for rainfall rates below 100 mm/h. The algorithm proposed in this paper can therefore provide high-precision rainfall information.

  1. The Most Common Geometric and Semantic Errors in CityGML Datasets

    Science.gov (United States)

    Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.

    2016-10-01

    To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. In this paper we investigate the quality of currently available CityGML datasets: we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings are correct or not. We have analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, and produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that they are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness of data quality among data providers and 3D GIS software producers.

  2. Global Validation of MODIS Atmospheric Profile-Derived Near-Surface Air Temperature and Dew Point Estimates

    Science.gov (United States)

    Famiglietti, C.; Fisher, J.; Halverson, G. H.

    2017-12-01

    This study validates a method of remote sensing near-surface meteorology that vertically interpolates MODIS atmospheric profiles to surface pressure level. The extraction of air temperature and dew point observations at a two-meter reference height from 2001 to 2014 yields global moderate- to fine-resolution near-surface temperature distributions that are compared to geographically and temporally corresponding measurements from 114 ground meteorological stations distributed worldwide. This analysis is the first robust, large-scale validation of the MODIS-derived near-surface air temperature and dew point estimates, both of which serve as key inputs in models of energy, water, and carbon exchange between the land surface and the atmosphere. Results show strong linear correlations between remotely sensed and in-situ near-surface air temperature measurements (R2 = 0.89), as well as between dew point observations (R2 = 0.77). Performance is relatively uniform across climate zones. The extension of mean climate-wise percent errors to the entire remote sensing dataset allows for the determination of MODIS air temperature and dew point uncertainties on a global scale.
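
    The interpolation step described above can be sketched as follows; the pressure levels, temperatures, and surface pressure are illustrative numbers, not actual MODIS retrievals.

    ```python
    # Sketch: estimate near-surface air temperature by interpolating an
    # atmospheric temperature profile (on fixed pressure levels) to the
    # retrieved surface pressure of the pixel. All values are illustrative.
    import numpy as np

    levels_hpa = np.array([700.0, 780.0, 850.0, 920.0, 1000.0])  # pressure levels
    temp_k     = np.array([278.1, 283.0, 287.4, 291.2, 295.0])   # profile temps

    surface_pressure_hpa = 968.0   # retrieved surface pressure for the pixel

    # np.interp requires increasing x; pressure already increases toward surface.
    t_near_surface = np.interp(surface_pressure_hpa, levels_hpa, temp_k)
    print(f"near-surface air temperature: {t_near_surface:.1f} K")
    ```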

  3. Surface potential measurement of negative-ion-implanted insulators by analysing secondary electron energy distribution

    International Nuclear Information System (INIS)

    Toyota, Yoshitaka; Tsuji, Hiroshi; Nagumo, Syoji; Gotoh, Yasuhito; Ishikawa, Junzo; Sakai, Shigeki.

    1994-01-01

    The negative-ion implantation method we have proposed is a novel technique that can greatly reduce the surface charging of isolated electrodes. In this paper, a way to determine the surface potential of negative-ion-implanted insulators by secondary electron energy analysis is described. The secondary electron energy distribution is obtained with a retarding-field energy analyzer. The results show that the surface of fused quartz implanted with negative ions (C⁻ at energies of 10 keV to 40 keV) is negatively charged by only several volts. This surface potential is extremely low compared with that produced by positive-ion implantation. Negative-ion implantation is therefore a very effective method for charge-up-free implantation without charge compensation. (author)

  4. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    Science.gov (United States)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter

  5. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ^disk_*), and disk maximality (F^disk_{*,max} ≡ V^disk_{*,max}/V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  6. Energy distributions of neutral species ejected from well-characterized surfaces measured by means of multiphoton resonance ionization spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, D.; Ishigami, R.; Dhole, S.D.; Morita, K. E-mail: k-morita@mail.nucl.nagoya-u.ac.jp

    2000-04-01

    The energy distributions of neutral atoms ejected from a polycrystalline Cu target, the Si(111)-7×7 surface, and the Si(111)-"5×5"-Cu surface under 5 keV Ar⁺ ion bombardment have been measured with very high efficiency by means of multiphoton resonance ionization spectroscopy, in order to obtain the surface binding energies. The energy distributions for Cu from the polycrystalline Cu target, Si from the Si(111)-7×7 surface, and Cu from the Si(111)-"5×5"-Cu surface have peaks at energies of around 3.0, 5.0 and 1.5 eV, respectively, with high-energy tails proportional to E^(-1.9), E^(-1.2) and E^(-1.3). Based on linear collision cascade theory, the surface binding energies are determined to be 5.7, 6.0 and 2.0 eV, and the power factor m in the power-law approximation to the Thomas-Fermi potential is determined to be 0.1, 0.4 and 0.3 for Cu from polycrystalline Cu, Si from the Si(111)-7×7 surface, and Cu from the Si(111)-"5×5"-Cu surface, respectively. In conclusion, the results indicate that the energy distributions of ejected particles are well characterized by the linear collision cascade theory developed by Sigmund.
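
    A minimal sketch of the linear-cascade spectrum used to interpret these measurements is given below, with f(E) proportional to E/(E + U)^(3 - 2m); the values of U and m are those quoted above for Cu from polycrystalline Cu.

    ```python
    # Sketch of the linear-cascade (Sigmund/Thompson-type) energy spectrum:
    # f(E) ~ E / (E + U)**(3 - 2m), with surface binding energy U and
    # power-law exponent m. The high-energy tail falls off as E**(2m - 2).
    import numpy as np

    def cascade_spectrum(E, U, m):
        """Unnormalised sputtered-atom energy distribution."""
        return E / (E + U) ** (3.0 - 2.0 * m)

    E = np.linspace(0.1, 50.0, 500)           # ejected-atom energy, eV
    f = cascade_spectrum(E, U=5.7, m=0.1)     # values quoted for Cu from Cu
    # The peak sits near U/2 for small m, consistent with the ~3 eV peak
    # reported in the abstract.
    print("peak energy ~", round(float(E[np.argmax(f)]), 1), "eV")
    ```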

  7. Development of a high precision dosimetry system for the measurement of surface dose rate distribution for eye applicators

    Energy Technology Data Exchange (ETDEWEB)

    Eichmann, Marion; Fluehs, Dirk; Spaan, Bernhard [Fakultaet Physik, Technische Universitaet Dortmund, D 44221 Dortmund (Germany); Klinische Strahlenphysik, Universitaetsklinikum Essen, D 45122 Essen (Germany); Fakultaet Physik, Technische Universitaet Dortmund, D 44221 Dortmund (Germany)

    2009-10-15

    Purpose: The therapeutic outcome of the therapy with ophthalmic applicators is highly dependent on the application of a sufficient dose to the tumor, whereas the dose applied to the surrounding tissue needs to be minimized. The goal for the newly developed apparatus described in this work is the determination of the individual applicator surface dose rate distribution with a high spatial resolution and a high precision in dose rate with respect to time and budget constraints especially important for clinical procedures. Inhomogeneities of the dose rate distribution can be detected and taken into consideration for the treatment planning. Methods: In order to achieve this, a dose rate profile as well as a surface profile of the applicator are measured and correlated with each other. An instrumental setup has been developed consisting of a plastic scintillator detector system and a newly designed apparatus for guiding the detector across the applicator surface at a constant small distance. It performs an angular movement of detector and applicator with high precision. Results: The measurements of surface dose rate distributions discussed in this work demonstrate the successful operation of the measuring setup. Measuring the surface dose rate distribution with a small distance between applicator and detector and with a high density of measuring points results in a complete and gapless coverage of the applicator surface, being capable of distinguishing small sized spots with high activities. The dosimetrical accuracy of the measurements and its analysis is sufficient (uncertainty in the dose rate in terms of absorbed dose to water is <7%), especially when taking the surgical techniques in positioning of the applicator on the eyeball into account. Conclusions: The method developed so far allows a fully automated quality assurance of eye applicators even under clinical conditions. These measurements provide the basis for future calculation of a full 3D dose rate

  9. Understanding error generation in fused deposition modeling

    International Nuclear Information System (INIS)

    Bochmann, Lennart; Transchel, Robert; Wegener, Konrad; Bayley, Cindy; Helu, Moneer; Dornfeld, David

    2015-01-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08–0.30 mm) are generally greater than in the x direction (0.12–0.62 mm) and the z direction (0.21–0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. (paper)

  10. Land surface temperature distribution and development for green open space in Medan city using imagery-based satellite Landsat 8

    Science.gov (United States)

    Sulistiyono, N.; Basyuni, M.; Slamet, B.

    2018-03-01

    Green open space (GOS) is one of the requirements for a city to be comfortable to live in. GOS can reduce land surface temperature (LST) and air pollution. Medan is one of the largest cities in Indonesia and has experienced rapid development, but its early development tended to neglect the provision of GOS. The objective of this study is to determine the distribution of land surface temperature, its relationship with the normalized difference vegetation index (NDVI), and the priorities for GOS development in Medan City using imagery from the Landsat 8 satellite. The approach correlates the distribution of land surface temperature, derived from the digital numbers of band 10, with the NDVI, computed from the ratio of bands 5 and 4 of the Landsat 8 images. The results showed that the distribution of land surface temperature in Medan City in 2016 ranged from 20.57 to 33.83 °C. The relationship between the LST distribution and NDVI was inverse, with a correlation of -0.543 (sig. 0.000). GOS development in Medan City is therefore guided by the LST distribution and divided into three priority classes: a first priority class of 5,119.71 ha, a second priority class of 16,935.76 ha, and a third priority class of 6,118.50 ha.
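
    The NDVI computation referred to above is, in its standard formulation, (NIR - Red) / (NIR + Red), with Landsat 8 band 5 as NIR and band 4 as red. A minimal sketch follows; the file names are placeholders and rasterio is assumed available for reading the bands.

    ```python
    # Sketch: NDVI from Landsat 8 band 5 (NIR) and band 4 (red).
    # File names are placeholders for the actual scene bands.
    import numpy as np
    import rasterio  # assumed available for reading GeoTIFF bands

    with rasterio.open("LC08_B5.TIF") as nir_src, rasterio.open("LC08_B4.TIF") as red_src:
        nir = nir_src.read(1).astype("float64")
        red = red_src.read(1).astype("float64")

    # NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
    denom = nir + red
    ndvi = np.where(denom != 0, (nir - red) / denom, 0.0)
    print("NDVI range:", ndvi.min(), "to", ndvi.max())
    ```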

  11. Characterizing a four-qubit planar lattice for arbitrary error detection

    Science.gov (United States)

    Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias

    2015-05-01

    Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].

  12. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
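
    The error-rate framework can be sketched with a simple binomial stand-in for the paper's fitted probability mass functions: given per-cell pass probabilities for matching and non-matching pairs, cumulative false-positive and false-negative rates follow from the CMC-count threshold. All numbers below are assumptions for illustration.

    ```python
    # Sketch: cumulative error rates from a CMC-count threshold, using plain
    # binomial PMFs as stand-ins for the paper's fitted score models.
    from scipy.stats import binom

    N_CELLS = 32        # correlation cells per image pair (assumed)
    C = 6               # declared-match threshold in CMC count (assumed)

    p_nonmatch = 0.02   # per-cell pass probability, non-matching pairs (assumed)
    p_match = 0.80      # per-cell pass probability, matching pairs (assumed)

    false_positive = 1.0 - binom.cdf(C - 1, N_CELLS, p_nonmatch)  # P(CMC >= C | non-match)
    false_negative = binom.cdf(C - 1, N_CELLS, p_match)           # P(CMC <  C | match)
    print(f"FPR = {false_positive:.2e}, FNR = {false_negative:.2e}")
    ```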

  13. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full text: OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g., mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment.
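
    The error calculation described in the Methods can be sketched as follows; the mRS distribution and the inter-rater confusion (noise) matrix below are illustrative, not the published ones.

    ```python
    # Sketch: combine a trial's mRS outcome distribution with an inter-rater
    # confusion matrix and count the probability mass that lands in a
    # different category ("shift") or across a cut-point (dichotomization).
    import numpy as np

    mrs_dist = np.array([0.10, 0.15, 0.20, 0.20, 0.20, 0.10, 0.05])  # P(mRS=0..6), assumed

    # confusion[i, j] = P(rated j | true i); rows sum to 1 (assumed uniform noise).
    confusion = np.full((7, 7), 0.02)
    np.fill_diagonal(confusion, 0.0)
    np.fill_diagonal(confusion, 1.0 - confusion.sum(axis=1))

    # Error rate over the full mRS range ("shift" analysis): any off-diagonal mass.
    error_full = np.sum(mrs_dist[:, None] * confusion * (1 - np.eye(7)))

    # Dichotomized at mRS <= 1 vs >= 2: only crossings of the cut-point count.
    good = np.arange(7) <= 1
    cross = good[:, None] != good[None, :]
    error_dichotomy = np.sum(mrs_dist[:, None] * confusion * cross)
    print(f"full-range error {error_full:.1%}, dichotomized error {error_dichotomy:.1%}")
    ```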

  14. Error Management in ATLAS TDAQ: An Intelligent Systems approach

    CERN Document Server

    Slopper, John Erik

    2010-01-01

    This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system, which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of IST techniques in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures for evaluating error detection and classification techniques and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup, and datasets were gathered fro...

  15. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  16. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  17. A preliminary investigation of the distribution of heavy metals in surface sediments of the Cona tidal marsh (Venice Lagoon)

    International Nuclear Information System (INIS)

    Bernardi, S.; Costa, F.; Vazzoler, S.; Zonta, R.

    1988-01-01

    Data are from two series of surface sediment sampling in an interface area between the Venice Lagoon and the mainland. The distribution of heavy metals correlates with polluted source sites, identified in the channel systems with highly polluted inputs, and allows us to identify the localities of accumulation. Restricted to the estuary of the river tributary transporting a high concentration of pollutants into a tidal marsh area of the lagoon, the study shows the effect of freshwater forcing in distributing heavy metals over surface sediments. Within the scope of this preliminary investigation, indications from sampling identify a sector of the 'palude of Cona' in this estuary which is highly suitable for detailed studies on processes affecting heavy-metal distributions in the bottom surface sediments of shallow-water areas.

  18. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  19. Calculated energy distributions for light 0.25--18-keV ions scattered from solid surfaces

    International Nuclear Information System (INIS)

    Robinson, J.E.; Harms, A.A.; Karapetsas, S.K.

    1975-01-01

    Scattered energy distributions are calculated for light ions incident on Nb and Mo surfaces of interest for controlled nuclear fusion reactors. The scattered energy is found to vary as a function of the reflection coefficient between a multiple-collision limit at low energies and a single-collision Rutherford scattering limit at high energies. High-energy peaking of the scattered particle distributions is also found for low incident energies.

  20. Energy spectrum of surface electrons over a ³He–⁴He solution with a spatially non-uniform distribution of the light isotope

    Energy Technology Data Exchange (ETDEWEB)

    Bezsmolnyy, Ya.Yu.; Sokolova, E.S.; Sokolov, S.S. [B.Verkin Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine, 47 Prospekt Nauky, 61103 Kharkov (Ukraine); Studart, Nelson [Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, Av. dos Estados, 5001, 09210-580 Santo André, São Paulo (Brazil); Departamento de Física, Universidade Federal de São Carlos, via Washington Luís, km 235, 13565-905 Säo Carlos, São Paulo (Brazil)

    2017-02-15

    The energy gap between the ground and first excited energy levels of surface electrons deposited over a dilute ³He–⁴He solution is evaluated. Two spatial distributions of ³He atoms near the free surface of the solution are considered: one consists of a thin though macroscopic ³He film, and in the other the ³He concentration varies continuously from the surface into the liquid. The energy gap is calculated as a function of the parameters of the ³He spatial distribution for both cases. It is shown that the dependence of the energy gap on the distribution parameters allows measurements of intersubband transitions of the surface electrons to be used to determine the ³He concentration distribution and, in principle, the nature of the spatial distribution of the light isotope near the surface of the solution.

  1. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  2. Etalon (standard) for surface potential distribution produced by electric activity of the heart.

    Science.gov (United States)

    Szathmáry, V; Ruttkay-Nedecký, I

    1981-01-01

    The authors submit etalon (standard) equipotential maps as an aid in the evaluation of maps of surface potential distributions in living subjects. They were obtained by measuring potentials on the surface of an electrolytic tank shaped like the thorax. The individual etalon maps were determined in such a way that the parameters of the physical dipole forming the source of the electric field in the tank corresponded to the mean vectorcardiographic parameters measured in a healthy population sample. The technique also allows a quantitative estimate of the degree of non-dipolarity of the heart as the source of the electric field.

  3. Influence of error fields on the plasma confining field and the plasma confinement in tokamak

    International Nuclear Information System (INIS)

    Matsuda, Shinzaburo

    1977-05-01

    The influence of error fields on the plasma confining field and on plasma confinement is treated from the standpoint of design. In the initial breakdown phase, before formation of the closed magnetic surfaces, a properly applied vertical field is most important. Once the magnetic surfaces are formed, the non-axisymmetric error field becomes important. The effect of the shell gap associated with the iron core and with pulsed vertical coils is thus studied. The formation of magnetic islands due to the external non-axisymmetric error field is studied with a simple model, and a method of suppressing the islands by choosing the minor periodicity is proposed. (auth.)

  4. Dose variations caused by setup errors in intracranial stereotactic radiotherapy: A PRESAGE study

    International Nuclear Information System (INIS)

    Teng, Kieyin; Gagliardi, Frank; Alqathami, Mamdooh; Ackerly, Trevor; Geso, Moshi

    2014-01-01

    Stereotactic radiotherapy (SRT) requires tight margins around the tumor, producing a steep dose gradient between the tumor and the surrounding healthy tissue, so any setup error might become clinically significant. To date, no study has evaluated the dosimetric variations caused by setup errors with a 3-dimensional dosimeter, the PRESAGE. This research aimed to evaluate the potential effect of setup errors on the dose distribution of intracranial SRT. Computed tomography (CT) simulation of a CIRS radiosurgery head phantom was performed with 1.25-mm slice thickness. An ideal treatment plan was generated using Brainlab iPlan. A PRESAGE was made for every treatment with and without errors. A prescan using the optical CT scanner was carried out. Before treatment, the phantom was imaged using Brainlab ExacTrac. Actual radiotherapy treatments with and without errors were carried out with the Novalis treatment machine. A postscan was performed with the optical CT scanner to analyze the dose irradiation. The dose variation between treatments with and without errors was determined using a 3-dimensional gamma analysis, with errors considered clinically insignificant when the passing ratio was 95% or above. Setup errors were clinically significant when they exceeded a 0.7-mm translation and a 0.5° rotation. The results showed that a 3-mm translation shift in the superior-inferior (SI), right-left (RL), and anterior-posterior (AP) directions combined with a 2° couch rotation produced a passing ratio of 53.1%. Translational and rotational errors of 1.5 mm and 1°, respectively, generated a passing ratio of 62.2%. A translation shift of 0.7 mm in the SI, RL, and AP directions with a 0.5° couch rotation produced a passing ratio of 96.2%. Preventing setup errors in intracranial SRT treatment is extremely important, as errors greater than 0.7 mm and 0.5° alter the dose distribution. The geometrical displacements affect dose delivery
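
    For reference, a brute-force 1-D sketch of the gamma analysis (3%/3 mm global criteria assumed) is given below; the study itself used a full 3-D gamma comparison of PRESAGE readouts.

    ```python
    # Sketch: 1-D gamma analysis. gamma(i) is the minimum over evaluation
    # points of sqrt((dist/DTA)^2 + (dose_diff/tolerance)^2); a point passes
    # when gamma <= 1, and the passing ratio is the fraction that pass.
    import numpy as np

    def gamma_passing_ratio(dose_ref, dose_eval, dx_mm, dose_tol=0.03, dist_mm=3.0):
        n = dose_ref.size
        x = np.arange(n) * dx_mm
        norm = dose_tol * dose_ref.max()      # global dose normalisation
        gammas = np.empty(n)
        for i in range(n):
            dist2 = ((x - x[i]) / dist_mm) ** 2
            dd2 = ((dose_eval - dose_ref[i]) / norm) ** 2
            gammas[i] = np.sqrt(np.min(dist2 + dd2))
        return np.mean(gammas <= 1.0)

    # Example: a 1 mm lateral shift of a steep, SRT-like dose profile.
    x = np.linspace(-20, 20, 401)             # 0.1 mm grid
    ref = 1.0 / (1.0 + np.exp(x / 1.5))       # steep dose gradient
    shifted = 1.0 / (1.0 + np.exp((x - 1.0) / 1.5))
    print(f"passing ratio: {gamma_passing_ratio(ref, shifted, 0.1):.1%}")
    ```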

  5. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2001-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using them can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and many num...

  6. Quantitative Transmission Electron Microscopy of Nanoparticles and Thin-Film Formation in Electroless Metallization of Polymeric Surfaces

    Science.gov (United States)

    Dutta, Aniruddha; Heinrich, Helge; Kuebler, Stephen; Grabill, Chris; Bhattacharya, Aniket

    2011-03-01

    Gold nanoparticles (Au-NPs) act as nucleation sites for the electroless deposition of silver on functionalized SU8 polymeric surfaces. Here we report the nanoscale morphology of Au and Ag nanoparticles as studied by transmission electron microscopy (TEM). Scanning TEM with a high-angle annular dark-field detector is used to obtain atomic-number contrast. From the intensity-calibrated plan-view scanning TEM images we determine the mean thickness and the volume distribution of the Au-NPs on the surface of the functionalized polymer. We also report the height and radius distributions of the gold nanoparticles obtained from STEM images, taking the experimental errors into consideration. The cross-sectional TEM images yield the density and the average spacing of the Au and Ag nanoparticles on the surface of the polymer. Supported by a grant from the NSF Chemistry Division.

  7. Error Analysis Of Clock Time (T), Declination (*) And Latitude ...

    African Journals Online (AJOL)

    ...), latitude (Φ), longitude (λ) and azimuth (A), which are aimed at establishing fixed positions and orientations of survey points and lines on the earth's surface. The paper attempts an analysis of the individual and combined effects of error in time ...

  8. An Analysis of Error Reconciliation Protocols for use in Quantum Key Distribution

    Science.gov (United States)

    2012-02-01

    ...of the messages passed, and that the time to prepare or separate the message information is negligible. Finally, for this experiment all errors... of interactions becomes negligible. In fact, of the three protocols, experiments performed here have shown that Winnow produces the highest average

  9. A correction for emittance-measurement errors caused by finite slit and collector widths

    International Nuclear Information System (INIS)

    Connolly, R.C.

    1992-01-01

    One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs

  10. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.
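
    The pseudo-4-dimensional error dose can be sketched as below, here simplified to pure translations of a synthetic dose grid; the per-phase shifts are assumed values, and the rotational component handled in the paper is omitted.

    ```python
    # Sketch: average the planned dose grid shifted by each breathing
    # phase's mean residual tracking error (translations only; rotations
    # omitted in this simplification). All values are illustrative.
    import numpy as np
    from scipy.ndimage import shift

    # Synthetic spherical dose distribution on a 1 mm voxel grid.
    z, y, x = np.mgrid[-30:31, -30:31, -30:31]
    planned = 1.0 / (1.0 + np.exp((np.sqrt(x**2 + y**2 + z**2) - 15.0) / 1.5))

    # Mean residual error per breathing phase, in voxels (assumed values).
    rng = np.random.default_rng(3)
    phase_err = rng.normal(0.0, 0.3, size=(10, 3))

    error_dose = np.mean([shift(planned, e, order=1) for e in phase_err], axis=0)
    dev = 100.0 * np.abs(error_dose - planned).max() / planned.max()
    print(f"max local deviation: {dev:.2f}% of max dose")
    ```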

  12. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of the argument (the outermost region being x ≥ 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
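
    The cautioned loss of significance is easy to demonstrate with the standard library; this is a numerical illustration of the restriction, not part of the original routine.

    ```python
    # Demonstration: computing erfc(x) as 1 - erf(x) loses significance for
    # large x, which is why erfc is computed directly in the outer regions.
    import math

    for x in (1.0, 3.0, 6.0, 10.0):
        direct = math.erfc(x)             # computed directly
        via_identity = 1.0 - math.erf(x)  # subject to cancellation
        print(f"x={x:4.1f}  erfc={direct:.3e}  1-erf={via_identity:.3e}")

    # By x = 6, erf(x) rounds to 1.0 in double precision, so 1 - erf(x)
    # returns 0.0 while erfc(x) is still ~2.2e-17: total loss of significance.
    ```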

  13. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full text: Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994-2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
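
    The displacement simulation described in the Methods can be sketched as below; the error-distance distribution, the coordinates, and the square-grid stand-in for census tracts are all illustrative assumptions.

    ```python
    # Sketch: displace each geocode by a random angle and a distance drawn
    # from an observed error distribution, then count how often the point
    # changes "tract". A square grid stands in for census tract polygons;
    # a real run would use point-in-polygon queries against tract boundaries.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000
    observed_errors_m = rng.lognormal(mean=4.0, sigma=0.8, size=n)  # stand-in sample

    x, y = rng.uniform(0, 20_000, n), rng.uniform(0, 20_000, n)     # geocodes, metres
    angle = rng.uniform(0, 2 * np.pi, n)
    dist = rng.choice(observed_errors_m, size=n, replace=True)      # resample errors
    x2, y2 = x + dist * np.cos(angle), y + dist * np.sin(angle)

    def tract_id(px, py, cell_m=2_000):
        """Hypothetical tracts: a square grid standing in for census polygons."""
        return (px // cell_m).astype(int) * 10_000 + (py // cell_m).astype(int)

    misassigned = np.mean(tract_id(x2, y2) != tract_id(x, y))
    print(f"{misassigned:.1%} of displaced geocodes change tract")
    ```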

  14. Atomic-level spatial distributions of dopants on silicon surfaces: toward a microscopic understanding of surface chemical reactivity

    Science.gov (United States)

    Hamers, Robert J.; Wang, Yajun; Shan, Jun

    1996-11-01

    We have investigated the interaction of phosphine (PH3) and diborane (B2H6) with the Si(001) surface using scanning tunneling microscopy, infrared spectroscopy, and ab initio molecular orbital calculations. Experiment and theory show that the formation of P-Si heterodimers is energetically favorable compared with the formation of P-P dimers. The stability of the heterodimers arises from a large strain energy associated with the formation of P-P dimers. At moderate P coverages, the formation of P-Si heterodimers leaves the surface with few locations where there are two adjacent reactive sites. This in turn modifies the chemical reactivity toward species such as PH3, which require only one site to adsorb but two adjacent sites to dissociate. Boron on Si(001) strongly segregates into localized regions of high boron concentration, separated by large regions of clean Si. This leads to a spatially modulated chemical reactivity which, during subsequent growth by chemical vapor deposition (CVD), leads to the formation of a rough surface. The implications of the atomic-level spatial distribution of dopants for the rates and mechanisms of CVD growth processes are discussed.

  15. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  16. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff.

  17. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
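
    As a rough illustration of the Monte Carlo finite element idea, the sketch below solves a 1D model diffusion problem with a lognormal piecewise-constant coefficient and averages an observable over samples. The element-wise independent coefficient field and all parameter values are simplifying assumptions; a genuinely Hölder-1/2 field would require a correlated random-field model.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_diffusion(a, n):
    """Linear FEM for -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0,
    with piecewise-constant coefficient a[e] on each of n elements."""
    h = 1.0 / n
    K = np.zeros((n - 1, n - 1))
    f = np.full(n - 1, h)            # load vector for f(x) = 1
    for e in range(n):               # element-by-element assembly
        k = a[e] / h
        for i in (e - 1, e):         # interior indices of the element's nodes
            if 0 <= i < n - 1:
                K[i, i] += k
        if 1 <= e <= n - 2:
            K[e - 1, e] -= k
            K[e, e - 1] -= k
    return np.linalg.solve(K, f)     # interior nodal values

n, samples = 64, 500
qoi = []
for _ in range(samples):
    a = np.exp(0.5 * rng.standard_normal(n))  # independent lognormal values
    u = solve_diffusion(a, n)
    qoi.append(u[n // 2 - 1])                 # observable: u(0.5)

print(f"E[u(0.5)] ~ {np.mean(qoi):.4f} +/- {np.std(qoi) / np.sqrt(samples):.4f}")
```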

  18. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different from others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical …
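
    The flavor of the fully Bayesian second stage can be conveyed with a toy Metropolis sampler. The sketch below jointly infers a scalar "flux" and three mode-dependent systematic offsets from synthetic observations; the data model, the priors, and the sum-to-zero identifiability constraint are all hypothetical stand-ins for the paper's far richer spatio-temporal model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one "flux" signal observed in three instrument modes,
# each with its own unknown systematic offset (all values hypothetical).
true_flux, true_bias = 2.0, np.array([0.5, -0.3, 0.1])
modes = rng.integers(0, 3, size=300)
y = true_flux + true_bias[modes] + rng.normal(0, 0.4, size=300)

def log_post(theta):
    flux, b = theta[0], theta[1:]
    resid = y - flux - b[modes]
    # Gaussian likelihood (sigma = 0.4) plus weak Gaussian priors;
    # a soft sum-to-zero prior on the biases aids identifiability.
    return (-0.5 * np.sum(resid**2) / 0.4**2
            - 0.5 * flux**2 / 10.0**2
            - 0.5 * np.sum(b**2) / 1.0**2
            - 0.5 * np.sum(b) ** 2 / 0.01)

theta, lp, chain = np.zeros(4), -np.inf, []
lp = log_post(theta)
for it in range(20000):              # random-walk Metropolis
    prop = theta + rng.normal(0, 0.02, size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 5000:                    # discard burn-in
        chain.append(theta.copy())

chain = np.array(chain)
print("posterior mean flux  :", chain[:, 0].mean().round(3))
print("posterior mean biases:", chain[:, 1:].mean(axis=0).round(3))
```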

  19. Second order classical perturbation theory for atom surface scattering: Analysis of asymmetry in the angular distribution

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yun, E-mail: zhou.yun.x@gmail.com; Pollak, Eli, E-mail: eli.pollak@weizmann.ac.il [Chemical Physics Department, Weizmann Institute of Science, 76100 Rehovot (Israel); Miret-Artés, Salvador, E-mail: s.miret@iff.csic.es [Instituto de Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 123, 28006 Madrid (Spain)

    2014-01-14

    A second order classical perturbation theory is developed and applied to elastic atom corrugated surface scattering. The resulting theory accounts for experimentally observed asymmetry in the final angular distributions. These include qualitative features, such as reduction of the asymmetry in the intensity of the rainbow peaks with increased incidence energy as well as the asymmetry in the location of the rainbow peaks with respect to the specular scattering angle. The theory is especially applicable to “soft” corrugated potentials. Expressions for the angular distribution are derived for the exponential repulsive and Morse potential models. The theory is implemented numerically to a simplified model of the scattering of an Ar atom from a LiF(100) surface.

  1. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    Science.gov (United States)

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years). The distribution curves of mean AOV differed between patients and controls, with smaller AOV for larger vessels in patients and a shifted distribution curve of vessels relative to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  2. Drug administration errors in an institution for individuals with intellectual disability : an observational study

    NARCIS (Netherlands)

    van den Bemt, P M L A; Robertz, R; de Jong, A L; van Roon, E N; Leufkens, H G M

    BACKGROUND: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form …

  3. [Distribution and sources of oxygen and sulfur heterocyclic aromatic compounds in surface soil of Beijing, China].

    Science.gov (United States)

    He, Guang-Xiu; Zhang, Zhi-Huan; Peng, Xu-Yang; Zhu, Lei; Lu, Ling

    2011-11-01

    62 surface soil samples were collected from different environmental function zones in Beijing. Sulfur and oxygen heterocyclic aromatic compounds were detected by GC/MS. The objectives of this study were to identify the composition and distribution of these compounds, and discuss their sources. The results showed that the oxygen and sulfur heterocyclic aromatic compounds in the surface soils mainly contained the dibenzofuran, methyl- and C2-dibenzofuran series, the dibenzothiophene, methyl-, C2- and C3-dibenzothiophene series, and the benzonaphthothiophene series. The composition and distribution of the oxygen and sulfur heterocyclic aromatic compounds in the surface soil samples varied across the different environmental function zones; some factory sites and the urban area were the most heavily contaminated. In Beijing, the degree of contamination by oxygen and sulfur heterocyclic aromatic compounds in the northern surface soil was higher than that in the south. There were clear linear correlations between the concentrations of the dibenzofuran and fluorene series, as well as between the concentrations of the dibenzothiophene and dibenzofuran series. The oxygen and sulfur heterocyclic aromatic compounds in the surface soil were mainly derived from combustion products of oil and coal and from direct input of mineral oil, etc. There were some variations in the pollution sources of the different environmental function zones.

  4. Distribution of the Ammoniated Species on the Surface of Ceres

    Science.gov (United States)

    Ammannito, E.; De Sanctis, M. C.; Carrorro, F. G.; Ciarniello, M.; Combe, J. P.; De Angelis, S.; Ehlmann, B. L.; Frigeri, A.; Marchi, S.; McSween, H. Y., Jr.; Raponi, A.; Toplis, M. J.; Tosi, F.; Castillo, J. C.; Capaccioni, F.; Capria, M. T.; Fonte, S.; Giardino, M.; Jaumann, R.; Longobardo, A.; Joy, S. P.; Magni, G.; McCord, T. B.; McFadden, L. A.; Palomba, E.; Pieters, C. M.; Polanskey, C. A.; Prettyman, T. H.; Rayman, M.; Raymond, C. A.; Schenk, P.; Zambon, F.; Russell, C. T.

    2016-12-01

    The Dawn spacecraft has been acquiring data on dwarf planet Ceres since January 2015 (1). The VIR spectrometer (0.25-5.0 μm) acquired data at different altitudes providing information on the composition of the surface of Ceres at resolutions ranging from few kilometers to about one hundred meters (2). The average spectrum of Ceres is well represented by a mixture of dark minerals, Mg- phyllosilicates, ammoniated clays, and Mg carbonates (3). This result confirms previous studies based on ground based spectra (4, 5). Maps of the surface at about 1 km/px show that the components identified in the average spectrum are present all across the surface with variations in their relative abundance (6). Some localized areas however have peculiar spectral characteristics. One example is the spectrum of the bright regions within Occator crater that is most consistent with a large amount of Na-carbonates and possibly ammonium salts (7). The presence of ammoniated species poses a constraint on the pH and redox condition during the evolution of Ceres. Therefore, we have studied the distribution across the surface of such species to better understand the evolutionary pathway of Ceres. References : (1) Russell, C. T. et al. 2016, Science. (2) De Sanctis M.C. et al., The VIR Spectrometer, 2011, Space Science Reviews. (3) De Sanctis M.C. et al. Ammoniated phyllosilicates on dwarf planet Ceres reveal an outer solar system origin, Nature, 2015. (4) King T. et al. (1992) Science, 255, 1551-1553. (5) Rivkin A.S. et al. (2006) Icarus, 185, 563-567. (6) Ammannito E. et al., Spectral diversity of Ceres surface as measured by VIR, 2016, Science. (7) De Sanctis et al. (2016), Nature

  5. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  6. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.

  7. Organizational safety culture and medical error reporting by Israeli nurses.

    Science.gov (United States)

    Kagan, Ilya; Barnoy, Sivia

    2013-09-01

    To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors on a daily to weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.

  8. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity cannot properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
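
    A minimal version of the zone-wise fitting step might look like the following, assuming synthetic relative-error data in place of the OTU2/BCN measurements: fit a skew-normal PDF by maximum likelihood with scipy and compare its goodness of fit against a plain Gaussian, which the paper found inferior.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic SMBG relative errors (%) standing in for "zone 2" data;
# the shape, location, and scale values here are invented.
errors = stats.skewnorm.rvs(a=3.0, loc=-4.0, scale=7.0, size=2000, random_state=rng)

# Maximum-likelihood fit of a skew-normal PDF, as in the zone-wise fits.
a, loc, scale = stats.skewnorm.fit(errors)
print(f"skew-normal MLE: shape={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")

# Goodness-of-fit check against the fitted model (Kolmogorov-Smirnov).
ks = stats.kstest(errors, "skewnorm", args=(a, loc, scale))
print(f"skew-normal KS statistic={ks.statistic:.4f}, p-value={ks.pvalue:.3f}")

# Compare with a Gaussian fit of the same data.
mu, sd = stats.norm.fit(errors)
ks_norm = stats.kstest(errors, "norm", args=(mu, sd))
print(f"Gaussian   KS statistic={ks_norm.statistic:.4f}")
```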

  9. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors …
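
    The core counting-and-thresholding step is easy to sketch. The toy below uses a made-up genome, simulated erroneous reads, and a fixed cutoff in place of the paper's estimated threshold: it counts observed k-mers across reads and validates those exceeding the cutoff, then checks how many false k-mers survive.

```python
from collections import Counter
import random

random.seed(3)
K = 5

# Toy genome containing a repeat, plus reads with substitution errors.
genome = "ACGTTGCAACGTTGCA" * 3 + "GGATCCTTAGG"

def noisy_read(start, length=12, err=0.02):
    """Extract a read and flip each base with probability `err`."""
    read = list(genome[start:start + length])
    for i in range(len(read)):
        if random.random() < err:
            read[i] = random.choice("ACGT".replace(read[i], ""))
    return "".join(read)

reads = [noisy_read(random.randrange(len(genome) - 12)) for _ in range(2000)]

# Count observed k-mers across all reads.
counts = Counter(r[i:i + K] for r in reads for i in range(len(r) - K + 1))

# Validate k-mers whose observed frequency exceeds a threshold; the paper
# estimates this threshold from inferred genomic frequencies, here it is
# simply a fixed illustrative cutoff.
THRESHOLD = 5
valid = {kmer for kmer, c in counts.items() if c >= THRESHOLD}
true_kmers = {genome[i:i + K] for i in range(len(genome) - K + 1)}

print(f"{len(valid)} k-mers validated, {len(true_kmers)} true k-mers")
print(f"false k-mers surviving the threshold: {len(valid - true_kmers)}")
```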

  10. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    Science.gov (United States)

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are define, measure, analyze, improve, and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors over the preanalytic, analytic and postanalytic phases was analysed. Improvement strategies were put forward in the monthly intradepartmental meetings, and the units with high error rates were placed under closer control. Fifty-six (52.4%) of the 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided a reduction of the error rates mainly in the pre-analytic and analytic phases.
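
    For reference, the sigma-level bookkeeping behind such figures can be reproduced in a few lines; the error and opportunity counts below are hypothetical placeholders, not the laboratory's actual tallies.

```python
from scipy.stats import norm

def dpmo(errors: int, opportunities: int) -> float:
    """Defects per million opportunities."""
    return 1e6 * errors / opportunities

def sigma_level(d: float) -> float:
    """Short-term sigma level with the conventional 1.5-sigma shift."""
    return norm.ppf(1 - d / 1e6) + 1.5

# Hypothetical half-year counts standing in for a pathology lab's records.
for label, errs, opps in [("first half", 68, 10_000_000),
                          ("second half", 13, 10_000_000)]:
    d = dpmo(errs, opps)
    print(f"{label}: {d:.1f} DPMO -> sigma level {sigma_level(d):.2f}")
```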

  11. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    The aim of this paper is to present the usefulness of the binomial distribution in studying contingency tables, together with the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency table units based on their mathematical expressions reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the computed confidence interval for a specified method, such as the confidence interval boundaries, the percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level, was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation and the resulting surface maps are quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent such triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
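
    As an illustration of why the limits of the normal approximation matter, the sketch below computes the simple Wald interval alongside the better-behaved Wilson score interval; these are two standard methods chosen for illustration, not reproductions of the paper's own PHP algorithms.

```python
import math

def wald_ci(x: int, n: int, z: float = 1.96):
    """Wald interval: the plain normal approximation to the binomial."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x: int, n: int, z: float = 1.96):
    """Wilson score interval: better behaved near 0, 1, and small n."""
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Extreme proportions show why the choice of method matters.
for x, n in [(1, 20), (10, 20), (19, 20)]:
    lo_w, hi_w = wald_ci(x, n)
    lo_s, hi_s = wilson_ci(x, n)
    print(f"x={x:2d}, n={n}: Wald=({lo_w:.3f}, {hi_w:.3f})  "
          f"Wilson=({lo_s:.3f}, {hi_s:.3f})")
```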

  12. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors are explained clearly and legibly, with tables that are easy to understand.

  13. Hypothetical Outcome Plots Outperform Error Bars and Violin Plots for Inferences about Reliability of Variable Ordering.

    Science.gov (United States)

    Hullman, Jessica; Resnick, Paul; Adar, Eytan

    2015-01-01

    Many visual depictions of probability distributions, such as error bars, are difficult for users to accurately interpret. We present and study an alternative representation, Hypothetical Outcome Plots (HOPs), which animates a finite set of individual draws. In contrast to the statistical background required to interpret many static representations of distributions, HOPs require relatively little background knowledge to interpret. Instead, HOPs enable viewers to infer properties of the distribution using mental processes like counting and integration. We conducted an experiment comparing HOPs to error bars and violin plots. With HOPs, people made much more accurate judgments about plots of two and three quantities. Accuracy was similar with all three representations for most questions about distributions of a single quantity.
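
    A HOP is conceptually just a sequence of individual draws shown one at a time. The sketch below lays the frames out side by side with matplotlib rather than animating them; the two Gaussian quantities and all parameter values are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)

# Two hypothetical quantities A and B whose ordering a viewer must judge.
mu, sd = np.array([10.0, 11.0]), np.array([2.0, 2.0])

# Each frame of the HOP is one joint draw of (A, B).
n_frames = 20
draws = rng.normal(mu, sd, size=(n_frames, 2))

fig, axes = plt.subplots(1, n_frames, sharey=True, figsize=(12, 2.5))
for ax, (a, b) in zip(axes, draws):
    ax.bar([0, 1], [a, b], color=["C0", "C1"])
    ax.set_xticks([])
plt.suptitle("HOP frames: one draw of (A, B) per frame")
plt.tight_layout()
plt.show()

# Viewers estimate P(B > A) by counting frames; the exact fraction here:
print("fraction of frames with B > A:", np.mean(draws[:, 1] > draws[:, 0]))
```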

  14. Reconstructing the Surface Permittivity Distribution from Data Measured by the CONSERT Instrument aboard Rosetta: Method and Simulations

    Science.gov (United States)

    Plettemeier, D.; Statz, C.; Hegler, S.; Herique, A.; Kofman, W. W.

    2014-12-01

    One of the main scientific objectives of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard Rosetta is to perform a dielectric characterization of comet 67P/Churyumov-Gerasimenko's nucleus by means of a bi-static sounding between the lander Philae launched onto the comet's surface and the orbiter Rosetta. For the sounding, the lander part of CONSERT will receive and process the radio signal emitted by the orbiter part of the instrument and transmit a signal to the orbiter to be received by CONSERT. CONSERT will also be operated as a bi-static RADAR during the descent of the lander Philae onto the comet's surface. From data measured during the descent, we aim at reconstructing a surface permittivity map of the comet at the landing site and along the path below the descent trajectory. This surface permittivity map will give information on the bulk material right below and around the landing site and the surface roughness in areas covered by the instrument along the descent. The proposed method to estimate the surface permittivity distribution is based on a least-squares inversion approach in the frequency domain. The direct problem of simulating the wave propagation between lander and orbiter along the line of sight, and the signal reflected off the comet's surface, is modelled using a dielectric physical optics approximation. Restrictions on the measurement positions imposed by the descent orbitography, and limitations on the instrument dynamic range, are dealt with by applying a regularization technique in which the surface permittivity distribution, and the gradient with respect to the permittivity, are projected into a domain defined by a viable model of the spatial material and roughness distribution. The least-squares optimization step of the reconstruction is performed in this domain on a reduced set of parameters, yielding stable results. The viability of the proposed method is demonstrated by reconstruction results based on simulated data.
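
    Although the paper's dielectric physical-optics model and projection-based regularization are much more involved, the basic regularized least-squares machinery can be sketched with a toy linear forward model and a first-difference smoothness penalty; every matrix, profile, and noise level below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy linear forward model y = A x + noise standing in for the dielectric
# physical-optics model; x is a discretized surface-permittivity profile.
m, n = 40, 60
A = rng.normal(size=(m, n))
x_true = np.interp(np.linspace(0, 1, n), [0, 0.5, 1], [2.0, 3.5, 2.5])
y = A @ x_true + rng.normal(0, 0.5, size=m)

# Regularized least squares: penalize roughness with a first-difference
# operator D, mimicking the projection onto a smooth material model.
D = np.diff(np.eye(n), axis=0)
lam = 5.0
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```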

  15. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word, …

  16. Medical Error Types and Causes Made by Nurses in Turkey

    Directory of Open Access Journals (Sweden)

    Dilek Kucuk Alemdar

    2013-06-01

    AIM: This study was carried out as a descriptive study in order to determine the types, causes and prevalence of medical errors made by nurses in Turkey. METHOD: Seventy-eight (78) nurses working in a hospital randomly selected from five hospitals in Giresun city centre were enrolled in the study. The data were collected by the researchers using the 'Information Form for Nurses' and the 'Medical Error Form'. The Medical Error Form consists of 2 parts and 40 items covering types and causes of medical errors. Nurses' socio-demographic variables, medical error types and causes were evaluated using percentage distributions and means. RESULTS: The mean age of the nurses was 25.5 years, with a standard deviation of 6.03 years. 50% of the nurses had graduated from a health-professional high school. 53.8% of the nurses were single, 63.1% had worked for 1-5 years, 71.8% worked both day and night shifts, and 42.3% worked in medical clinics. The most common types of medical errors were hospital infections (15.4%), diagnostic errors (12.8%), and needle or cutting tool injuries and problems related to drugs with side effects (10.3% each). In the study, 38.5% of the nurses reported tiredness as a major cause of medical error, 36.4% increased workload, and 34.6% long working hours. CONCLUSION: As a result of the present study, nurses mentioned hospital infections, diagnostic errors, and needle or cutting tool injuries as the most common medical errors, and fatigue, work overload and long working hours as the most common reasons for medical errors. [TAF Prev Med Bull 2013; 12(3): 307-314]

  17. Impact of vegetation growth on urban surface temperature distribution

    International Nuclear Information System (INIS)

    Buyadi, S N A; Mohd, W M N W; Misni, A

    2014-01-01

    Earlier studies have indicated that the temperature distribution in the urban area is significantly warmer than in its surrounding suburban areas. The process of urbanization has created the urban heat island (UHI). As a city expands, trees are cut down to accommodate commercial development, industrial areas, roads, and suburban growth. Trees and green areas normally play a vital role in mitigating UHI effects, especially in regulating high temperatures in saturated urban areas. This study attempts to assess the effects of vegetation growth on land surface temperature (LST) distribution in urban areas. An area within the City of Shah Alam, Selangor has been selected as the study area. Land use/land cover and LST maps of two different dates are generated from Landsat 5 TM images of the years 1991 and 2009. Only five major land cover classes are considered in this study. The mono-window algorithm is used to generate the LST maps. Landsat 5 TM images are also used to generate the NDVI maps. Results from this study show that there are significant land use changes within the study area. Although the conversion of green areas into residential and commercial areas significantly increases the LST, mature trees help to mitigate the effects of UHI.

  18. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning

    International Nuclear Information System (INIS)

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C.; Soukup, Martin

    2009-01-01

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented the probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in the beam direction make treatment plans sensitive to range errors, while steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions. This can be achieved by the probabilistic approach. In contrast, the safety margin approach as widely applied in photon therapy fails in IMPT and is suitable for handling neither range variations nor setup errors.

  19. Study of gain-coupled distributed feedback laser based on high order surface gain-coupled gratings

    Science.gov (United States)

    Gao, Feng; Qin, Li; Chen, Yongyi; Jia, Peng; Chen, Chao; Cheng, LiWen; Chen, Hong; Liang, Lei; Zeng, Yugang; Zhang, Xing; Wu, Hao; Ning, Yongqiang; Wang, Lijun

    2018-03-01

    Single-longitudinal-mode, gain-coupled distributed feedback (DFB) lasers based on high order surface gain-coupled gratings are achieved. Periodic surface metal p-contacts with insulated grooves realize the gain-coupled mechanism. To enhance the gain contrast in the quantum wells without introducing an effective index-coupled effect, the groove length and depth were carefully designed. Our devices provided a single longitudinal mode with a maximum CW output power up to 48.8 mW/facet at 971.31 nm at 250 mA without facet coating, a narrow 3 dB linewidth, and a side-mode suppression ratio greater than 39 dB. An optical bistable characteristic was observed with a threshold current difference. Experimentally, devices with different cavity lengths were compared in terms of power-current and spectral characteristics. Owing to the easy fabrication technique and stable performance, this provides a method of fabricating practical gain-coupled distributed feedback lasers for commercial applications.

  20. Linear and Nonlinear Response of a Rotating Tokamak Plasma to a Resonant Error-Field

    Science.gov (United States)

    Fitzpatrick, Richard

    2014-10-01

    An in-depth investigation of the effect of a resonant error-field on a rotating, quasi-cylindrical, tokamak plasma is performed within the context of resistive-MHD theory. General expressions for the response of the plasma at the rational surface to the error-field are derived in both the linear and nonlinear regimes, and the extents of these regimes mapped out in parameter space. Torque-balance equations are also obtained in both regimes. These equations are used to determine the steady-state plasma rotation at the rational surface in the presence of the error-field. It is found that, provided the intrinsic plasma rotation is sufficiently large, the torque-balance equations possess dynamically stable low-rotation and high-rotation solution branches, separated by a forbidden band of dynamically unstable solutions. Moreover, bifurcations between the two stable solution branches are triggered as the amplitude of the error-field is varied. A low- to high-rotation bifurcation is invariably associated with a significant reduction in the width of the magnetic island chain driven at the rational surface, and vice versa. General expressions for the bifurcation thresholds are derived, and their domains of validity mapped out in parameter space. This research was funded by the U.S. Department of Energy under Contract DE-FG02-04ER-54742.

  1. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    Science.gov (United States)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  2. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
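
    A stripped-down version of the WFE-control step is a linear least-squares fit of actuator commands to the disturbance, with the residual serving as the fitting error estimate that accompanies the solution. The influence matrix below is random placeholder data, not output from FEA or SigFit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns of the influence matrix are actuator influence functions sampled
# at wavefront grid points (toy values; real ones come from a linear optics
# model built from mechanical/optical analysis).
n_points, n_actuators = 500, 12
influence = rng.normal(size=(n_points, n_actuators))

# Surface error disturbance expressed as wavefront error: a correctable
# part plus a component outside the actuators' span.
wfe = influence @ rng.normal(size=n_actuators) + rng.normal(0, 0.05, n_points)

# Least-squares actuator commands minimizing the residual WFE.
cmd, *_ = np.linalg.lstsq(influence, wfe, rcond=None)
residual = wfe - influence @ cmd

# The residual RMS is the fitting error estimate reported with the solution.
print(f"input WFE RMS   : {np.std(wfe):.4f}")
print(f"residual WFE RMS: {np.std(residual):.4f}")
```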

  3. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
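
    The distribution-fitting-plus-goodness-of-fit workflow can be sketched as follows, with synthetic heavy-tailed errors standing in for the Western Wind and Solar Integration Study data and a Kolmogorov-Smirnov statistic as the metric; the candidate families here are illustrative, not the report's exact set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic wind-power forecast errors: heavier-tailed than Gaussian,
# as operational error datasets often are (parameters invented).
errors = stats.t.rvs(df=4, scale=0.05, size=5000, random_state=rng)

candidates = {
    "normal":   stats.norm,
    "logistic": stats.logistic,
    "cauchy":   stats.cauchy,
}
for name, dist in candidates.items():
    params = dist.fit(errors)                       # maximum-likelihood fit
    ks = stats.kstest(errors, dist.name, args=params)
    print(f"{name:9s} KS statistic = {ks.statistic:.4f}")
```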

  4. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
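
    The contrast between the two update rules is easy to see in code. In the sketch below, a Rescorla-Wagner-style rule stands in for TER and a per-cue delta rule for LER during compound (AB+) training; the learning rate and trial count are arbitrary, and this is only a caricature of the models compared in the paper.

```python
import numpy as np

# Two cues A and B trained in compound (AB+): both present, outcome = 1.
n_trials, lr = 100, 0.1
w_ter = np.zeros(2)   # total-error (Rescorla-Wagner-style) weights
w_ler = np.zeros(2)   # local-error weights

for _ in range(n_trials):
    outcome = 1.0
    # TER: each cue's update uses the error of the summed prediction.
    w_ter += lr * (outcome - w_ter.sum())
    # LER: each cue's update uses that cue's own prediction error.
    w_ler += lr * (outcome - w_ler)

print("TER weights after AB+ training:", w_ter.round(3))  # share the outcome
print("LER weights after AB+ training:", w_ler.round(3))  # each approach 1
```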

  5. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and the failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
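
    A forward simulation makes the model's ingredients concrete. The sketch below generates degradation paths with a power-law transformed time scale, unit-to-unit drift variation, Brownian noise, and additive measurement error, then reads off empirical first hitting times for a threshold; every parameter value is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Generalized Wiener degradation X(t) = drift * L(t) + sigma_B * B(L(t)),
# observed as Y = X + measurement error, with time scale L(t) = t^q.
mu, sigma_b, sigma_eps, q = 0.8, 0.3, 0.2, 1.2
t = np.linspace(0, 10, 101)
Lt = t ** q

n_units, paths = 5, []
for _ in range(n_units):
    # Unit-to-unit variation: each unit draws its own drift (toy prior).
    drift = rng.normal(mu, 0.1)
    dB = rng.normal(0, sigma_b * np.sqrt(np.diff(Lt)))
    x = drift * Lt + np.concatenate([[0.0], np.cumsum(dB)])
    y = x + rng.normal(0, sigma_eps, size=t.size)  # measurement error
    paths.append(y)

# Empirical first-hitting-time estimate for a failure threshold.
threshold = 6.0
fht = [t[np.argmax(p > threshold)] if (p > threshold).any() else np.inf
       for p in paths]
print("observed first hitting times:", np.round(fht, 2))
```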

  6. Measurement of the magnetic field errors on TCV

    International Nuclear Information System (INIS)

    Piras, F.; Moret, J.-M.; Rossel, J.X.

    2010-01-01

    A set of 24 saddle loops is used on the Tokamak à Configuration Variable (TCV) to measure the radial magnetic flux at different toroidal and vertical positions. The new system is calibrated together with the standard magnetic diagnostics on TCV. Based on the results of this calibration, the effective current in the poloidal field coils and their position is computed. These corrections are then used to compute the distribution of the error field inside the vacuum vessel for a typical TCV discharge. Since the saddle loops measure the magnetic flux at different toroidal positions, the non-axisymmetric error field is also estimated and correlated to a shift or a tilt of the poloidal field coils.

  7. Distributed Surface Force

    Science.gov (United States)

    2014-06-01

    … for a total of 313 (Department of the Navy N8 Department 2013). This 306-ship plan includes 12 SSBNs, 48 SSNs, 11 aircraft carriers, 88 cruisers and …

  8. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients. Of the error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  9. Electric Field Distribution and Switching Impulse Discharge under Shield Ball Surface Scratch Defect in an UHVDC Hall

    Directory of Open Access Journals (Sweden)

    Jianghai Geng

    2018-05-01

    The dimensions and surface state of shielding fittings in ultra high voltage direct current (UHVDC) converter station valve halls have a great influence on their surface electric field and switching impulse characteristics, which are important parameters for confirming the air gap distance in the valve hall. The impulse discharge characteristics of Φ1.3 m shield balls with a 2 m sphere-plane gap length were tested under different scratch lengths, dent degrees and burrs around the scratches, at the UHVDC testing base of the Hebei Electric Power Research Institute. The discharge characteristics under the influence of surface scratches on the shield ball were obtained. The results demonstrate that the discharge voltage of the sphere-plane gap decreases obviously when there are unpolished scratches on the surface of the shield ball; however, when the scratches are polished, they have no significant impact on the discharge voltage. At the same time, a 1:1 full-scale impulse test model was established based on the finite element method. The electric field intensity and the space electric field distribution of the shield ball were obtained under the influence of scratches with or without burrs. The simulation results show that when the surface of the shield ball is smooth, the electric field distribution around it is even. The electric field intensity on the surface of the shield ball increases obviously when there are burrs around the scratches. When there are no burrs around the scratches, the length and depth of the scratches have no obvious effect on the field distribution. The calculation results are consistent with the test results. These results can provide an important basis for the design and optimization of shielding fittings, and technical support for their localization.

  10. JLab SRF Cavity Fabrication Errors, Consequences and Lessons Learned

    International Nuclear Information System (INIS)

    Marhauser, Frank

    2011-01-01

    Today, elliptical superconducting RF (SRF) cavities are preferably made from deep-drawn niobium sheets, as pursued at Jefferson Laboratory (JLab). The fabrication of a cavity incorporates various cavity cell machining, trimming and electron beam welding (EBW) steps, as well as surface chemistry, which add forming errors that create geometrical deviations of the cavity shape from its design. An analysis of in-house built cavities over recent years revealed significant errors in cavity production. Past fabrication flaws are described and the lessons learned were applied successfully to the most recent in-house series production of multi-cell cavities.

  11. Device for measuring the two-dimensional distribution of a radioactive substance on a surface

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A device is described by which, using a one-dimensional position-sensitive proportional counter tube, one can measure the two-dimensionally distributed radioactivity of a surface and, after computer processing, plot it to scale two-dimensionally or display it two-dimensionally on a monitor. (orig.)

  12. Reliability-Based Marginal Cost Pricing Problem Case with Both Demand Uncertainty and Travelers’ Perception Errors

    Directory of Open Access Journals (Sweden)

    Shaopeng Zhong

    2013-01-01

    Focusing on the first-best marginal cost pricing (MCP) in a stochastic network with both travel demand uncertainty and stochastic perception errors within the travelers' route choice decision processes, this paper develops a perceived risk-based stochastic network marginal cost pricing (PRSN-MCP) model. Numerical examples based on an integrated method combining the moment analysis approach, the fitting distribution method, and the reliability measures are also provided to demonstrate the importance and properties of the proposed model. The main finding is that ignoring the effect of travel time reliability and travelers' perception errors may significantly reduce the performance of the first-best MCP tolls, especially under high travelers' confidence and network congestion levels. The analysis results could also enhance our understanding of (1) the effect of stochastic perception error (SPE) on the perceived travel time distribution and the components of the road toll; (2) the effect of the road toll on the actual travel time distribution and its reliability measures; (3) the effect of the road toll on the total network travel time distribution and its statistics; and (4) the effect of the travel demand level and the value of reliability (VoR) level on the components of the road toll.

  13. Hessian matrix approach for determining error field sensitivity to coil deviations

    Science.gov (United States)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
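
    Setting aside FOCUS's exact derivatives, the idea can be demonstrated with a finite-difference Hessian of a toy cost function: the eigen-decomposition separates stiff directions (coil deviations the error field is most sensitive to) from soft ones. The cost function below is a hypothetical stand-in, not the surface integral of normalized normal field errors.

```python
import numpy as np

# Toy cost standing in for the coil cost function as a function of
# coil-geometry parameters x (deliberately anisotropic).
def cost(x):
    return 0.5 * x @ np.diag([100.0, 1.0, 0.01]) @ x + 0.1 * np.sin(x).sum()

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

x0 = np.zeros(3)
H = hessian_fd(cost, x0)
evals, evecs = np.linalg.eigh(H)

# Large eigenvalues mark deviation directions needing the tightest
# construction tolerances; small ones mark directions that barely matter.
for lam, v in zip(evals[::-1], evecs[:, ::-1].T):
    print(f"eigenvalue {lam:10.3f}  direction {np.round(v, 3)}")
```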

  14. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed by examining the error frequency and applying the analysis of variance method from mathematical statistics. Determination of the measured data accuracy and of the difficult-to-measure parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also covered in the paper. This paper analyzes the measured data based on error frequency and, in this way, provides reference elements to promote the development of the garment industry.

  15. An iterative model for the steady state current distribution in oxide-confined vertical-cavity surface-emitting lasers (VCSELs)

    Science.gov (United States)

    Chuang, Hsueh-Hua

    The purpose of this dissertation is to develop an iterative model for the analysis of the current distribution in vertical-cavity surface-emitting lasers (VCSELs) using a circuit network modeling approach. This iterative model divides the VCSEL structure into numerous annular elements and uses a circuit network consisting of resistors and diodes. The measured sheet resistance of the p-distributed Bragg reflector (DBR), the measured sheet resistance of the layers under the oxide layer, and two empirical adjustable parameters are used as inputs to the iterative model to determine the resistance of each resistor. The two empirical values are related to the anisotropy of the resistivity of the p-DBR structure. The spontaneous current, stimulated current, and surface recombination current are accounted for by the diodes. The lateral carrier transport in the quantum well region is analyzed using drift and diffusion currents. The optical gain is calculated as a function of wavelength and carrier density from fundamental principles. The predicted threshold current densities for these VCSELs match the experimentally measured current densities over the wavelength range of 0.83 μm to 0.86 μm with an error of less than 5%. This model includes the effects of the resistance of the p-DBR mirrors, the oxide current-confining layer and spatial hole burning. Our model shows that higher sheet resistance under the oxide layer reduces the threshold current, but also reduces the current range over which single transverse mode operation occurs. The spatial hole burning profile depends on the lateral drift and diffusion of carriers in the quantum wells but is dominated by the voltage drop across the p-DBR region. To my knowledge, for the first time, the drift current and the diffusion current are treated separately. Previous work uses an ambipolar approach, which underestimates the total charge transferred in the quantum well region, especially under the oxide region. However, the total …

  16. Optimized method for manufacturing large aspheric surfaces

    Science.gov (United States)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are used ever more widely in modern optical systems because of their ability to correct aberrations, enhance image quality, enlarge the field of view, and extend the range of effect, all while reducing the weight and volume of the system. As optical technology develops, the demand for large-aperture, high-precision aspheric surfaces grows more pressing. The original computer-controlled optical surfacing (CCOS) technique cannot meet the required precision and machining efficiency, a problem that has received considerable attention from researchers. Addressing the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward, in which subsurface damage (SSD), full-aperture errors, and the full band of frequency errors are all kept under control. Shallower SSD can be obtained by using a low-hardness tool and fine abrasive grains in the grinding process. For full-aperture error control, edge effects can be managed by using smaller tools together with a compensation model based on the material removal function. For control across the full frequency band, low-frequency errors are corrected with the optimized material removal function, while medium- and high-frequency errors are suppressed using the uniform-removal principle. With this optimized method, a K9 glass paraboloid mirror reached an accuracy of 0.055 waves RMS (where one wave is 0.6328 μm) in a short time. The results show that the optimized method can effectively guide the manufacturing of large aspheric surfaces.
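
    The low-frequency correction step rests on the standard CCOS relation that predicted removal is the convolution of the tool's removal function with the dwell-time map. The sketch below illustrates that relation with entirely assumed quantities (a Gaussian removal function, a cosine error profile, a naive dwell estimate); it is a schematic of the principle, not the authors' process.

```python
# Schematic CCOS step: removal = (removal function) * (dwell time),
# applied to a low-frequency error profile. All numbers are assumed.
import numpy as np

x = np.linspace(-50.0, 50.0, 501)               # position across part (mm)
error = 100e-9 * np.cos(2 * np.pi * x / 40.0)   # low-frequency error (m)
error -= error.min()                            # removal must be >= 0

sigma = 5.0                                     # tool footprint (mm), assumed
tif = np.exp(-x**2 / (2 * sigma**2))            # Gaussian removal function
tif /= tif.sum()                                # normalized kernel

peak_rate = 1e-9                                # removal rate (m/s), assumed
dwell = error / peak_rate                       # naive dwell estimate (s)
removal = np.convolve(dwell, tif, mode="same") * peak_rate
residual = error - removal

print(f"initial PV: {np.ptp(error) * 1e9:.0f} nm, "
      f"residual PV: {np.ptp(residual) * 1e9:.0f} nm")
# The residual that survives is dominated by frequencies near and above
# the tool size -- the motivation for the uniform-removal step.
```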

  17. Biogeographical distribution and diversity of bacterial communities in surface sediments of the South China Sea.

    Science.gov (United States)

    Li, Tao; Wang, Peng

    2013-05-01

    This paper investigates the features of bacterial communities in surface sediments of the South China Sea (SCS). In particular, the biogeographical distribution patterns and phylogenetic diversity of bacteria in sediments collected from a coral reef platform, a continental slope, and a deep-sea basin were determined. Bacterial diversity was assessed from 16S rRNA genes, and 18 phylogenetic groups were identified in the bacterial clone library. Planctomycetes, Deltaproteobacteria, candidate division OP11, and Alphaproteobacteria made up the majority of the bacteria in the samples, accounting on average for 16%, 15%, 12%, and 9% of clones, respectively. By comparison, the bacterial communities found in the SCS surface sediments differed significantly from previously observed deep-sea bacterial communities. The research also emphasizes that geographical factors shape the biogeographical distribution patterns of bacterial communities: canonical correspondence analyses showed that the percentage of sand by weight and the water depth are important factors affecting bacterial community composition. The study therefore highlights the importance of adequately determining the relationship between geographical factors and the distribution of bacteria in the world's seas and oceans.
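
    The flavour of such an analysis can be conveyed with a much simpler surrogate. The hedged sketch below (invented abundances and depths, Bray-Curtis dissimilarity in place of a full canonical correspondence analysis) merely shows the style of test: does community dissimilarity track differences in water depth?

```python
# Simplified surrogate for the depth-vs-community analysis: correlate
# pairwise Bray-Curtis dissimilarity with depth difference (invented data).
import numpy as np
from scipy.spatial.distance import braycurtis
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
depths = np.array([15.0, 300.0, 1200.0, 2500.0, 4000.0])  # m, assumed
# Invented relative abundances of six taxa at five stations, with a
# depth-dependent drift to mimic a depth-structured community.
base = rng.random((5, 6))
trend = np.outer(depths / depths.max(), np.linspace(-0.5, 0.5, 6))
abund = np.clip(base + trend, 0.01, None)
abund /= abund.sum(axis=1, keepdims=True)

dd, bc = [], []
for i in range(5):
    for j in range(i + 1, 5):
        dd.append(abs(depths[i] - depths[j]))
        bc.append(braycurtis(abund[i], abund[j]))

r, p = pearsonr(dd, bc)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```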

  18. Teamwork and clinical error reporting among nurses in Korean hospitals.

    Science.gov (United States)

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales: team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean teamwork score was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence interval [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.
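
    The reported association has the standard logistic-regression form, in which the odds ratio and its confidence interval come from exponentiating the fitted coefficient. The sketch below reproduces that calculation on simulated data; the sample size, score distribution, and effect size are all assumptions, not the study's data.

```python
# Hedged sketch: logistic regression of error reporting on a team
# communication score, recovering the OR and 95% CI (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
communication = rng.normal(3.5, 0.6, n)        # 1-5 subscale score, assumed
logit_p = -1.0 + 0.6 * (communication - 3.5)   # assumed true effect
reported = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(communication)
fit = sm.Logit(reported, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])             # OR per unit score
ci_low, ci_high = np.exp(fit.conf_int()[1])    # 95% CI for the OR
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```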

  19. Distribution of 137Cs in surface seawater and sediment around Sabah's Sulu-Sulawesi Sea

    International Nuclear Information System (INIS)

    Mohd Izwan Abdul Aziz; Ahmad Sanadi Abu Bakar; Yii, Mei Wo; Nurrul Assyikeen Jaffary; Zaharudin Ahmad

    2010-01-01

    The distribution of 137Cs in surface seawater and sediment around Sabah's Sulu-Sulawesi Sea was studied during the Ekspedisi Pelayaran Saintifik Perdana (EPSP) cruise in July 2009. Sixteen sampling locations were identified for surface seawater and twenty-five for sediment in Sabah's Sulu-Sulawesi Sea. Large volumes of seawater were collected, and a co-precipitation technique was employed to concentrate the caesium content, with a known amount of 134Cs tracer added as the yield determinant. A grab sampler was used to collect the surface sediment samples. The caesium precipitate and sediment were dried and finely ground before being counted on a gamma-ray spectrometry system at 661 keV. The 137Cs activity was found to range from 1.73 Bq/m³ to 5.50 Bq/m³ in surface seawater and from 1.15 Bq/kg to 4.53 Bq/kg in sediment. (author)
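
    For readers unfamiliar with the counting step, the activity follows from the standard gamma-spectrometry relation A = N_net / (t · ε · I_γ · Y), with the 134Cs tracer fixing the chemical recovery Y. The numbers below are purely illustrative assumptions, not values from this survey.

```python
# Schematic conversion of a 661.7 keV peak to a 137Cs activity
# concentration; every input value here is an assumption.
net_counts  = 1200.0   # background-subtracted counts in the 137Cs peak
live_time   = 60000.0  # counting time (s)
efficiency  = 0.025    # detector efficiency at 661.7 keV, assumed
gamma_prob  = 0.851    # emission probability of the 661.7 keV line
recovery    = 0.85     # chemical yield from the 134Cs tracer, assumed
volume_m3   = 0.3      # seawater sample volume (300 L), assumed

activity = net_counts / (live_time * efficiency * gamma_prob * recovery)
print(f"activity: {activity:.2f} Bq, "
      f"concentration: {activity / volume_m3:.2f} Bq/m3")
```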

  20. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
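
    The mechanism at the centre of the proof is easy to picture: the classical error-correction messages (the syndrome) are themselves one-time-pad encrypted with a pre-shared secret string, so they reveal nothing about the raw key. The toy sketch below illustrates just that XOR step, with simple block parities standing in for a real Cascade syndrome; it is a didactic illustration, not a secure QKD implementation.

```python
# Toy illustration: one-time-pad encryption of an error-correction
# syndrome (block parities stand in for a real Cascade syndrome).
import secrets

def parity_syndrome(bits, block_size=4):
    """Parity of each block of the raw key."""
    return [sum(bits[i:i + block_size]) % 2
            for i in range(0, len(bits), block_size)]

raw_key  = [secrets.randbelow(2) for _ in range(16)]  # Alice's raw key
syndrome = parity_syndrome(raw_key)

# One-time pad: XOR with fresh bits of the pre-shared secret string.
pad        = [secrets.randbelow(2) for _ in range(len(syndrome))]
ciphertext = [s ^ p for s, p in zip(syndrome, pad)]   # sent to Bob

# Bob, holding the same pad, recovers the syndrome exactly; an
# eavesdropper without the pad sees uniformly random bits.
recovered = [c ^ p for c, p in zip(ciphertext, pad)]
assert recovered == syndrome
print(ciphertext, recovered)
```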