WorldWideScience

Sample records for improved accuracy based

  1. Hybrid Indoor-Based WLAN-WSN Localization Scheme for Improving Accuracy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Zahid Farid

    2016-01-01

Full Text Available In indoor environments, WiFi received signal strength (RSS)-based localization is sensitive to various indoor fading effects and noise during transmission, which are the main causes of localization errors that affect its accuracy. In view of those fading effects, positioning systems based on a single technology are ineffective at performing accurate localization. For this reason, the trend is toward hybrid positioning systems (combinations of two or more wireless technologies) in indoor/outdoor localization scenarios for better position accuracy. This paper presents a hybrid technique for indoor localization that adopts fingerprinting approaches in both WiFi and Wireless Sensor Networks (WSNs). This model exploits machine learning, in particular Artificial Neural Network (ANN) techniques, for position calculation. The experimental results show that the proposed hybrid system improved the accuracy, reducing the average distance error to 1.05 m by using the ANN. Applying a Genetic Algorithm (GA)-based optimization technique did not yield any further improvement in accuracy. Compared to the performance of GA optimization, the nonoptimized ANN performed better in terms of accuracy, precision, stability, and computational time. These results show that the proposed hybrid technique is promising for achieving better accuracy in real-world positioning applications.
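
The record above trains an ANN on RSS fingerprints. As a minimal illustration of the fingerprinting idea only (a nearest-fingerprint matcher, not the paper's ANN, over a made-up three-transmitter radio map):

```python
import math

def locate(fingerprints, rss):
    """Return the recorded position whose stored RSS vector is closest
    (Euclidean distance) to the observed RSS vector."""
    best_pos, best_d = None, float("inf")
    for pos, ref_rss in fingerprints.items():
        d = math.dist(ref_rss, rss)
        if d < best_d:
            best_pos, best_d = pos, d
    return best_pos

# Hypothetical radio map: position (m) -> RSS from three transmitters (dBm)
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -45, -75],
    (0.0, 5.0): [-75, -72, -42],
}
print(locate(radio_map, [-68, -47, -74]))  # → (5.0, 0.0)
```

Real systems interpolate between fingerprints (or, as here, learn a regression from RSS to coordinates), which is where the ANN comes in.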

  2. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates both to the absolute position in a specific coordinate system and to the relation with neighborhood features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is well suited to positional accuracy improvement of legacy spatial datasets.
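
The paper's angular-based LSA is considerably more involved, but the core idea of fitting a legacy dataset to higher-accuracy control points by least squares can be sketched with the simplest possible adjustment model, a pure translation (all coordinates below are hypothetical):

```python
def lsq_translation(legacy, control):
    """Least-squares shift (tx, ty) minimizing the sum of squared residuals
    between shifted legacy points and their control counterparts.
    For a pure translation the solution is the mean coordinate difference."""
    n = len(legacy)
    tx = sum(c[0] - l[0] for l, c in zip(legacy, control)) / n
    ty = sum(c[1] - l[1] for l, c in zip(legacy, control)) / n
    return tx, ty

# Hypothetical parcel corners: legacy coordinates vs GNSS control (metres)
legacy  = [(100.0, 200.0), (150.0, 200.0), (150.0, 260.0)]
control = [(101.2, 198.9), (151.1, 199.0), (151.3, 258.9)]
print(lsq_translation(legacy, control))  # shift that best fits the control
```

A real cadastral adjustment would add rotation, scale, and the angular (bearing) observations the abstract mentions, all solved in one LSA.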

  3. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    Directory of Open Access Journals (Sweden)

    Mingjun Deng

    2017-12-01

Full Text Available The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration accurately calibrates the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data; this is highly dependent on the control data of the calibration field, and it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without corner reflectors and high-precision digital elevation models, thus improving the absolute positioning accuracy of GF-3 imagery. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that the method can achieve a calibration accuracy as high as that of the conventional field calibration method.

  4. Improving Accuracy of Processing Through Active Control

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

Full Text Available An important task of modern mathematical statistics, with its methods based on probability theory, is the scientific estimation of measurement results. Control always incurs certain costs, and when control is ineffective and a customer receives defective products, these costs are significantly higher because of part recalls. When machining parts, errors shift the scatter range of part dimensions toward the tolerance limit. Improving processing accuracy and avoiding defective products involves reducing the error components in machining, i.e., improving the accuracy of the machine and tool, the tool life, the rigidity of the system, and the accuracy of adjustment. It is also necessary to readjust the machine at given intervals. To improve accuracy and machining rate, various in-process gaging devices and controlled machining that uses adaptive control systems for process monitoring are currently becoming widely used. The accuracy improvement in this case comes from compensating the majority of technological errors. In-cycle measuring sensors (sensors of active control) can improve processing accuracy by one or two quality classes and enable simultaneous operation of several machines. Efficient use of in-cycle measuring sensors requires developing methods to control accuracy through appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they include data on the change in the last several measured values of the parameter under control.

  5. Geometric Positioning Accuracy Improvement of ZY-3 Satellite Imagery Based on Statistical Learning Theory

    Directory of Open Access Journals (Sweden)

    Niangang Jiao

    2018-05-01

Full Text Available With the increasing demand for high-resolution remote sensing images for mapping and monitoring the Earth’s environment, geometric positioning accuracy improvement plays a significant role in the image preprocessing step. Based on statistical learning theory, we propose a new method to improve geometric positioning accuracy without ground control points (GCPs). Multi-temporal images from the ZY-3 satellite are tested, and the bias-compensated rational function model (RFM) is applied as the block adjustment model in our experiment. A simple and stable weighting strategy and the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in the field of compressive sensing, are improved and utilized to define the normal equation matrix and to solve it. Then, the residual errors after traditional block adjustment are acquired and tested with the newly proposed inherent error compensation model based on statistical learning theory. The final results indicate that the geometric positioning accuracy of ZY-3 satellite imagery can be improved greatly with the proposed method.
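
FISTA itself is a generic proximal-gradient solver; a minimal sketch of it on a toy l1-regularized problem (minimizing 0.5·||x − b||² + λ·||x||₁, not the paper's block adjustment normal equations) might look like:

```python
def soft_threshold(v, t):
    """Prox operator of t*||.||_1: shrink each entry toward zero by t."""
    return [0.0 if abs(x) <= t else (x - t if x > 0 else x + t) for x in v]

def fista(b, lam, step=1.0, iters=100):
    """FISTA for min_x 0.5*||x - b||^2 + lam*||x||_1.
    Gradient of the smooth part is (x - b); momentum uses the usual
    t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2 schedule."""
    x = [0.0] * len(b)
    y = x[:]
    t = 1.0
    for _ in range(iters):
        grad = [yi - bi for yi, bi in zip(y, b)]
        x_new = soft_threshold([yi - step * g for yi, g in zip(y, grad)],
                               step * lam)
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = [xn + (t - 1) / t_new * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

print(fista([3.0, -0.2, 1.5], lam=0.5))  # → [2.5, 0.0, 1.0]
```

For this separable problem the solution is just soft-thresholding of b; entries smaller than λ in magnitude are shrunk to zero, which is the sparsity-promoting behavior compressive sensing relies on.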

  6. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  7. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  8. Improvement of vision measurement accuracy using Zernike moment based edge location error compensation model

    International Nuclear Information System (INIS)

    Cui, J W; Tan, J B; Zhou, Y; Zhang, H

    2007-01-01

This paper presents a Zernike moment based model developed to compensate edge location errors for further improvement of vision measurement accuracy, by compensating the slight changes resulting from sampling and establishing mathematical expressions for the subpixel location of theoretical and actual edges that are either perpendicular to or at an angle with the X-axis. Experimental results show that the proposed model can be used to achieve a vision measurement accuracy of up to 0.08 pixel while the measurement uncertainty is less than 0.36 μm. It is therefore concluded that, as a model which achieves a significant improvement of vision measurement accuracy, the proposed model is especially suitable for edge location in images with low contrast.

  9. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    Science.gov (United States)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

Temperature is usually considered a nuisance fluctuation in near-infrared spectral measurement. Chemometric methods have been extensively studied to correct the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies showed that prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.
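
As a rough sketch of the idea behind selecting calibration samples by temperature coverage (the nearest-to-target rule below is a guessed heuristic, not the authors' exact MTCS algorithm, and the data are invented):

```python
def select_by_temperature(samples, k):
    """Pick k calibration samples whose temperatures are spread evenly
    across the observed range: for each of k evenly spaced target
    temperatures, take the remaining sample closest to it."""
    temps = sorted(set(t for t, _ in samples))
    lo, hi = temps[0], temps[-1]
    targets = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    chosen = []
    pool = list(samples)
    for tgt in targets:
        best = min(pool, key=lambda s: abs(s[0] - tgt))
        chosen.append(best)
        pool.remove(best)
    return chosen

# (temperature °C, spectrum id) pairs; ids stand in for real spectra
samples = [(20, "a"), (22, "b"), (25, "c"), (30, "d"), (31, "e"), (40, "f")]
print(select_by_temperature(samples, 3))  # → [(20, 'a'), (30, 'd'), (40, 'f')]
```

The point, per the abstract, is that a calibration set spanning the temperature range systematically beats random sampling when temperature carries chemical information.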

  10. Optical vector network analyzer with improved accuracy based on polarization modulation and polarization pulling.

    Science.gov (United States)

    Li, Wei; Liu, Jian Guo; Zhu, Ning Hua

    2015-04-15

    We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.

  11. Accuracy improvement in the TDR-based localization of water leaks

    Directory of Open Access Journals (Sweden)

    Andrea Cataldo

Full Text Available A time domain reflectometry (TDR)-based system for the localization of water leaks has recently been developed by the authors. This system, which employs wire-like sensing elements installed along underground pipes, has proven immune to the limitations that affect traditional, acoustic leak-detection systems. Starting from the positive results obtained thus far, in this work an improvement of this TDR-based system is proposed. More specifically, the possibility of employing a low-cost, water-absorbing sponge placed around the sensing element to enhance the accuracy of leak localization is addressed. To this purpose, laboratory experiments were carried out mimicking a water leakage condition, and two sensing elements (one embedded in a sponge and one without) were comparatively used to identify the position of the leak through TDR measurements. Results showed that, thanks to the water retention capability of the sponge (which keeps the leaked water more localized), the sensing element embedded in the sponge leads to a higher accuracy in the evaluation of the position of the leak. Keywords: Leak localization, TDR, Time domain reflectometry, Water leaks, Underground water pipes
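
The TDR localization principle itself is simple: the wetted section of the sensing element is an impedance discontinuity, and its distance follows from the round-trip reflection delay and the line's propagation velocity. A sketch with assumed numbers (the permittivity and delay below are illustrative, not values from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def leak_position(delay_s, rel_permittivity):
    """Distance along the sensing element to the impedance discontinuity
    (the wetted section), from the round-trip reflection delay."""
    v = C / rel_permittivity ** 0.5   # propagation velocity in the line
    return v * delay_s / 2            # halve: the pulse travels out and back

# e.g. an 80 ns round-trip delay on a line with effective permittivity 2.25
d = leak_position(80e-9, 2.25)
print(round(d, 2))  # → 7.99 (metres)
```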

  12. Improving shuffler assay accuracy

    International Nuclear Information System (INIS)

    Rinard, P.M.

    1995-01-01

Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on the distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches were studied to determine the distributions and are described briefly. Implementation of this correction is anticipated on an existing shuffler next year.

  13. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

This paper proposes a method of background noise estimation based on the tensor product expansion with a median and a Monte Carlo simulation. We have shown that a tensor product expansion with the absolute error method is effective for estimating background noise; however, background noise might not be estimated properly by the conventional method. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  14. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already located fingerprints tends to be expensive both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to consume much memory and CPU time in such cases, and so does obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations of the training fingerprints that reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with the best results. We compare our proposal with other systems in the literature and draw conclusions about the improvements obtained. Moreover, some techniques, such as filtering nonstable access points to improve accuracy, are introduced, studied, and validated.
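
The access-point filtering mentioned at the end can be illustrated with a simple variance threshold (the threshold value, the stability criterion, and the scan data are all hypothetical; the paper's exact rule may differ):

```python
from statistics import pstdev

def stable_access_points(scans, max_std=3.0):
    """Keep only APs whose RSSI standard deviation across scans is small;
    APs with wildly fluctuating RSSI add noise to fingerprint matching."""
    by_ap = {}
    for scan in scans:
        for ap, rssi in scan.items():
            by_ap.setdefault(ap, []).append(rssi)
    return {ap for ap, vals in by_ap.items() if pstdev(vals) <= max_std}

# Three hypothetical scans at the same spot (AP name -> RSSI in dBm)
scans = [
    {"ap1": -50, "ap2": -60, "ap3": -70},
    {"ap1": -51, "ap2": -75, "ap3": -69},
    {"ap1": -49, "ap2": -58, "ap3": -71},
]
print(sorted(stable_access_points(scans)))  # ap2 fluctuates and is dropped
```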

  15. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    OpenAIRE

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course corr...

  16. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

Full Text Available The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors for individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; and (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
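
The simple linear regression calibration favored by the authors amounts to fitting, per land-use class, a line from GDEM elevations to reference elevations and applying it to new readings. A sketch with invented numbers (the closed-form ordinary least-squares fit, not the paper's actual coefficients):

```python
def fit_linear(xs, ys):
    """Ordinary least-squares slope a and intercept b for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical pairs for one land-use class:
# (ASTER GDEM elevation, field-measured elevation), metres
gdem  = [10.0, 20.0, 30.0, 40.0]
field = [ 8.5, 18.0, 27.5, 37.0]
a, b = fit_linear(gdem, field)
calibrated = a * 25.0 + b  # correct a new GDEM reading of 25 m
print(round(a, 3), round(b, 3), round(calibrated, 3))  # → 0.95 -1.0 22.75
```

Fitting one such line per land-use class is exactly what makes the method "simple" compared to the multiple regression variant that also needs topographic predictors.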

  17. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  18. Statewide Quality Improvement Initiative to Reduce Early Elective Deliveries and Improve Birth Registry Accuracy.

    Science.gov (United States)

    Kaplan, Heather C; King, Eileen; White, Beth E; Ford, Susan E; Fuller, Sandra; Krew, Michael A; Marcotte, Michael P; Iams, Jay D; Bailit, Jennifer L; Bouchard, Jo M; Friar, Kelly; Lannon, Carole M

    2018-04-01

To evaluate the success of a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data accuracy rapidly and at scale in Ohio. Between February 2013 and March 2014, participating hospitals were involved in a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data. This initiative was designed as a learning collaborative model (group webinars and a single face-to-face meeting) and included individual quality improvement coaching. It was implemented using a stepped wedge design with hospitals divided into three balanced groups (waves) participating in the initiative sequentially. Birth registry data were used to assess hospital rates of nonmedically indicated inductions at less than 39 weeks of gestation. Comparisons were made between groups participating and those not participating in the initiative at two time points. To measure birth registry accuracy, hospitals conducted monthly audits comparing birth registry data with the medical record. Associations were assessed using generalized linear repeated measures models accounting for time effects. Seventy of 72 (97%) eligible hospitals participated. Based on birth registry data, nonmedically indicated inductions at less than 39 weeks of gestation declined in all groups with implementation (wave 1: 6.2-3.2%). While participating in the initiative, hospitals saw significant decreases in rates of early elective deliveries as compared with wave 3 (control; P=.018). All waves had significant improvement in birth registry accuracy (wave 1: 80-90%, P=.017; wave 2: 80-100%, P=.002; wave 3: 75-100%). The initiative enabled statewide spread of change strategies to decrease early elective deliveries and improve birth registry accuracy over 14 months and could be used for rapid dissemination of other evidence-based obstetric care practices across states or hospital systems.

  19. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

MIHAELA BRATU (SIMIONESCU)

    2012-12-01

Full Text Available In this study, alternative forecasts of the USA unemployment rate made by four institutions (the International Monetary Fund (IMF), the Organisation for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO), and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions on the forecasting horizon 201-2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil’s U1 statistic and a new method that has not been used before in this context in the literature. Multi-criteria ranking was applied to build a hierarchy of the institutions regarding accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' predictions is a suitable strategy for improving the accuracy of the IMF and OECD forecasts under all combination schemes, the INV scheme being the best. Filtering and smoothing the original predictions, based on the Hodrick-Prescott filter and the Holt-Winters technique respectively, is a good strategy for improving only the BC expectations. The proposed strategies to improve accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to improving the quality of the decision-making process.
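
The accuracy measures named in the abstract are standard; a sketch computing ME, MSE, RMSE, and Theil's U1 on toy data (using one common definition of U1: RMSE divided by the sum of the quadratic means of the two series; the numbers are invented):

```python
import math

def accuracy_measures(actual, pred):
    """Mean error, mean squared error, RMSE, and Theil's U1 (in [0, 1],
    0 = perfect forecast) for paired actual/predicted series."""
    n = len(actual)
    errs = [a - p for a, p in zip(actual, pred)]
    me = sum(errs) / n
    mse = sum(e * e for e in errs) / n
    rmse = math.sqrt(mse)
    u1 = rmse / (math.sqrt(sum(a * a for a in actual) / n)
                 + math.sqrt(sum(p * p for p in pred) / n))
    return {"ME": me, "MSE": mse, "RMSE": rmse, "U1": u1}

actual = [5.0, 5.5, 6.0, 6.2]  # e.g. observed unemployment rate, %
pred   = [4.8, 5.6, 6.3, 6.0]  # a hypothetical institution's forecasts
m = accuracy_measures(actual, pred)
print({k: round(v, 4) for k, v in m.items()})
```

Ranking several institutions on all five measures at once is where the multi-criteria method of the paper comes in.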

  20. Improving orbit prediction accuracy through supervised machine learning

    Science.gov (United States)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
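
The integration of a physics-based predictor with a learned error model can be caricatured as follows: fit a model to past residuals (observed minus physics-predicted) and add the extrapolated residual to future physics predictions. The linear residual model and all numbers are illustrative, not the paper's ML approach:

```python
def learn_residual_model(times, physics_pred, observed):
    """Fit r(t) ≈ a*t + b to past prediction residuals by least squares."""
    res = [o - p for o, p in zip(observed, physics_pred)]
    n = len(times)
    mt, mr = sum(times) / n, sum(res) / n
    a = (sum((t - mt) * (r - mr) for t, r in zip(times, res))
         / sum((t - mt) ** 2 for t in times))
    return a, mr - a * mt

def corrected(physics_value, t, model):
    """Physics prediction plus the learned residual at epoch t."""
    a, b = model
    return physics_value + a * t + b

# Hypothetical along-track positions (km) with an error growing over time (h)
times    = [1, 2, 3, 4]
physics  = [100.0, 200.0, 300.0, 400.0]
observed = [100.1, 200.2, 300.3, 400.4]
model = learn_residual_model(times, physics, observed)
print(round(corrected(500.0, 5, model), 3))  # → 500.5
```

The paper's generalization claims amount to asking how far such a learned correction transfers: to held-out epochs of the same RSO, and to other RSOs with similar features.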

  1. Improving Accuracy of Processing by Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

Full Text Available When machining work-pieces, the scatter range of the work-piece dimensions is displaced toward the tolerance limit in response to errors. To improve machining accuracy and prevent defective products it is necessary to diminish the machining error components, i.e. to improve the accuracy of the machine tool, the tool life, the rigidity of the system, and the accuracy of adjustment. It is also necessary to provide on-machine adjustment after a certain time. However, an increasing number of readjustments reduces performance, and high machine and tool requirements lead to a significant increase in machining cost. To improve accuracy and machining rate, various devices of active control (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, now become widely used. The accuracy improvement in this case is reached by compensating the majority of technological errors. The sensors of active control can improve the accuracy of processing by one or two quality classes and allow simultaneous operation of several machines. For efficient use of sensors of active control it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. When using the proposed method, the first three members of the sequence of deviations remain unchanged, i.e. x'_1 = x_1, x'_2 = x_2, x'_3 = x_3. Then, for each subsequent i-th member of the sequence, a correction with a gain coefficient k is applied to the smoothed value x'_i, where x'_i is calculated as the average of the three previous members: x'_i = (x_{i-1} + x_{i-2} + x_{i-3})/3. As a criterion for the estimate of the control
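
One plausible reading of the moving-average rule in the abstract (the first three deviations pass through unchanged; each later value is replaced by the mean of the three preceding smoothed values) can be sketched as:

```python
def smooth(xs):
    """Three-point moving-average smoothing of a deviation sequence:
    the first three members are kept as-is; from the fourth on, each
    member is the mean of the three previous smoothed members."""
    out = list(xs[:3])
    for i in range(3, len(xs)):
        out.append((out[i - 1] + out[i - 2] + out[i - 3]) / 3)
    return out

# A sudden outlier deviation (10.0) is damped by the smoothing
print(smooth([1.0, 2.0, 3.0, 10.0, 10.0]))
```

In the control setting of the paper, the smoothed sequence (scaled by a gain coefficient) drives the machine adjustment instead of the raw, noisy measurements.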

  2. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Directory of Open Access Journals (Sweden)

    Kunlin Cao

    2012-01-01

Full Text Available Accurate pulmonary image registration is a challenging problem when the lungs undergo large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels, and we introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without adding the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness preserving cost function, the landmark and fissure positioning errors decreased approximately by 25% and 30%, respectively. The vesselness cost function effectively helped improve the registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested, and it also helped produce more consistent and more reliable patterns of regional tissue deformation.

  3. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR images was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then used in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26%, demonstrating that AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.
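The study's accuracy metric, RMSE between the real and predicted traces, and the percentage reduction it reports can be written down directly (the kernel density estimation predictor itself is not reproduced here):

```python
import math

def rmse(predicted, actual):
    """Root mean square error, the prediction-accuracy metric of the study."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))

def rmse_reduction_pct(guided, unguided):
    """Percentage reduction in RMSE of guided vs. unguided breathing."""
    return 100.0 * (unguided - guided) / unguided
```

For example, a guided RMSE of 0.74 against an unguided RMSE of 1.0 corresponds to the 26% average reduction quoted in the abstract.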

  4. Accuracy improvements of gyro-based measurement-while-drilling surveying instruments by a laser testing method

    Science.gov (United States)

    Li, Rong; Zhao, Jianhui; Li, Fan

    2009-07-01

Gyroscopes used as surveying sensors in the oil industry have been proposed as a good technique for measurement-while-drilling (MWD) to provide real-time monitoring of the position and the orientation of the bottom hole assembly (BHA). However, drifts in the measurements provided by the gyroscope might be prohibitive for long-term utilization of the sensor. Some usual methods, such as the zero velocity update procedure (ZUPT), introduced to limit these drifts are time-consuming and of limited effect. This study explored an in-drilling dynamic alignment (IDA) method for MWD systems that utilize gyroscopes. During a directional drilling process, there are some minutes in the rotary drilling mode when the drill bit combined with the drill pipe is rotated about the spin axis at a certain speed. This speed can be measured and used to determine and limit some drifts of the gyroscope, which contribute greatly to the deterioration in long-term performance. A novel laser assembly is designed on the wellhead to count the rotating cycles of the drill pipe. With this provided angular velocity of the drill pipe, drifts of gyroscope measurements are translated into another form that can be easily tested and compensated. That allows better and faster alignment and limited drifts during the navigation process, both of which can reduce long-term navigation errors, thus improving the overall accuracy of an INS-based MWD system. This article concretely explores the novel device on the wellhead designed to test the rotation of the drill pipe. It is based on laser testing, which is simple and inexpensive, adding only a laser emitter to the existing drilling equipment. Theoretical simulations and analytical approximations exploring the IDA idea have shown improvement in the accuracy of overall navigation and reduction in the time required to achieve convergence. Gyroscope accuracy along the axis is mainly improved. It is suggested to use the IDA idea in the rotary mode for alignment. Several other

  5. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

An improvement method for the pose accuracy of a robot manipulator using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, the addition of motion constraints, or the complicated procedures of traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
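A scalar Kalman measurement update illustrates the fusion step in miniature: two sensors observing the same coordinate are combined, weighted by their variances. The variances and readings below are illustrative assumptions; the paper's MOIFA weighting is not shown.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: fuse state estimate x
    (variance P) with measurement z (variance R)."""
    K = P / (P + R)                          # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Illustrative fusion of two sensors observing the same coordinate.
x, P = 0.0, 1.0                              # vague prior
x, P = kalman_update(x, P, 10.2, 0.5)        # visual-sensor position
x, P = kalman_update(x, P, 10.0, 0.1)        # angle-sensor-derived position
```

The fused estimate lands near the more precise second measurement, and the posterior variance is smaller than either sensor's alone.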

  6. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

Splitting on an attribute is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on removing irrelevant features during the establishment of the decision tree. This is a major problem in the decision tree classification process, called over-fitting, resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in building a classification model; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. From experiments conducted using the UCI cervical cancer data set with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework robustly enhances classification accuracy, with an accuracy rate of 90.70%.
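The PCA step can be illustrated in two dimensions, where the leading principal component has a closed form from the 2×2 covariance matrix. This is a minimal stand-in for the framework's PCA over 36 attributes, not the authors' implementation.

```python
import math

def first_pc_2d(points):
    """First principal component of 2-D data via the closed-form
    eigendecomposition of the 2x2 covariance matrix -- a minimal
    stand-in for the PCA feature-reduction step."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    if sxy == 0.0:                       # data already axis-aligned
        return (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2.0 * sxy))
    vx, vy = sxy, lam - sxx              # eigenvector of the top eigenvalue
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm
```

Projecting each sample onto this unit vector yields the single derived feature that preserves the most variance; the reduced features would then feed the C4.5 classifier.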

  7. Improving the accuracy of brain tumor surgery via Raman-based technology.

    Science.gov (United States)

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W; Sunney Xie, X; Orringer, Daniel A

    2016-03-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors.

  8. Improving coding accuracy in an academic practice.

    Science.gov (United States)

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects that were able to participate longitudinally. Statistical tests used: average coding accuracy for population; paired t test to assess improvement between 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  9. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

The traditional timing discrimination technique for laser rangefinding in remote sensing suffers from low measurement performance and large error, and cannot meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal several times. Then, timing information is sampled, and the timing points are fitted using algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by amplifying the received signal and fitting the parameters, and a timing accuracy of 4.63 ps is achieved.
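The core effect, that amplifying the received pulse moves a fixed-threshold crossing earlier in time, can be shown with a simple leading-edge discriminator on a synthetic rising edge (the waveform and threshold below are illustrative, not the paper's data):

```python
def threshold_crossing(t, v, thresh):
    """Linearly interpolated time at which waveform v first rises
    through `thresh` (simple leading-edge discrimination)."""
    for i in range(1, len(v)):
        if v[i - 1] < thresh <= v[i]:
            frac = (thresh - v[i - 1]) / (v[i] - v[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    return None

t = [0.1 * i for i in range(11)]                        # ns
edge = list(t)                                          # unit-slope rising edge
t1 = threshold_crossing(t, edge, 0.5)                   # original signal
t2 = threshold_crossing(t, [4 * v for v in edge], 0.5)  # amplified 4x
```

Here `t2 < t1`: the timing point moves forward with amplification, which is the behavior the abstract exploits before fitting the sampled timing points to estimate the minimum timing error.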

  10. Going Vertical To Improve the Accuracy of Atomic Force Microscopy Based Single-Molecule Force Spectroscopy.

    Science.gov (United States)

    Walder, Robert; Van Patten, William J; Adhikari, Ayush; Perkins, Thomas T

    2018-01-23

    Single-molecule force spectroscopy (SMFS) is a powerful technique to characterize the energy landscape of individual proteins, the mechanical properties of nucleic acids, and the strength of receptor-ligand interactions. Atomic force microscopy (AFM)-based SMFS benefits from ongoing progress in improving the precision and stability of cantilevers and the AFM itself. Underappreciated is that the accuracy of such AFM studies remains hindered by inadvertently stretching molecules at an angle while measuring only the vertical component of the force and extension, degrading both measurements. This inaccuracy is particularly problematic in AFM studies using double-stranded DNA and RNA due to their large persistence length (p ≈ 50 nm), often limiting such studies to other SMFS platforms (e.g., custom-built optical and magnetic tweezers). Here, we developed an automated algorithm that aligns the AFM tip above the DNA's attachment point to a coverslip. Importantly, this algorithm was performed at low force (10-20 pN) and relatively fast (15-25 s), preserving the connection between the tip and the target molecule. Our data revealed large uncorrected lateral offsets for 100 and 650 nm DNA molecules [24 ± 18 nm (mean ± standard deviation) and 180 ± 110 nm, respectively]. Correcting this offset yielded a 3-fold improvement in accuracy and precision when characterizing DNA's overstretching transition. We also demonstrated high throughput by acquiring 88 geometrically corrected force-extension curves of a single individual 100 nm DNA molecule in ∼40 min and versatility by aligning polyprotein- and PEG-based protein-ligand assays. Importantly, our software-based algorithm was implemented on a commercial AFM, so it can be broadly adopted. More generally, this work illustrates how to enhance AFM-based SMFS by developing more sophisticated data-acquisition protocols.
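The geometric error the authors remove can be sketched with simple trigonometry: if the tether is attached at a lateral offset, the molecule stretches along the hypotenuse, so vertical-only readings understate both extension and force. This is a back-of-envelope model of the problem statement; the paper's actual fix is the low-force tip-alignment algorithm, not a post-hoc correction.

```python
import math

def corrected_pull(z_ext, f_vert, lateral_offset):
    """Correct vertical-only AFM readings for a tether attached at a
    lateral offset: the true extension is the hypotenuse and the true
    force exceeds its measured vertical component by 1/cos(theta)."""
    true_ext = math.hypot(z_ext, lateral_offset)
    true_force = f_vert * true_ext / z_ext   # divide by cos(theta)
    return true_ext, true_force

# A 3-4-5 geometry: 300 nm of uncorrected lateral offset on a 400 nm pull.
ext, force = corrected_pull(400.0, 10.0, 300.0)
```

With the offsets the paper reports for 650 nm DNA (180 ± 110 nm), this geometry produces the sizable extension and force errors that motivate aligning the tip above the attachment point.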

  11. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Directory of Open Access Journals (Sweden)

    Toh Hiroyuki

    2008-04-01

    Full Text Available Abstract Background Structural alignment of RNAs is becoming important, since the discovery of functional non-coding RNAs (ncRNAs. Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for the cases with more than two sequences, the practical merit of structural alignment remains unclear as compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to the multiple alignments from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1 pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA and (2 a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other methods currently available in the sum-of-pairs score (SPS criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to or rather higher than those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.

  12. Multi-scale hippocampal parcellation improves atlas-based segmentation accuracy

    Science.gov (United States)

    Plassard, Andrew J.; McHugo, Maureen; Heckers, Stephan; Landman, Bennett A.

    2017-02-01

Known for its distinct role in memory, the hippocampus is one of the most studied regions of the brain. Recent advances in magnetic resonance imaging have allowed for high-contrast, reproducible imaging of the hippocampus. Typically, a trained rater takes 45 minutes to manually trace the hippocampus and delineate the anterior from the posterior segment at millimeter resolution. As a result, there has been a significant desire for automated and robust segmentation of the hippocampus. In this work we use a population of 195 atlases based on T1-weighted MR images with the left and right hippocampus delineated into the head and body. We initialize the multi-atlas segmentation to a region directly around each lateralized hippocampus to both speed up and improve the accuracy of registration. This initialization allows for incorporation of nearly 200 atlases, an accomplishment which would typically involve hundreds of hours of computation per target image. The proposed segmentation results in a Dice similarity coefficient over 0.9 for the full hippocampus. This result outperforms a multi-atlas segmentation using the BrainCOLOR atlases (Dice 0.85) and FreeSurfer (Dice 0.75). Furthermore, the head and body delineation resulted in a Dice coefficient over 0.87 for both structures. The head and body volume measurements also show high reproducibility on the Kirby 21 reproducibility population (R2 greater than 0.95). These results support the development of a robust tool for measurement of the hippocampus and other temporal lobe structures.
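The Dice similarity coefficient used throughout this record has a one-line definition over two sets of labeled voxels:

```python
def dice(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two sets
    of labeled voxel indices -- the overlap metric reported above."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A value of 1.0 means perfect overlap with the manual trace; the reported 0.9 for the full hippocampus means the automated and manual label sets share 90% of their combined volume in this sense.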

  13. Improving Accuracy of Dempster-Shafer Theory Based Anomaly Detection Systems

    Directory of Open Access Journals (Sweden)

    Ling Zou

    2014-07-01

Full Text Available While the Dempster-Shafer theory of evidence has been widely used in anomaly detection, some issues remain. The Dempster-Shafer theory of evidence trusts all evidence equally, which does not hold in a distributed-sensor ADS. Moreover, pieces of evidence are sometimes dependent on each other, which can lead to false alerts. We propose improvements incorporating two algorithms. A feature selection algorithm employs Gaussian Graphical Models to discover correlations between candidate features. A group of suitable ADS were selected to perform detection, and the detection results were sent to the fusion engine. In the weight estimation algorithm, information gain is applied to set a weight for every feature. A weighted Dempster-Shafer theory of evidence combines the detection results to achieve better accuracy. We evaluate our detection prototype through a set of experiments conducted with the standard benchmark Wisconsin Breast Cancer Dataset and real Internet traffic. Evaluations on the Wisconsin Breast Cancer Dataset show that our prototype can find the correlation in nine features and improve the detection rate without affecting the false positive rate. Evaluations on Internet traffic show that the weight estimation algorithm can improve the detection performance significantly.
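The fusion step rests on Dempster's rule of combination. A sketch over a two-hypothesis frame {normal, attack} (with ignorance mass on 'theta', the whole frame) shows how two agreeing detectors reinforce each other; the unweighted rule is shown here, whereas the paper additionally weights sources by information gain.

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {'normal', 'attack'} with the
    ignorance mass assigned to 'theta' (the whole frame)."""
    hyps = ('normal', 'attack', 'theta')
    combined, conflict = {h: 0.0 for h in hyps}, 0.0
    for a in hyps:
        for b in hyps:
            if a == 'theta':
                combined[b] += m1[a] * m2[b]     # theta ∩ X = X
            elif b == 'theta' or a == b:
                combined[a] += m1[a] * m2[b]
            else:                                # disjoint singletons
                conflict += m1[a] * m2[b]
    k = 1.0 - conflict                           # normalization constant
    return {h: v / k for h, v in combined.items()}

# Two detectors that both lean toward 'normal' reinforce that belief.
m = dempster_combine({'normal': 0.6, 'attack': 0.3, 'theta': 0.1},
                     {'normal': 0.7, 'attack': 0.2, 'theta': 0.1})
```

The combined mass on 'normal' (about 0.82) exceeds either detector's individual belief, which is the behavior the fusion engine relies on.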

  14. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    Science.gov (United States)

    Yang, Pao-Keng

    2011-09-01

We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of measured spectral reflectance by removing the spectral broadening effect caused by the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
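One plausible form of such an iterative de-broadening scheme is a Van Cittert-style iteration: repeatedly add back the difference between the measurement and the re-broadened current estimate. The LED lineshape kernel and step pattern below are assumptions for illustration, not the paper's algorithm.

```python
def broaden(spectrum, kernel):
    """Convolve a spectrum with a normalized LED lineshape (edge-clamped)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(spectrum)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(spectrum) - 1)
            acc += w * spectrum[idx]
        out.append(acc)
    return out

def sharpen(measured, kernel, n_iter=50):
    """Van Cittert-style iteration r <- r + (measured - broaden(r)),
    a plausible form of the interpolate-and-iterate scheme above."""
    r = list(measured)
    for _ in range(n_iter):
        b = broaden(r, kernel)
        r = [ri + (mi - bi) for ri, mi, bi in zip(r, measured, b)]
    return r
```

On a synthetic step-like reflectance broadened by a small kernel, the iteration recovers a spectrum closer to the original than the raw measurement.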

  15. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The results from experiments indicate that this method represents a significant improvement in the accuracy of the MLS in environments that are hostile to GNSS and is also effective for the calibration of misalignment.
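A one-dimensional least squares collocation sketch shows the mechanism: errors observed at control points are interpolated elsewhere through an assumed covariance model. The Gaussian covariance, correlation length, and noise floor below are illustrative parameters, not the paper's POS error model.

```python
import math

def lsc_correct(x, ctrl_x, ctrl_err, corr_len=10.0, noise=1e-6):
    """1-D least squares collocation: predict the POS error at x from
    errors observed at LiDAR control points, using an assumed Gaussian
    covariance model.  Solves (C + noise*I) w = ctrl_err, then predicts
    with the cross-covariance row."""
    cov = lambda a, b: math.exp(-((a - b) / corr_len) ** 2)
    n = len(ctrl_x)
    A = [[cov(ctrl_x[i], ctrl_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = list(ctrl_err)
    for i in range(n):                       # Gaussian forward elimination
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            w[r] -= f * w[i]
    for i in reversed(range(n)):             # back substitution
        for c in range(i + 1, n):
            w[i] -= A[i][c] * w[c]
        w[i] /= A[i][i]
    return sum(wi * cov(x, xi) for wi, xi in zip(w, ctrl_x))
```

At a control point the prediction reproduces the observed error (up to the noise term), and between control points it interpolates smoothly, which is how sparse LiDAR control points can correct a continuous POS error profile.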

  16. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    Science.gov (United States)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
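The compensation idea, correcting a commanded position using an interpolated error map, can be sketched with plain linear interpolation over measured control points. This is a simplified stand-in: the paper compares several interpolants over such a map and finds B-spline interpolation performs best; the error values below are invented for illustration.

```python
from bisect import bisect_right

def compensate(pos, ctrl_pos, ctrl_err):
    """Correct a commanded stage position using a measured geometric-error
    map.  Linear interpolation between calibration points is shown for
    brevity; the paper favors B-spline interpolation of the same map."""
    i = min(max(bisect_right(ctrl_pos, pos), 1), len(ctrl_pos) - 1)
    x0, x1 = ctrl_pos[i - 1], ctrl_pos[i]
    e0, e1 = ctrl_err[i - 1], ctrl_err[i]
    err = e0 + (e1 - e0) * (pos - x0) / (x1 - x0)
    return pos - err          # error-compensated command
```

Commanding the compensated target instead of the nominal one cancels the predicted geometric error at that position.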

  17. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Seoung-Hyeon Lee

    2016-01-01

Full Text Available Beacons using bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, the beacon performance is poor in terms of the indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology, which uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone’s indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m from the average x- and y-coordinates, respectively, based solely on the beacon signal.
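An extended Kalman filter handles the nonlinearity by linearizing the measurement model at each step. A reduced sketch on a scalar beacon-distance state (the paper estimates 2-D position) uses a log-distance path-loss measurement model; the transmit power, path-loss exponent, and noise parameters are assumed typical BLE values.

```python
import math

def ekf_distance_step(x, P, rssi, Q=0.01, R=4.0, tx_power=-59.0, n=2.0):
    """One EKF step on a scalar beacon-distance state.  Measurement model
    (assumed): rssi = tx_power - 10*n*log10(distance) + noise, i.e. a
    log-distance path-loss law with typical BLE parameters."""
    x_pred, P_pred = x, P + Q                      # random-walk motion model
    h = tx_power - 10.0 * n * math.log10(x_pred)   # predicted RSSI
    H = -10.0 * n / (x_pred * math.log(10.0))      # Jacobian dh/dx
    K = P_pred * H / (H * H * P_pred + R)          # Kalman gain
    return x_pred + K * (rssi - h), (1.0 - K * H) * P_pred

# Feed a steady RSSI corresponding to a 2 m distance; the estimate
# converges from a poor 1 m initial guess.
x, P = 1.0, 1.0
for _ in range(50):
    x, P = ekf_distance_step(x, P, rssi=-65.0206)
```

The recursive structure, predict, linearize, correct, is what lets the filter suppress the RSSI noise that plain trilateration passes straight through.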

  18. Improving diagnostic accuracy using agent-based distributed data mining system.

    Science.gov (United States)

    Sridhar, S

    2013-09-01

The use of data mining techniques to improve the diagnostic system accuracy is investigated in this paper. The data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Generally, expert systems are constructed for automating diagnostic procedures. The learning component uses the data mining algorithms to extract the expert system rules from the database automatically. Learning algorithms can assist the clinicians in extracting knowledge automatically. As the number and variety of data sources are dramatically increasing, another way to acquire knowledge from databases is to apply various data mining algorithms that extract knowledge from data. As data sets are inherently distributed, the distributed system uses agents to transport the trained classifiers and uses meta-learning to combine the knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge and data mining knowledge improves the performance of the diagnostic system. This work suggests a framework for combining the human knowledge and knowledge gained by better data mining algorithms on a renal and gallstone data set.

  19. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

A cadastral map is parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial-based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in the new cadastral database no longer being based on single and static parcel paper maps, but on a global digital map. Despite the strict process of the cadastral modernization, this reform has raised unexpected queries that remain essential to be addressed. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. This review result will be applied as a foundation for investigating a systematic and effective method for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  20. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    Science.gov (United States)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems transmuting these nuclei. In order to meet the requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.


  2. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. The traditional time interval measurement technique has the disadvantages of low measurement accuracy, complicated circuit structure and large error, and cannot provide high-precision time interval data. In order to obtain higher-quality remote sensing cloud images based on the time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor and sampling the change of the capacitor voltage at the same time. First, an approximate model of the capacitor voltage curve during the pulse's time of flight is fitted from the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
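The fit-then-extrapolate step can be sketched with an ordinary least squares line fit to noisy voltage samples, assuming a constant-current (linear-ramp) charging model; the abstract only says the sampled voltage curve is fitted, so the ramp model, stop voltage, and sample values below are illustrative.

```python
def fit_line(ts, vs):
    """Ordinary least squares fit v = a*t + b to sampled capacitor voltage."""
    n = len(ts)
    st, sv = sum(ts), sum(vs)
    stt = sum(t * t for t in ts)
    stv = sum(t * v for t, v in zip(ts, vs))
    a = (n * stv - st * sv) / (n * stt - st * st)
    return a, (sv - a * st) / n

# Assumed constant-current charging gives a linear ramp; the full time
# interval is when the fitted ramp reaches the (assumed) stop voltage.
t_samples = [1.0, 2.0, 3.0, 4.0]            # ns into the flight
v_samples = [0.099, 0.201, 0.299, 0.401]    # V, slightly noisy samples
a, b = fit_line(t_samples, v_samples)
interval = (1.0 - b) / a                    # time to reach the 1.0 V stop level
```

Because the interval comes from a fit over many samples rather than a single comparator trip, sampling noise on any one point is averaged out.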

  3. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    International Nuclear Information System (INIS)

    Saghafi, Mahdi; Ghofrani, Mohammad Bagher; D’Auria, Francesco

    2016-01-01

    Highlights: • FFTBM-SM is an improved Fast Fourier Transform Based Method using signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was a Station Black-Out accident in the PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Based Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insight into how the accuracy of MELCOR code predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of a Station Black-Out (SBO) accident in the PSB-VVER integral test facility with the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time-intervals, based on the phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time-intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time-intervals, and the results are compared to those of the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated, and correction factors are calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time-intervals with a series of increasing time-intervals provides deeper insight into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or decrease.
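
    The core of FFTBM is an average-amplitude (AA) index: the spectral magnitude of the calculated-minus-experimental error signal, normalised by that of the experimental signal; signal mirroring appends the time-reversed record so the extended signal has no jump at the boundary. The sketch below shows only this core idea; the published method adds frequency cut-offs and weighting not reproduced here:

```python
import numpy as np

def mirror(x):
    # Append the time-reversed signal so the extended record is continuous
    # at the boundary (the "signal mirroring" idea of FFTBM-SM).
    return np.concatenate([x, x[::-1]])

def average_amplitude(exp_sig, calc_sig, mirrored=True):
    """FFTBM accuracy index AA = sum|FFT(error)| / sum|FFT(experiment)|;
    lower values mean better code-to-experiment agreement."""
    exp_sig = np.asarray(exp_sig, dtype=float)
    err = np.asarray(calc_sig, dtype=float) - exp_sig
    if mirrored:
        err, exp_sig = mirror(err), mirror(exp_sig)
    return np.abs(np.fft.fft(err)).sum() / np.abs(np.fft.fft(exp_sig)).sum()
```

    For example, a calculation that over-predicts the experiment by a uniform 10% yields AA ≈ 0.1.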

  4. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Saghafi, Mahdi [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); Ghofrani, Mohammad Bagher, E-mail: ghofrani@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); D’Auria, Francesco [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, San Piero a Grado, Pisa (Italy)

    2016-11-15

    Highlights: • FFTBM-SM is an improved Fast Fourier Transform Based Method using signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was a Station Black-Out accident in the PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Based Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insight into how the accuracy of MELCOR code predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of a Station Black-Out (SBO) accident in the PSB-VVER integral test facility with the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time-intervals, based on the phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time-intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time-intervals, and the results are compared to those of the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated, and correction factors are calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time-intervals with a series of increasing time-intervals provides deeper insight into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or decrease.

  5. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    Science.gov (United States)

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" of time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the cross-correlation peak position and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 μϵ within a spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
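
    The underlying effect is generic: zero-padding before an inverse FFT samples the cross-correlation on a finer grid, so its peak can be located with sub-sample precision. The sketch below pads the cross-spectrum rather than the windowed OFDR data, so it illustrates the principle, not the paper's exact pipeline:

```python
import numpy as np

def xcorr_lag(a, b, upsample=16):
    """Sub-sample lag of b relative to a: zero-pad the cross-spectrum before
    the inverse FFT so the correlation peak lands on a grid upsample times
    finer than the original sampling."""
    n = len(a)
    spec = np.conj(np.fft.fft(a)) * np.fft.fft(b)
    m = n * upsample
    padded = np.zeros(m, dtype=complex)
    half = n // 2
    padded[:half] = spec[:half]      # positive frequencies
    padded[-half:] = spec[-half:]    # negative frequencies
    cc = np.fft.ifft(padded).real    # band-limited interpolated correlation
    k = int(np.argmax(cc))
    if k >= m // 2:                  # wrap circular index to a signed lag
        k -= m
    return k / upsample
```

    With `upsample=16`, the lag estimate is quantised at 1/16 of a sample, without changing the length (i.e., the spatial resolution) of the input window.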

  6. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    Science.gov (United States)

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Accuracy Improvement of Real-Time Location Tracking for Construction Workers

    Directory of Open Access Journals (Sweden)

    Hyunsoo Kim

    2018-05-01

    Full Text Available Extensive research has been conducted on real-time locating systems (RTLS) for tracking construction components, including workers, equipment, and materials, in order to improve construction performance (e.g., productivity improvement or accident prevention). Preventing safety accidents and making construction job sites more sustainable requires higher RTLS accuracy. To improve the accuracy of RTLS in construction projects, this paper presents an RTLS using radio frequency identification (RFID). To this end, the paper develops a location tracking error mitigation algorithm and presents the concept of using assistant tags. The applicability and effectiveness of the developed RTLS are tested in eight different construction environments, and the test results confirm the system’s strong potential for improving the accuracy of real-time location tracking in construction projects, thus enhancing construction performance.

  8. Improvement of the accuracy of noise measurements by the two-amplifier correlation method.

    Science.gov (United States)

    Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P

    2013-10-01

    We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
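
    The two-channel idea rests on cross-correlation: both channels observe the DUT noise, while each amplifier's own noise is uncorrelated with the other's, so averaging the cross-spectrum over many records suppresses the amplifier contribution. A minimal numpy sketch of that averaging step (not the paper's in-situ transimpedance measurement):

```python
import numpy as np

def cross_psd(y1, y2, nseg, seglen):
    """Average the real part of the cross-spectrum of two channels over
    nseg segments; uncorrelated per-channel amplifier noise averages toward
    zero, leaving the common (DUT) noise power per frequency bin."""
    acc = np.zeros(seglen // 2 + 1)
    for i in range(nseg):
        s1 = np.fft.rfft(y1[i * seglen:(i + 1) * seglen])
        s2 = np.fft.rfft(y2[i * seglen:(i + 1) * seglen])
        acc = acc + (s1 * np.conj(s2)).real
    return acc / (nseg * seglen)
```

    With white common noise of unit variance and much larger independent channel noise, the averaged cross-PSD converges to the common noise level rather than the (much larger) single-channel total.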

  9. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Science.gov (United States)

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  10. Improving the Accuracy of Cloud Detection Using Machine Learning

    Science.gov (United States)

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction of clear/cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90%, but performance varies seasonally, regionally and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4 km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms that determine cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, over thousands of man-hours, to determine the success and failure of the algorithms and fine-tune the thresholds. This study aims to investigate the potential of improving cloud detection by using Random Forest (RF) ensemble classification. RF is an ideal methodology for cloud detection as it runs efficiently on large datasets, is robust to outliers and noise, and is able to deal with highly correlated predictors, such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus selected was Hawaii, and the predictors include visible and infrared imagery, topography and multi-spectral image products. The development of the cloud detection technique is realized in three steps. First, the RF models are tuned to identify the optimal number of trees and number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of that used during training, and the predictions are compared to truth, the CMG cloud mask.
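
    The tune-then-train steps can be sketched with scikit-learn, used here as a stand-in for the authors' Python code; the grid values and the feature matrix are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def tune_and_train(X, y, trees_grid=(50, 100), feat_grid=("sqrt", 0.5)):
    """Grid-search the number of trees and the number of predictors
    considered per split on a held-out set, returning the best RF model
    and its validation accuracy."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0)
    best, best_acc = None, -1.0
    for n in trees_grid:
        for mf in feat_grid:
            rf = RandomForestClassifier(
                n_estimators=n, max_features=mf, random_state=0)
            acc = rf.fit(X_tr, y_tr).score(X_val, y_val)
            if acc > best_acc:
                best, best_acc = rf, acc
    return best, best_acc
```

    In the workflow above, one such model would be fitted per regime (day and night scenes), then evaluated against the CMG mask on an independent period.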

  11. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from 65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children 65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
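
    The core adjustment is simple arithmetic: cases detected by each test type are scaled up by that test's sensitivity (true cases = detected / sensitivity) before summing. A minimal sketch; the sensitivity values below are placeholders, not the paper's estimates, and the paper further stratifies by age group and season:

```python
def adjusted_rate(observed_by_test, sensitivity_by_test):
    """Sum each test type's observed hospitalization rate scaled up by
    that test's sensitivity, yielding a sensitivity-adjusted total rate."""
    for t, s in sensitivity_by_test.items():
        if not 0 < s <= 1:
            raise ValueError(f"sensitivity for {t!r} must be in (0, 1]")
    return sum(obs / sensitivity_by_test[t]
               for t, obs in observed_by_test.items())
```

    For example, a rate of 19.0 per 100,000 confirmed by a test with sensitivity 0.95 plus 3.1 confirmed by a test with sensitivity 0.62 adjusts to 25.0 per 100,000.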

  12. Improving the accuracy of protein secondary structure prediction using structural alignment

    Directory of Open Access Journals (Sweden)

    Gallin Warren J

    2006-06-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has steadily improved over the past 30 years. Now many secondary structure prediction methods routinely achieve an accuracy (Q3) of about 75%. We believe this accuracy could be further improved by including structure (as opposed to sequence) database comparisons as part of the prediction process. Indeed, given the large size of the Protein Data Bank (>35,000 sequences), the probability of a newly identified sequence having a structural homologue is actually quite high. Results We have developed a method that performs structure-based sequence alignments as part of the secondary structure prediction process. By mapping the structure of a known homologue (sequence ID >25%) onto the query protein's sequence, it is possible to predict at least a portion of that query protein's secondary structure. By integrating this structural alignment approach with conventional (sequence-based) secondary structure methods and then combining it with a "jury-of-experts" system to generate a consensus result, it is possible to attain very high prediction accuracy. Using a sequence-unique test set of 1644 proteins from EVA, this new method achieves an average Q3 score of 81.3%. Extensive testing indicates this is approximately 4–5% better than any other method currently available. Assessments using non-sequence-unique test sets (typical of those used in proteome annotation or structural genomics) indicate that this new method can achieve a Q3 score approaching 88%. Conclusion By using both sequence and structure databases and by exploiting the latest techniques in machine learning it is possible to routinely predict protein secondary structure with an accuracy well above 80%. A program and web server, called PROTEUS, that performs these secondary structure predictions is accessible at http://wishart.biology.ualberta.ca/proteus. For high throughput or batch sequence analyses, the PROTEUS programs
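
    The Q3 measure quoted throughout this abstract is simply the percentage of residues whose three-state label (helix H, strand E, coil C) is predicted correctly:

```python
def q3_score(predicted, observed):
    """Q3: percentage of residues whose 3-state secondary structure label
    (H = helix, E = strand, C = coil) matches the observed assignment."""
    if len(predicted) != len(observed) or not predicted:
        raise ValueError("sequences must be non-empty and of equal length")
    hits = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * hits / len(predicted)
```

    So the reported 81.3% means that, on average, about four out of five residues receive the correct state.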

  13. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    Science.gov (United States)

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both the raw sequencing data and the reference genome that could yield better-quality sequence reads, SNP calls and variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined using a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at: http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/ fabian.menges@nyu.edu.

  14. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5%≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against that of a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  15. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy.

    Science.gov (United States)

    Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A

    2013-02-01

    Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. 
Optimal parameter values for the flexibility and bladder weight parameters were determined

  16. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    International Nuclear Information System (INIS)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-01-01

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors’ unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  17. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  18. Improving mass measurement accuracy in mass spectrometry based proteomics by combining open source tools for chromatographic alignment and internal calibration.

    Science.gov (United States)

    Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M

    2009-05-02

    Accurate mass determination enhances peptide identification in mass spectrometry based proteomics. We here describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second software, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
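
    The internal-calibration step can be illustrated with a generic least-squares sketch: peptides identified from the MS/MS data supply theoretical masses, a linear correction is fitted to the observed calibrant m/z values, and that correction is applied to the whole spectrum. This is only the idea, not recal2's actual FTICR calibration function, which is more elaborate:

```python
import numpy as np

def internal_recalibrate(observed_cal_mz, spectrum_mz, theoretical_cal_mz):
    """Fit a linear map from observed calibrant m/z to theoretical m/z
    (from identified peptides), then apply it to the full spectrum."""
    a, b = np.polyfit(observed_cal_mz, theoretical_cal_mz, 1)
    return a * np.asarray(spectrum_mz) + b

def ppm_error(measured, true):
    # Mass measurement error in parts per million.
    return 1e6 * (measured - true) / true
```

    For a purely linear miscalibration, the correction recovers the true masses essentially exactly; in practice the residual after internal calibration is what sets the sub-ppm error the abstract reports.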

  19. Improving substructure identification accuracy of shear structures using virtual control system

    Science.gov (United States)

    Zhang, Dongyu; Yang, Yang; Wang, Tingqiang; Li, Hui

    2018-02-01

    Substructure identification is a powerful tool for identifying the parameters of a complex structure. Previously, the authors developed an inductive substructure identification method for shear structures. The identification error analysis showed that the accuracy of this method is significantly influenced by the magnitudes of two key structural responses near a certain frequency; if these responses are unfavorable, the method cannot provide accurate estimates. In this paper, a novel method is proposed to improve substructure identification accuracy by introducing a virtual control system (VCS) into the structure. A virtual control system is a self-balanced system consisting of some control devices and a set of self-balanced forces, which counterbalance the forces that the control devices apply to the structure. The control devices are combined with the structure to form a controlled structure that replaces the original structure in the substructure identification, and the self-balanced forces are treated as known external excitations to the controlled structure. By optimally tuning the VCS’s parameters, the dynamic characteristics of the controlled structure can be changed such that the original structural responses become more favorable for substructure identification and, thus, the identification accuracy is improved. A numerical example of a 6-story shear structure is used to verify the effectiveness of the VCS-based controlled substructure identification method. Finally, shake table tests are conducted on a 3-story structural model to verify the efficacy of the VCS in enhancing the identification accuracy of the structural parameters.

  20. Does a Structured Data Collection Form Improve The Accuracy of ...

    African Journals Online (AJOL)

    and multiple etiologies for similar presentation. Standardized forms may harmonize the initial assessment, improve accuracy of diagnosis and enhance outcomes. Objectives: To determine the extent to which use of a structured data collection form (SDCF) affected the diagnostic accuracy of AAP. Methodology: A before and ...

  1. Accuracy and impact of spatial aids based upon satellite enumeration to improve indoor residual spraying spatial coverage.

    Science.gov (United States)

    Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A

    2018-02-23

    Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect, such that an individual's protection relies primarily on the community-level coverage of IRS, with limited protection provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the ultimate priority of an IRS campaign. Ensuring high community-level IRS coverage in malaria-endemic areas is challenging given the lack of information about both the location and the number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented; it involves the use of satellite imagery to enumerate structures for IRS planning and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A 2-stage sampling process was applied to assess the accuracy of satellite enumeration in determining the number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared to finding structures on the ground. After adjusting for structure size, roof, and wall type, households in Nchelenge District, where all types of satellite-based spatial aids (paper-based maps plus the mobile mSpray application) were used, were more likely to have received IRS than in Kasama District, where the maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge District, where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided. IRS coverage from the 2015 spray season benefited from

  2. Overlay accuracy fundamentals

    Science.gov (United States)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget, accuracy becomes a critical concern, and we compare the accuracy of imaging overlay with that of DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  3. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. 
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence time.

  4. Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    Science.gov (United States)

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesize that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis, with 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
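The ratio-based screening described above amounts to a one-line computation per patient. A minimal sketch follows; the `flag_possible_sepsis` helper and its default threshold of 1.0 are illustrative assumptions (the abstract does not report a cutoff), not values from the study.

```python
# Hypothetical sketch of ratio-based screening: compute the heart-rate-to-
# systolic-BP ratio and flag presentations at or above a cutoff. The 1.0
# default is an assumed, illustrative threshold, not one from the study.
def hr_systolic_ratio(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
    """Heart-rate-to-systolic-BP ratio (the 'shock index' in emergency medicine)."""
    if systolic_bp_mmhg <= 0:
        raise ValueError("systolic BP must be positive")
    return heart_rate_bpm / systolic_bp_mmhg

def flag_possible_sepsis(heart_rate_bpm: float, systolic_bp_mmhg: float,
                         threshold: float = 1.0) -> bool:
    """Flag a presentation for early sepsis workup when the ratio meets the cutoff."""
    return hr_systolic_ratio(heart_rate_bpm, systolic_bp_mmhg) >= threshold
```

Unlike SIRS, which requires multiple criteria, this flag depends only on two vital signs available at triage, which is what makes the approach fast.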

  5. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  6. Improvement of Dimensional Accuracy of 3-D Printed Parts using an Additive/Subtractive Based Hybrid Prototyping Approach

    Science.gov (United States)

    Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.

    2017-11-01

    At present, two important processes, namely CNC machining and rapid prototyping (RP), are being used to create prototypes and functional products. Combining both additive and subtractive processes into a single platform would be advantageous. However, two important aspects need to be taken into consideration for this process hybridization. The first is the integration of two different control systems for the two processes, and the second is maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and a CNC grinding operation as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, which can be improved by a considerable margin, about 66% to 80%, if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in the dimensional accuracy, and the best accuracy is achieved with the minimum layer thickness (0.1 mm).

  7. Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Gangchen Hua

    2017-01-01

    Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrectly matched visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera’s intrinsic parameters and distortion coefficients are sufficient for this method; 3D measurement is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited-size WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method can effectively correct the typical false-positive localizations of previous methods, thus gaining better recall ratios and better precision.

  8. Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose

    Science.gov (United States)

    Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, a conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlapping. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase of radiation dose and an inaccuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues induced by the conventional technique and EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved the quantitative accuracy as well as reduced radiation dose compared to the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD was slightly improved as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
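The polychromatic dual-energy model itself is beyond the scope of an abstract, but the core of dual-energy material decomposition can be sketched in its linearized form: two energy-bin log-attenuation measurements are inverted for two basis-material thicknesses. The attenuation coefficients below are made-up illustrative numbers, not calibrated values from the study.

```python
import numpy as np

# Linearized dual-energy decomposition sketch: solve mu @ t = m for basis-
# material thicknesses t, given low/high energy-bin log attenuations m.
# The coefficients below are illustrative assumptions, not calibrated data.
mu = np.array([[0.80, 0.50],   # [contrast agent, soft tissue] at the low-energy bin (1/cm)
               [0.40, 0.35]])  # the same pair at the high-energy bin (1/cm)

def decompose(log_att_low: float, log_att_high: float) -> np.ndarray:
    """Return estimated thicknesses (cm) of the two basis materials."""
    return np.linalg.solve(mu, np.array([log_att_low, log_att_high]))

# Forward-project a known phantom, then recover it.
t_true = np.array([0.1, 4.0])      # 1 mm contrast agent plus 4 cm tissue
t_est = decompose(*(mu @ t_true))
```

Energy-discriminating PCDs supply both bins from a single exposure, which is why the approach can cut dose relative to two-shot dual-energy with an EID.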

  9. Can Translation Improve EFL Students' Grammatical Accuracy?

    Directory of Open Access Journals (Sweden)

    Carol Ebbert-Hübner

    2018-01-01

    Full Text Available This report focuses on research results from a project completed at Trier University in December 2015 that provides insight into whether a monolingual group of learners can improve their grammatical accuracy and reduce interference mistakes in their English via contrastive analysis and translation instruction and activities. Contrastive analysis and translation (CAT) instruction in this setting focusses on comparing grammatical differences between students’ dominant language (German) and English, and practice activities where sentences or short texts are translated from German into English. The results of a pre- and post-test administered in the first and final week of a translation class were compared to two other class types: a grammar class which consisted of form-focused instruction but not translation, and a process-approach essay writing class where students received feedback on their written work throughout the semester. The results of our study indicate that with C1 level EAP students, more improvement in grammatical accuracy is seen through teaching with CAT than through explicit grammar instruction or through language feedback on written work alone. These results indicate that CAT does indeed have a place in modern language classes.

  10. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically

  11. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Full Text Available Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.

  12. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    Science.gov (United States)

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, selectivity and low signal-to-noise ratio that are critically limiting its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2 that allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with increasing n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest the significance of the increase in Laplacian accuracy caused by the increase of n. PMID:26693200
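The Vandermonde construction can be sketched numerically: Taylor-expanding the ring-averaged potentials about the center and solving a small Vandermonde-type system gives ring weights that cancel the truncation terms up to order 2n, leaving the Laplacian term. This is an illustrative reconstruction of the approach with one conventional normalization, not the authors' exact formula.

```python
import numpy as np

# Ring weights w_k for an n-ring concentric electrode at radii r, 2r, ..., nr:
#   sum_k w_k * k^2    = 1   (keeps the Laplacian term)
#   sum_k w_k * k^(2j) = 0   for j = 2..n (cancels higher truncation terms)
# which is a square Vandermonde-type system in k^2.
def ring_weights(n: int) -> np.ndarray:
    k = np.arange(1, n + 1, dtype=float)
    A = np.array([k ** (2 * j) for j in range(1, n + 1)])
    b = np.zeros(n)
    b[0] = 1.0
    return np.linalg.solve(A, b)

def laplacian_estimate(v_center: float, ring_means, r: float) -> float:
    """Estimate the surface Laplacian at the center from ring-averaged potentials."""
    w = ring_weights(len(ring_means))
    return 4.0 / r ** 2 * float(np.dot(w, np.asarray(ring_means) - v_center))
```

For n = 2 this yields weights (4/3, -1/12), reproducing the familiar tripolar estimate (16(v̄₁ − v₀) − (v̄₂ − v₀)) / (3r²); on the quadratic test potential v = x² + y² (true Laplacian 4), the estimate is exact for any n.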

  13. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    Science.gov (United States)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. Falsely matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.

  14. Effects of a risk-based online mammography intervention on accuracy of perceived risk and mammography intentions.

    Science.gov (United States)

    Seitz, Holli H; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M; Armstrong, Katrina; Cappella, Joseph N

    2016-10-01

    This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. 2918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (<1.5%; ≥1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) × 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk ≤ 1.5%. For women with a risk ≤ 1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.001). When using spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level

  16. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    Science.gov (United States)

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2017-08-01

    Objective The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data is essential for informing price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology.
Robust clinical costing data has

  17. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

    Full Text Available With the acceleration of goods transporting, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determination of the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. Apart from time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
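The pipeline described above, preliminary filtering of the strain-gauge signal before estimating the mass, can be sketched on a simulated signal. The moving-average filter and all signal parameters below are illustrative assumptions, not the paper's actual processing chain.

```python
import numpy as np

# Simulated strain-gauge signal from a moving wagon: static load plus a
# low-frequency oscillation and measurement noise. All parameters here are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(1)
fs = 1000.0                                        # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
true_mass = 20_000.0                               # kg
signal = (true_mass
          + 800.0 * np.sin(2 * np.pi * 4.0 * t)    # wagon bounce at 4 Hz
          + rng.normal(0.0, 300.0, t.size))        # sensor noise

def moving_average(x, window):
    """Preliminary low-pass filtering with a rectangular moving average."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# A window of one full oscillation period cancels the bounce component,
# so the mean of the filtered signal approaches the static load.
filtered = moving_average(signal, int(fs / 4.0))
estimate = filtered.mean()
```

Averaging the raw signal over an arbitrary interval would leave a bias from partial oscillation periods; matching the filter window to the dominant oscillation is what removes the dynamic component.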

  18. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  19. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks like monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high-resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source to improve the geo-location accuracy of optical images, which is in the order of tens of meters. In this paper, a deep learning-based approach for the geo-localization accuracy improvement of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable, matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. Results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision than state-of-the-art approaches.

  20. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and

  1. Assessing the accuracy of TDR-based water leak detection system

    Science.gov (United States)

    Fatemi Aghda, S. M.; GanjaliPour, K.; Nabiollahi, K.

    2018-03-01

    The use of TDR systems to detect leakage locations in underground pipes has been developed in recent years. In this system, a bi-wire is installed in parallel with the underground pipes and is considered as a TDR sensor. This approach largely overcomes the limitations of the traditional method of acoustic leak positioning. The TDR-based leak detection method is relatively accurate when the TDR sensor is in contact with water at just one point. Researchers have been working to improve the accuracy of this method in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted. In these tests, in order to simulate leakage points, the TDR sensor was put in contact with water at some points; then the number and the dimensions of the simulated leakage points were gradually increased. The results showed that with an increase in the number and dimensions of the leakage points, the error rate of the TDR-based water leak detection system increases. The authors tried, according to the results obtained from the laboratory tests, to develop a method to improve the accuracy of TDR-based leak detection systems. To do that, they defined a few reference points on the TDR sensor. These points were created by increasing the distance between the two conductors of the TDR sensor and were easily identifiable in the TDR waveform. The tests were repeated using the TDR sensor with reference points. In order to calculate the exact distance of the leakage point, the authors developed an equation based on the reference points. A comparison between the results obtained from both tests (with and without reference points) showed that using the method and equation developed by the authors can significantly improve the accuracy of positioning the leakage points.
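The reference-point correction can be illustrated as piecewise-linear interpolation: each reference point pairs a reflection time in the TDR waveform with a known physical distance along the sensor, and a leak's distance is interpolated between the two nearest reference points. This equation form is an assumption consistent with the description above; the authors' published equation may differ.

```python
# Hypothetical sketch: locate a leak by interpolating its reflection time
# between the two nearest reference points of known physical distance.
def leak_distance(t_leak: float, ref_points) -> float:
    """ref_points: (reflection_time, distance_m) pairs, sorted by time."""
    for (t0, d0), (t1, d1) in zip(ref_points, ref_points[1:]):
        if t0 <= t_leak <= t1:
            # Interpolating between local references absorbs the variation in
            # propagation velocity that a single global constant would miss.
            return d0 + (t_leak - t0) / (t1 - t0) * (d1 - d0)
    raise ValueError("leak reflection time is outside the calibrated range")
```

For example, with reference points at times 0, 100 and 220 mapped to 0 m, 50 m and 100 m, a reflection at time 160 interpolates to 75 m along the sensor.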

  2. Assessing the accuracy of TDR-based water leak detection system

    Directory of Open Access Journals (Sweden)

    S.M. Fatemi Aghda

    2018-03-01

    Full Text Available The use of TDR systems to detect leakage locations in underground pipes has developed in recent years. In this system, a bi-wire is installed in parallel with the underground pipe and serves as a TDR sensor. This approach largely overcomes the limitations of the traditional acoustic leak-positioning method. TDR-based leak detection is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working in recent years to improve its accuracy. In this study, the TDR method was evaluated for the case of multiple leakage points appearing simultaneously. For this purpose, several laboratory tests were conducted. In these tests, to simulate leakage points, the TDR sensor was put in contact with water at several points, and the number and size of the simulated leakage points were then gradually increased. The results showed that as the number and size of the leakage points increase, the error rate of the TDR-based water leak detection system also increases. Based on the laboratory results, the authors developed a method to improve the accuracy of TDR-based leak detection systems. To do so, they defined a few reference points on the TDR sensor. These points were created by increasing the distance between the two conductors of the TDR sensor and are easily identifiable in the TDR waveform. The tests were then repeated using the TDR sensor with reference points. To calculate the exact distance of a leakage point, the authors developed an equation based on the reference points. A comparison of the results from both tests (with and without reference points) showed that the proposed method and equation can significantly improve the accuracy of positioning the leakage points. Keywords: Multiple leakage points, TDR, Reference points
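The record does not give the authors' distance equation, but the reference-point idea can be sketched as a piecewise-linear mapping from reflection time to cable distance: each reference point has a known distance along the sensor and an identifiable time in the waveform, and a leak's reflection time is interpolated between the bracketing reference points. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical calibration: reference points at known distances along the
# TDR sensor (m) and the times (ns) at which they appear in the waveform.
# The spacing is deliberately non-uniform: propagation velocity drifts.
ref_dist = np.array([0.0, 10.0, 20.0, 30.0])
ref_time = np.array([0.0, 101.0, 204.0, 309.0])

def leak_distance(t_leak):
    """Map a leak's reflection time to cable distance by piecewise-linear
    interpolation between the bracketing reference points."""
    return float(np.interp(t_leak, ref_time, ref_dist))
```

For example, a reflection at 152.5 ns falls halfway between the 10 m and 20 m reference points and is mapped to 15 m, regardless of the average propagation velocity assumed for the whole cable.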

  3. IMPROVED MOTOR-TIMING: EFFECTS OF SYNCHRONIZED METRONOME TRAINING ON GOLF SHOT ACCURACY

    Directory of Open Access Journals (Sweden)

    Louise Rönnqvist

    2009-12-01

    Full Text Available This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers (mean age 27 years; mean golf handicap 12.6) participated in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted using a golf simulator. The golfers were randomized into two groups: an SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing, while the golfers in the Control group merely continued their regular golf-swing training during the same period. No differences between the two groups were found in the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT program showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used an SMT program, these results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf shot accuracy.

  4. Improving Odometric Accuracy for an Autonomous Electric Cart.

    Science.gov (United States)

    Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo

    2018-01-12

    In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's equations of motion and is used to compute the vehicle's position and orientation. The inputs to the system are the odometry encoders, and the model uses the wheel diameters and the distance between the wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated to include a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.
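The least-squares calibration step can be sketched for a generic differential-drive platform: per-tick travel factors for each wheel (which encode the wheel diameters) enter the distance model linearly, and the wheelbase enters the heading model linearly through its reciprocal. The encoder log and every number below are hypothetical, not Verdino data.

```python
import numpy as np

# Hypothetical encoder log: (left, right) ticks per interval, with
# ground-truth distance travelled (m) and heading change (rad).
ticks  = np.array([[1000, 1000], [980, 1020], [1010, 990], [1005, 1005]], float)
dist   = np.array([0.628, 0.628, 0.628, 0.631])
dtheta = np.array([0.000, 0.025, -0.0125, 0.000])

# Distance model d = (kL*nL + kR*nR)/2 is linear in the per-tick travel
# factors kL, kR (pi * diameter / ticks_per_rev) -> ordinary least squares.
kL, kR = np.linalg.lstsq(ticks / 2.0, dist, rcond=None)[0]

# Heading model dtheta = (kR*nR - kL*nL) / b is linear in 1/b,
# so the wheelbase b is also estimated by least squares.
num = kR * ticks[:, 1] - kL * ticks[:, 0]
b = 1.0 / (np.dot(num, dtheta) / np.dot(num, num))
```

With the synthetic log above, the recovered per-tick factors are close to 0.628 mm per tick and the wheelbase estimate is close to 1 m; on a real cart the same fit would be run over logged intervals with ground truth from an external reference.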

  5. An index with improved diagnostic accuracy for the diagnosis of Crohn's disease derived from the Lennard-Jones criteria.

    Science.gov (United States)

    Reinisch, S; Schweiger, K; Pablik, E; Collet-Fenetrier, B; Peyrin-Biroulet, L; Alfaro, I; Panés, J; Moayyedi, P; Reinisch, W

    2016-09-01

    The Lennard-Jones criteria are considered the gold standard for diagnosing Crohn's disease (CD) and include the items granuloma, macroscopic discontinuity, transmural inflammation, fibrosis, lymphoid aggregates and discontinuous inflammation on histology. The criteria have never been subjected to a formal validation process. The aim was to develop a validated and improved diagnostic index based on the items of the Lennard-Jones criteria. Included were 328 adult patients with long-standing CD (median disease duration 10 years) from three centres, classified as 'established', 'probable' or 'non-CD' by the Lennard-Jones criteria at the time of diagnosis. Controls were patients with ulcerative colitis (n = 170). The performance of each of the six diagnostic items of the Lennard-Jones criteria was modelled by logistic regression, and a new index based on stepwise backward selection and cut-offs was developed. The diagnostic value of the new index was analysed by comparing sensitivity, specificity and accuracy vs. the Lennard-Jones criteria. By the Lennard-Jones criteria, 49% (n = 162) of CD patients would have been diagnosed as 'non-CD' at the time of diagnosis (sensitivity/specificity/accuracy, 'established' CD: 0.34/0.99/0.67; 'probable' CD: 0.51/0.95/0.73). A new index was derived from granuloma, fibrosis, transmural inflammation and macroscopic discontinuity, but excluded lymphoid aggregates and discontinuous inflammation on histology. Our index provided improved diagnostic accuracy for 'established' and 'probable' CD (sensitivity/specificity/accuracy, 'established' CD: 0.45/1/0.72; 'probable' CD: 0.8/0.85/0.82), including the subgroup of isolated colonic CD ('probable' CD, new index: 0.73/0.85/0.79; Lennard-Jones criteria: 0.43/0.95/0.69). We developed an index based on items of the Lennard-Jones criteria providing improved diagnostic accuracy for the differential diagnosis between CD and UC. © 2016 John Wiley & Sons Ltd.
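The evaluation step in this record, computing sensitivity, specificity and accuracy of a weighted-item index against a reference diagnosis, can be sketched on fully synthetic data. The item names follow the four retained criteria, but the weights, cutoff and labels below are illustrative inventions, not the published index.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Synthetic binary item matrix: granuloma, fibrosis, transmural
# inflammation, macroscopic discontinuity (the four retained items).
X = rng.integers(0, 2, size=(n, 4))

# Synthetic ground truth: CD when a noisy weighted item score crosses a
# cutoff (purely illustrative weights, not the published ones).
score = X @ np.array([2.0, 1.0, 1.5, 1.0])
y = ((score + rng.normal(0.0, 0.5, n)) >= 2.5).astype(int)

pred = (score >= 2.5).astype(int)        # candidate index with cutoff 2.5
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
sens, spec = tp / (tp + fn), tn / (tn + fp)
acc = (tp + tn) / n
```

Comparing two indices (e.g., the full six-item criteria vs. the reduced four-item index) then amounts to computing this triple for each and contrasting the trade-off, as the record does.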

  6. Accuracy Improvement of Boron Meter Adopting New Fitting Function and Multi-Detector

    Directory of Open Access Journals (Sweden)

    Chidong Kong

    2016-12-01

    Full Text Available This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in the reactor coolant. However, it is difficult to use these boron meters in practice because their measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. For the multi-detector analysis, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and the multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  7. Accuracy improvement of boron meter adopting new fitting function and multi-detector

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Chidong; Lee, Hyun Suk; Tak, Tae Woo; Lee, Deok Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); KIm, Si Hwan; Lyou, Seok Jean [Users Incorporated Company, Hansin S-MECA, Daejeon (Korea, Republic of)

    2016-12-15

    This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in the reactor coolant. However, it is difficult to use these boron meters in practice because their measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. For the multi-detector analysis, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and the multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.
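The abstract does not state the new fitting function itself. Purely for illustration, the calibration workflow (fit count rate vs. boron concentration, then invert the fitted curve to read concentration from a measurement) can be sketched with an assumed exponential-attenuation form:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed calibration form (illustrative only, not the paper's function):
# neutron count rate falls off roughly exponentially with boron
# concentration C (ppm), plus a constant background c.
def count_rate(C, a, b, c):
    return a * np.exp(-b * C) + c

# Synthetic calibration data generated from known "true" parameters.
C = np.linspace(0.0, 3000.0, 13)
R = count_rate(C, 1000.0, 1e-3, 50.0)

popt, _ = curve_fit(count_rate, C, R, p0=[800.0, 2e-3, 10.0])

def concentration(rate, a, b, c):
    # Invert the fitted curve to read concentration from a measured rate.
    return -np.log((rate - c) / a) / b
```

On this synthetic data the fit recovers the generating parameters, and `concentration(count_rate(1500.0, *popt), *popt)` returns 1500 ppm; the paper's accuracy gain comes from choosing a better-matched fitting function and averaging over multiple detectors.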

  8. A model to improve the accuracy of US Poison Center data collection.

    Science.gov (United States)

    Krenzelok, E P; Reynolds, K M; Dart, R C; Green, J L

    2014-01-01

    Over 2 million human exposure calls are reported annually to United States regional poison information centers. All exposures are documented electronically and submitted to the American Association of Poison Control Centers' National Poison Data System. This database represents the largest data source available on the epidemiology of pharmaceutical and non-pharmaceutical poisoning exposures. The accuracy of these data is critical; however, research has demonstrated that inconsistencies and inaccuracies exist. This study outlines the methods and results of a training program that was developed and implemented to enhance the quality of data collection, using acetaminophen exposures as a model. Eleven poison centers were randomly assigned to receive either passive or interactive education to improve medical record documentation. A task force provided recommendations on educational and training strategies and on the development of a quality-measurement scorecard to serve as a data collection tool for assessing poison center data quality. Poison centers were recruited to participate in the study, and clinical researchers scored the documentation of each exposure record for accuracy. In total, 2,200 cases were reviewed and assessed for accuracy of data collection. After training, the overall mean quality scores were higher for both the passive (95.3%; +1.6% change) and interactive intervention groups (95.3%; +0.9% change). Data collection accuracy improved modestly for the overall accuracy score and significantly for the substance identification component. There was little difference in accuracy measures between the two training methods. Despite the diversity of poison centers, data accuracy, specifically in the substance identification data fields, can be improved by developing a standardized, systematic, targeted, and mandatory training process. This process should be considered for training on other important topics, thus enhancing the value of these data in

  9. Three-dimensional display improves observer speed and accuracy

    International Nuclear Information System (INIS)

    Nelson, J.A.; Rowberg, A.H.; Kuyper, S.; Choi, H.S.

    1989-01-01

    In an effort to evaluate the potential cost-effectiveness of three-dimensional (3D) display equipment, we compared the speed and accuracy of experienced radiologists identifying sliced uppercase letters from CT scans with 2D and pseudo-3D displays. CT scans of six capital letters were obtained and printed as a 2D display or as a synthesized pseudo-3D display (Pixar). Six observers performed a timed identification task. Radiologists read the 3D display an average of 16 times faster than the 2D display, and the average error rate of 2/6 (± 0.6/6) for 2D interpretations was entirely eliminated. This degree of improvement in speed and accuracy suggests that the expense of 3D display may be cost-effective in a defined clinical setting.

  10. Improving Odometric Accuracy for an Autonomous Electric Cart

    Directory of Open Access Journals (Sweden)

    Jonay Toledo

    2018-01-01

    Full Text Available In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's equations of motion and is used to compute the vehicle's position and orientation. The inputs to the system are the odometry encoders, and the model uses the wheel diameters and the distance between the wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated to include a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.

  11. Evaluation of accuracy of B-spline transformation-based deformable image registration with different parameter settings for thoracic images

    International Nuclear Information System (INIS)

    Kanai, Takayuki; Kadoya, Noriyuki; Ito, Kengo

    2014-01-01

    Deformable image registration (DIR) is a fundamental technique for adaptive radiotherapy and image-guided radiotherapy; however, further improvement of DIR accuracy is still needed. We evaluated the accuracy of B-spline transformation-based DIR implemented in elastix. This registration package is largely based on the Insight Segmentation and Registration Toolkit (ITK), with several new functions implemented to achieve high DIR accuracy. The purpose of this study was to clarify whether the new functions implemented in elastix are useful for improving DIR accuracy. Thoracic 4D computed tomography images of ten patients with esophageal or lung cancer were studied. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomical landmarks that had been manually identified. DIR between peak-inhale and peak-exhale images was performed with four types of parameter settings. The first represents the original ITK settings (Parameter 1). The second employs the new functions of elastix (Parameter 2), the third was created to verify whether the new functions improve DIR accuracy without increasing computational time (Parameter 3), and the last partially employs a new function (Parameter 4). Registration errors for these parameter settings were calculated using the manually determined landmark pairs. The 3D registration errors (with standard deviations) over all cases were 1.78 (1.57), 1.28 (1.10), 1.44 (1.09) and 1.36 (1.35) mm for Parameters 1, 2, 3 and 4, respectively, indicating that the new functions are useful for improving DIR accuracy, even while maintaining the computational time, and that this B-spline-based DIR could be used clinically to achieve high-accuracy adaptive radiotherapy. (author)
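The registration errors quoted in this record are distances between manually identified landmark pairs after the deformation is applied. That evaluation reduces to a short computation; the landmark coordinates below are hypothetical.

```python
import numpy as np

# Hypothetical landmark pairs (mm): positions identified on the target
# image, and the same landmarks mapped through the DIR result.
targets  = np.array([[10.0, 20.0, 30.0], [12.0, 25.0, 28.0], [40.0, 5.0, 15.0]])
deformed = np.array([[10.5, 20.0, 29.0], [12.0, 24.0, 28.5], [41.0, 5.0, 15.0]])

# Per-landmark 3D registration error, then the mean and standard
# deviation reported in studies like this one.
errors = np.linalg.norm(deformed - targets, axis=1)
mean_err, sd_err = errors.mean(), errors.std(ddof=1)
```

A parameter setting is then judged by its mean (standard deviation) error over all landmark pairs and all cases, exactly the "1.78 (1.57) mm" style of figure given above.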

  12. Accuracy improvement of irradiation data by combining ground and satellite measurements

    Energy Technology Data Exchange (ETDEWEB)

    Betcke, J. [Energy and Semiconductor Research Laboratory, Carl von Ossietzky University, Oldenburg (Germany); Beyer, H.G. [Department of Electrical Engineering, University of Applied Science (F.H.) Magdeburg-Stendal, Magdeburg (Germany)

    2004-07-01

    Accurate and site-specific irradiation data are essential input for the optimal planning, monitoring and operation of solar energy technologies. A concrete example is the performance check of grid-connected PV systems with the PVSAT-2 procedure. This procedure detects system faults at an early stage by a daily comparison of an individual reference yield with the actual yield. Calculation of the reference yield requires hourly irradiation data with a known accuracy. A field test of the preceding PVSAT-1 procedure showed that the accuracy of the irradiation input is the determining factor for the overall accuracy of the yield calculation. In this paper we investigate whether it is possible to improve the accuracy of site-specific irradiation data by combining accurate, localised pyranometer data with semi-continuous satellite data. We therefore introduce the "Kriging of Differences" data fusion method, which also offers the possibility of estimating its own accuracy. The obtainable accuracy gain and the effectiveness of the accuracy prediction are investigated by validation on monthly and daily irradiation datasets. Results are compared with the Heliosat method and with interpolation of ground data. (orig.)
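The fusion idea can be sketched in a few lines: compute the ground-minus-satellite residual at each pyranometer station, spatially interpolate those residuals, and add the interpolated residual to the satellite estimate at the query site. The sketch below substitutes simple inverse-distance weighting for the kriging step, so it illustrates only the residual-fusion structure, not the geostatistics; all station data are hypothetical.

```python
import numpy as np

# Hypothetical station network: positions (km), ground-measured daily
# irradiation, and the satellite estimate at each station (Wh/m^2).
xy_st   = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ground  = np.array([520.0, 505.0, 498.0])
sat_st  = np.array([500.0, 500.0, 480.0])
diff_st = ground - sat_st            # station-wise residuals: [20, 5, 18]

def fuse(xy, sat_value, power=2, eps=1e-12):
    """Interpolate the station residuals to xy (inverse-distance weighting
    as a stand-in for kriging) and add them to the satellite estimate."""
    d2 = ((xy_st - np.asarray(xy)) ** 2).sum(axis=1)
    w = 1.0 / (d2 ** (power / 2) + eps)
    return sat_value + (w * diff_st).sum() / w.sum()
```

At a station location the correction reproduces the ground measurement exactly; away from the stations it blends the nearby residuals, which is the accuracy gain the record validates.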

  13. How patients can improve the accuracy of their medical records.

    Science.gov (United States)

    Dullabh, Prashila M; Sondheimer, Norman K; Katsh, Ethan; Evans, Michael A

    2014-01-01

    The aims were to assess (1) whether patients can improve their medical records' accuracy if effectively engaged using a networked Personal Health Record; (2) the workflow efficiency and reliability of receiving and processing patient feedback; and (3) the impact of patient feedback on medical record accuracy. The challenges of improving medical record accuracy have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records' accuracy and completeness. A new approach supporting online contributions to medication lists by patients of Geisinger Health Systems, an online patient-engagement advocate, revealed that this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on the accuracy of their medication lists before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists' medication reconciliation logs. (1) Patients were eager to provide feedback on their medications and saw numerous advantages in doing so. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescription and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms, pharmacists responded positively to 68 percent of patient requests for

  14. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test-set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is part of the current ChemCam quantitative calibration, but it is applicable to any multivariate regression method and may yield similar improvements.
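The sub-model structure can be sketched on synthetic data: a full-range model makes a first composition estimate, that estimate selects a sub-model trained on a restricted composition range, and the sub-model produces the final value. The paper uses PLS on full spectra; the sketch below substitutes one synthetic feature and ordinary least-squares lines purely to show the blending logic.

```python
import numpy as np

# Synthetic "LIBS-like" data: a single feature whose response curves with
# composition, a stand-in for matrix effects; all values illustrative.
y = np.linspace(0.0, 100.0, 201)        # composition (wt %)
x = y + 0.002 * y**2                    # nonlinear instrument response

def linfit(xs, ys):
    A = np.vstack([xs, np.ones_like(xs)]).T
    return np.linalg.lstsq(A, ys, rcond=None)[0]

full = linfit(x, y)                     # full-range model
lo   = linfit(x[y <= 50], y[y <= 50])   # low-composition sub-model
hi   = linfit(x[y >= 50], y[y >= 50])   # high-composition sub-model

def predict(xs):
    first = full[0] * xs + full[1]      # full model routes to a sub-model
    sub_lo = lo[0] * xs + lo[1]
    sub_hi = hi[0] * xs + hi[1]
    return np.where(first < 50.0, sub_lo, sub_hi)

rmse_full  = np.sqrt(np.mean((full[0] * x + full[1] - y) ** 2))
rmse_blend = np.sqrt(np.mean((predict(x) - y) ** 2))
```

Because each sub-model only has to fit a narrow composition range, its local error is smaller than the full-range model's, and the blended prediction inherits that advantage, which is the RMSEP improvement the record reports.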

  15. Improving the Accuracy of Direct Geo-referencing of Smartphone-Based Mobile Mapping Systems Using Relative Orientation and Scene Geometric Constraints

    Directory of Open Access Journals (Sweden)

    Naif M. Alsubaie

    2017-09-01

    Full Text Available This paper introduces a new method that facilitates the use of smartphones as a handheld low-cost mobile mapping system (MMS). Smartphones are becoming more sophisticated and smarter, quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro-mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of an MMS. However, smartphone navigation sensors suffer from the poor accuracy of Global Navigation Satellite System (GNSS) positioning, accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements, as well as integrating geometric scene constraints into free-network bundle adjustment. The new methodologies fuse the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Geometric features (e.g., horizontal and vertical lines) visible in each image are then extracted and used as constraints in the bundle adjustment procedure, which corrects the relative position and orientation of the 3D mapping solution.

  16. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the International Thermonuclear Experimental Reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for serial–parallel hybrid robots. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper therefore employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. After the parameter errors were identified, the DE algorithm was also adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation under realistic experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.

  17. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform assembly and repair tasks on the vacuum vessel (VV) of the International Thermonuclear Experimental Reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for serial–parallel hybrid robots. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper therefore employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. After the parameter errors were identified, the DE algorithm was also adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation under realistic experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
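The DE-based identification step can be illustrated on a toy problem: the paper's 10-DOF hybrid robot is replaced here by a planar 2-link arm with hypothetical link lengths, and DE searches for the lengths that minimize the residual between measured and modeled end-effector positions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for kinematic parameter identification: recover the true
# link lengths of a planar 2-link arm from end-effector measurements.
def fk(lengths, q):
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

true_lengths = (1.02, 0.98)                      # "real" robot (unknown)
poses = [np.array([a, b]) for a in (0.2, 0.9, 1.5) for b in (0.3, 1.1)]
measured = [fk(true_lengths, q) for q in poses]  # external measurements

def cost(lengths):
    # Sum of squared position residuals over all measured configurations.
    return sum(np.sum((fk(lengths, q) - m) ** 2)
               for q, m in zip(poses, measured))

res = differential_evolution(cost, bounds=[(0.9, 1.1), (0.9, 1.1)],
                             seed=1, tol=1e-10)
```

`res.x` recovers the true lengths to high precision. In the paper the same global search runs over many more error parameters of the POE error model, which is exactly why a population-based optimizer is used instead of linearized least squares.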

  18. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Science.gov (United States)

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification stages. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) are exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data; they extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  19. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Directory of Open Access Journals (Sweden)

    Xiangbin Zhu

    Full Text Available Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification stages. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) are exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data; they extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  20. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to the large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs while at the same time significantly improving prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reductions of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that the PWM models for TFBS prediction can be efficiently replaced by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.
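The PWM baseline that DRAF improves on can be shown in a minimal log-odds scan: motif base counts become per-position log-odds scores against a uniform background, and every window of the sequence above a threshold is reported as a hit. The motif counts, pseudocount and threshold below are toy illustrations, not DRAF or any published matrix.

```python
import numpy as np

# Toy 4-position motif count matrix (rows: positions, cols: A, C, G, T).
counts = np.array([[8, 1, 1, 0],    # position 1: mostly A
                   [1, 8, 1, 0],    # position 2: mostly C
                   [0, 1, 8, 1],    # position 3: mostly G
                   [0, 1, 1, 8]],   # position 4: mostly T
                  float)

# Log2 odds vs. a uniform 0.25 background, with a 0.25 pseudocount.
pwm = np.log2((counts + 0.25) / (counts.sum(axis=1, keepdims=True) + 1.0)
              / 0.25)

base = {"A": 0, "C": 1, "G": 2, "T": 3}

def score(window):
    return sum(pwm[i, base[ch]] for i, ch in enumerate(window))

def scan(sequence, threshold):
    """Return (position, score) for every window at or above threshold."""
    width = pwm.shape[0]
    return [(i, score(sequence[i:i + width]))
            for i in range(len(sequence) - width + 1)
            if score(sequence[i:i + width]) >= threshold]

hits = scan("TTACGTTT", 5.0)   # the embedded ACGT motif scores highest
```

Lowering the threshold raises sensitivity but floods the output with false positives, which is the weakness of plain PWM scanning that motivates feature-richer models such as DRAF.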

  2. Cost-effective improvements of a rotating platform by integration of a high-accuracy inclinometer and encoders for attitude evaluation

    International Nuclear Information System (INIS)

    Wen, Chenyang; He, Shengyang; Hu, Peida; Bu, Changgen

    2017-01-01

    Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed attitude error determination method was examined through rotating experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the proposed attitude error determination method was used. This decrease validates the effectiveness of the compensation method. Experimental results demonstrated that the integration of the inclinometer and encoders improved the performance of the low-cost two-axis rotating platform in terms of attitude accuracy. (paper)
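The initial-attitude compensation idea can be illustrated with a minimal sketch: compute pitch and roll from a static accelerometer/inclinometer reading, then estimate a constant misalignment offset against reference values and subtract it. All numbers below are hypothetical:

```python
import numpy as np

def accel_to_pitch_roll(ax, ay, az):
    """Pitch and roll (rad) from a static accelerometer triple,
    assuming gravity along +z in the level frame."""
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    roll = np.arctan2(ay, az)
    return pitch, roll

# Hypothetical calibration: the platform's reported attitude carries a
# constant misalignment offset; estimate it against reference readings
# (e.g. from the high-accuracy inclinometer) and subtract it.
reference = np.array([[0.00, 0.00], [0.10, 0.05], [-0.08, 0.12]])  # true (pitch, roll)
measured = reference + np.array([0.011, -0.007])                   # constant bias
bias = (measured - reference).mean(axis=0)
corrected = measured - bias
```

In practice the reference attitudes would come from the inclinometer and encoders during a calibration sweep rather than being known exactly.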

  3. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized
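The record above is truncated, so as a generic illustration only, here is a minimal bootstrap particle filter for a 1-D target (predict, weight, resample), not the authors' MR-guided tracker:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter for a 1-D random-walk target:
    predict with process noise q, weight by a Gaussian likelihood
    with observation noise r, then resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)             # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)           # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0, 0.1, 60))    # synthetic target motion
obs = true_path + rng.normal(0, 0.5, 60)         # noisy measurements
est = particle_filter(obs)
```

MR-guided variants replace the scalar Gaussian likelihood with an image-similarity term evaluated on each cine frame.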

  4. Evaluation of accuracy of B-spline transformation-based deformable image registration with different parameter settings for thoracic images.

    Science.gov (United States)

    Kanai, Takayuki; Kadoya, Noriyuki; Ito, Kengo; Onozato, Yusuke; Cho, Sang Yong; Kishi, Kazuma; Dobashi, Suguru; Umezawa, Rei; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2014-11-01

    Deformable image registration (DIR) is a fundamental technique for adaptive radiotherapy and image-guided radiotherapy. However, further improvement of DIR accuracy is still needed. We evaluated the accuracy of B-spline transformation-based DIR implemented in elastix. This registration package is largely based on the Insight Segmentation and Registration Toolkit (ITK), and several new functions were implemented to achieve high DIR accuracy. The purpose of this study was to clarify whether the new functions implemented in elastix are useful for improving DIR accuracy. Thoracic 4D computed tomography images of ten patients with esophageal or lung cancer were studied. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomical landmarks that had been manually identified. DIR between peak-inhale and peak-exhale images was performed with four types of parameter settings. The first represents the original ITK (Parameter 1). The second employs the new functions of elastix (Parameter 2), the third was created to verify whether the new functions improve DIR accuracy while keeping the computational time (Parameter 3), and the last partially employs a new function (Parameter 4). Registration errors for these parameter settings were calculated using the manually determined landmark pairs. 3D registration errors (with standard deviation) over all cases were 1.78 (1.57), 1.28 (1.10), 1.44 (1.09) and 1.36 (1.35) mm for Parameters 1, 2, 3 and 4, respectively, indicating that the new functions are useful for improving DIR accuracy, even while maintaining computational time, and that this B-spline-based DIR could be used clinically to achieve high-accuracy adaptive radiotherapy. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
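The landmark-based evaluation used here reduces to Euclidean distances between paired points expressed in physical units. A minimal sketch with hypothetical landmarks and voxel spacing:

```python
import numpy as np

def landmark_errors(mapped_pts, fixed_pts, spacing=(1.0, 1.0, 1.0)):
    """3-D target registration error (mm) for paired landmarks,
    given voxel-index coordinates and voxel spacing in mm."""
    d = (np.asarray(mapped_pts) - np.asarray(fixed_pts)) * np.asarray(spacing)
    return np.linalg.norm(d, axis=1)

# Hypothetical landmark pairs (voxel indices) and 1 x 1 x 2.5 mm voxels
fixed = [(10, 20, 5), (40, 42, 8)]
mapped = [(11, 20, 5), (40, 45, 8)]    # landmarks after applying the DIR field
err = landmark_errors(mapped, fixed, spacing=(1.0, 1.0, 2.5))
mean_err, sd_err = err.mean(), err.std()
```

Reporting the mean and standard deviation of `err` over all landmark pairs gives numbers directly comparable to the millimetre figures quoted in the record.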

  5. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    International Nuclear Information System (INIS)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki; Ota, Keishin; Matsuda, Tsuyoshi; Tonomura, Akira

    2012-01-01

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: ► A modified phase-shifting electron holography was proposed. ► The time variation of mean intensity and contrast of holograms were corrected. ► These corrections lead to a great improvement of the resultant phase accuracy. ► A phase accuracy of about 1/4000 rad was achieved from experimental results.
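The underlying reconstruction can be sketched with the classic four-step phase-shifting formula, phi = atan2(I4 - I2, I1 - I3), preceded by a simple per-frame normalization that stands in for the paper's correction of mean-intensity and contrast drift. The synthetic data and the min/max normalization scheme below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def normalize_frame(f):
    """Cancel mean-intensity and contrast drift by mapping each
    hologram's [min, max] range to [-1, 1] (valid here because the
    synthetic fringe phase sweeps more than a full cycle)."""
    lo, hi = f.min(), f.max()
    return (2.0 * f - hi - lo) / (hi - lo)

def phase_from_shifts(frames):
    """Four-step phase-shifting reconstruction after normalization:
    phi = atan2(I4 - I2, I1 - I3)."""
    n = [normalize_frame(f) for f in frames]
    return np.arctan2(n[3] - n[1], n[0] - n[2])

x = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
phi = 4.0 * np.sin(x)                          # "true" phase, > one full cycle
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
means = [1.00, 1.10, 0.95, 1.05]               # drifting mean intensity
contrasts = [0.50, 0.45, 0.54, 0.48]           # drifting contrast
frames = [a + b * np.cos(phi + d) for a, b, d in zip(means, contrasts, shifts)]
phi_hat = phase_from_shifts(frames)
wrap_err = np.angle(np.exp(1j * (phi_hat - phi)))   # phase error modulo 2*pi
```

Without the normalization step, the per-frame drifts in `means` and `contrasts` would bias the arctangent, which is the failure mode the record describes.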

  6. Prediction of lung tumour position based on spirometry and on abdominal displacement: Accuracy and reproducibility

    International Nuclear Information System (INIS)

    Hoisak, Jeremy D.P.; Sixel, Katharina E.; Tirona, Romeo; Cheung, Patrick C.F.; Pignol, Jean-Philippe

    2006-01-01

    Background and purpose: A simulation investigating the accuracy and reproducibility of a tumour motion prediction model over clinical time frames is presented. The model is formed from surrogate and tumour motion measurements, and used to predict the future position of the tumour from surrogate measurements alone. Patients and methods: Data were acquired from five non-small cell lung cancer patients, on 3 days. Measurements of respiratory volume by spirometry and abdominal displacement by a real-time position tracking system were acquired simultaneously with X-ray fluoroscopy measurements of superior-inferior tumour displacement. A model of tumour motion was established and used to predict future tumour position, based on surrogate input data. The calculated position was compared against true tumour motion as seen on fluoroscopy. Three different imaging strategies, pre-treatment, pre-fraction and intrafractional imaging, were employed in establishing the fitting parameters of the prediction model. The impact of each imaging strategy upon accuracy and reproducibility was quantified. Results: When establishing the predictive model using pre-treatment imaging, four of five patients exhibited poor interfractional reproducibility for either surrogate in subsequent sessions. Simulating the formulation of the predictive model prior to each fraction resulted in improved interfractional reproducibility. The accuracy of the prediction model was only improved in one of five patients when intrafractional imaging was used. Conclusions: Employing a prediction model established from measurements acquired at planning resulted in localization errors. Pre-fractional imaging improved the accuracy and reproducibility of the prediction model. Intrafractional imaging was of less value, suggesting that the accuracy limit of a surrogate-based prediction model is reached with once-daily imaging
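Such a surrogate-based prediction model is, at its core, a regression fitted from simultaneous surrogate and fluoroscopy measurements. A hypothetical least-squares sketch with synthetic signals and illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: surrogate signals (abdominal displacement,
# respiratory volume) paired with fluoroscopy-measured SI tumour position.
n = 120
abdomen = np.sin(np.linspace(0, 12 * np.pi, n))              # surrogate 1
volume = 0.8 * abdomen + 0.05 * rng.normal(size=n)           # surrogate 2
tumour_si = 6.0 * abdomen + 1.5 + 0.1 * rng.normal(size=n)   # tumour position, mm

# Fit the linear prediction model (ordinary least squares with intercept)
A = np.column_stack([abdomen, volume, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, tumour_si, rcond=None)

predicted = A @ coef
rmse = np.sqrt(np.mean((predicted - tumour_si) ** 2))
```

The record's finding corresponds to the coefficients drifting between sessions: a fit from planning-day data mispredicts later fractions, so re-fitting `coef` before each fraction restores accuracy.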

  7. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    Full Text Available This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.
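The delay-spread quantity evaluated in such campaigns is typically the RMS delay spread of the power delay profile, i.e. the power-weighted standard deviation of path delays. A minimal sketch with a hypothetical four-path profile:

```python
import numpy as np

def rms_delay_spread(delays_ns, powers):
    """RMS delay spread of a power delay profile: the power-weighted
    standard deviation of the path delays."""
    p = np.asarray(powers, dtype=float)
    t = np.asarray(delays_ns, dtype=float)
    p = p / p.sum()                      # normalize to a probability weight
    mean_delay = (p * t).sum()
    return np.sqrt((p * (t - mean_delay) ** 2).sum())

# Hypothetical 4-path profile (delays in ns, linear power)
tau = rms_delay_spread([0, 50, 120, 200], [1.0, 0.5, 0.25, 0.1])
```

Comparing this statistic between measured and ray-traced profiles is the standard way to score frequency-selectivity predictions.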

  8. The Improvement of Behavior Recognition Accuracy of Micro Inertial Accelerometer by Secondary Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-05-01

    Full Text Available Behaviors of “still”, “walking”, “running”, “jumping”, “upstairs” and “downstairs” can be recognized by a low-cost micro inertial accelerometer. By using the features as inputs to a well-trained BP artificial neural network selected as the classifier, those behaviors can be recognized, but experimental results show that the recognition accuracy is not satisfactory. This paper presents a secondary recognition algorithm and combines it with the BP artificial neural network to improve the recognition accuracy. The algorithm is verified on the Android mobile platform, and the recognition accuracy can be improved by more than 8 %. Extensive testing and statistical analysis show that the recognition accuracy can reach 95 % through the BP artificial neural network and the secondary recognition, which is a reasonably good result from a practical point of view.

  9. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Ota, Keishin, E-mail: ota@microphase.co.jp [Microphase Co., Ltd., Onigakubo 1147-9, Tsukuba, Ibaragi 300-2651 (Japan); Matsuda, Tsuyoshi [Japan Science and Technology Agency, Kawaguchi-shi, Saitama 332-0012 (Japan); Tonomura, Akira [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Okinawa Institute of Science and Technology, Graduate University, Kunigami, Okinawa 904-0495 (Japan); Central Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama 350-0395 (Japan)

    2012-07-15

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: ► A modified phase-shifting electron holography was proposed. ► The time variation of mean intensity and contrast of holograms were corrected. ► These corrections lead to a great improvement of the resultant phase accuracy. ► A phase accuracy of about 1/4000 rad was achieved from experimental results.

  10. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    Science.gov (United States)

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples
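The integral-image representation mentioned above lets box sums, and hence many texture features, be computed in constant time per region regardless of its size. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
s = box_sum(ii, 1, 1, 3, 3)   # sum of the central 2x2 block
```

In the paper's pipeline, each LDA-derived image band would get its own table, and the classifier queries only the box statistics it actually needs.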

  11. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n^2) work, where n is the order of the matrix A.
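The flavor of such iterative improvement can be sketched, though not SICEDR itself, with shifted inverse iteration plus a Rayleigh-quotient update for a symmetric matrix:

```python
import numpy as np

def refine_eigenpair(A, lam, x, iters=3, eps=1e-12):
    """Refine an approximate eigenpair: solve (A - lam*I) y = x,
    renormalize, and update lam with the Rayleigh quotient.
    A sketch of iterative improvement, not the SICEDR algorithm."""
    n = A.shape[0]
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        y = np.linalg.solve(A - (lam + eps) * np.eye(n), x)
        x = y / np.linalg.norm(y)
        lam = x @ A @ x                  # Rayleigh quotient update
    return lam, x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam0 = 4.6                               # rough eigenvalue guess
x0 = np.array([1.0, 0.5, 0.1])           # rough eigenvector guess
lam, x = refine_eigenpair(A, lam0, x0)
residual = np.linalg.norm(A @ x - lam * x)
```

For this matrix the eigenvalues are 3 and 3 ± √3, and the iteration locks onto the one nearest the initial guess; each solve costs O(n²) once a factorization of the shifted matrix is available, matching the work bound quoted in the record.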

  12. Improvement on the accuracy of beam bugs in linear induction accelerator

    International Nuclear Information System (INIS)

    Xie Yutong; Dai Zhiyong; Han Qing

    2002-01-01

    In linear induction accelerators the resistive wall monitors known as 'beam bugs' have been used as essential diagnostics of beam current and location. The author presents a new method that can improve the accuracy of these beam bugs when they are used for beam position measurements. With a fine beam simulation set, this method locates the beam position with an accuracy of 0.02 mm and can thus calibrate the beam bugs very well. Experimental results prove that the precision of beam position measurements can reach the submillimeter level.

  13. Training readers to improve their accuracy in grading Crohn's disease activity on MRI

    International Nuclear Information System (INIS)

    Tielbeek, Jeroen A.W.; Bipat, Shandra; Boellaard, Thierry N.; Nio, C.Y.; Stoker, Jaap

    2014-01-01

    To prospectively evaluate if training with direct feedback improves grading accuracy of inexperienced readers for Crohn's disease activity on magnetic resonance imaging (MRI). Thirty-one inexperienced readers assessed 25 cases as a baseline set. Subsequently, all readers received training and assessed 100 cases with direct feedback per case, randomly assigned to four sets of 25 cases. The cases in set 4 were identical to the baseline set. Grading accuracy, understaging, overstaging, mean reading times and confidence scores (scale 0-10) were compared between baseline and set 4, and between the four consecutive sets with feedback. Proportions of grading accuracy, understaging and overstaging per set were compared using logistic regression analyses. Mean reading times and confidence scores were compared by t-tests. Grading accuracy increased from 66 % (95 % CI, 56-74 %) at baseline to 75 % (95 % CI, 66-81 %) in set 4 (P = 0.003). Understaging decreased from 15 % (95 % CI, 9-23 %) to 7 % (95 % CI, 3-14 %) (P < 0.001). Overstaging did not change significantly (20 % vs 19 %). Mean reading time decreased from 6 min 37 s to 4 min 35 s (P < 0.001). Mean confidence increased from 6.90 to 7.65 (P < 0.001). During training, overall grading accuracy, understaging, mean reading times and confidence scores improved gradually. Inexperienced readers need training with at least 100 cases to achieve the literature reported grading accuracy of 75 %. (orig.)

  14. Improving Accuracy of Intrusion Detection Model Using PCA and optimized SVM

    Directory of Open Access Journals (Sweden)

    Sumaiya Thaseen Ikram

    2016-06-01

    Full Text Available Intrusion detection is very essential for providing security to different network domains and is mostly used for locating and tracing the intruders. There are many problems with traditional intrusion detection models (IDS such as low detection capability against unknown network attack, high false alarm rate and insufficient analysis capability. Hence the major scope of the research in this domain is to develop an intrusion detection model with improved accuracy and reduced training time. This paper proposes a hybrid intrusion detection model by integrating the principal component analysis (PCA and support vector machine (SVM. The novelty of the paper is the optimization of kernel parameters of the SVM classifier using automatic parameter selection technique. This technique optimizes the punishment factor (C and kernel parameter gamma (γ, thereby improving the accuracy of the classifier and reducing the training and testing time. The experimental results obtained on the NSL KDD and gurekddcup datasets show that the proposed technique performs better with higher accuracy, faster convergence speed and better generalization. Minimum resources are consumed as the classifier input requires a reduced feature set for optimum classification. A comparative analysis of hybrid models with the proposed model is also performed.
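The PCA stage of such a pipeline can be sketched in plain NumPy; the SVM with automatically selected C and γ is omitted here, and the data below are synthetic stand-ins for connection records:

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project data onto the top principal components
    (eigenvectors of the covariance matrix of the centered data)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    top = vecs[:, ::-1][:, :n_components]       # top components first
    explained = vals[::-1][:n_components] / vals.sum()
    return Xc @ top, explained

rng = np.random.default_rng(3)
# Hypothetical 41-feature records (NSL-KDD-like dimensionality), 200 samples
X = rng.normal(size=(200, 41)) @ rng.normal(size=(41, 41)) * 0.1
Z, explained = pca_fit_transform(X, n_components=10)
```

In the paper's setting, `Z` (rather than the full 41 features) becomes the SVM input, which is what shrinks training and testing time; the C/γ search would then run over this reduced representation, e.g. a log-spaced grid with cross-validation.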

  15. Analysis of RDSS positioning accuracy based on RNSS wide area differential technique

    Science.gov (United States)

    Xing, Nan; Su, RanRan; Zhou, JianHua; Hu, XiaoGong; Gong, XiuQiang; Liu, Li; He, Feng; Guo, Rui; Ren, Hui; Hu, GuangMing; Zhang, Lei

    2013-10-01

    The BeiDou Navigation Satellite System (BDS) provides a Radio Navigation Service System (RNSS) as well as a Radio Determination Service System (RDSS). RDSS users obtain positioning by responding to Master Control Center (MCC) inquiries via signals transmitted through a GEO satellite transponder, and the positioning result is calculated by the MCC with an elevation constraint. The primary error sources affecting RDSS positioning accuracy are the RDSS signal transceiver delay, the atmospheric transmission delay and the GEO satellite position error. During GEO orbit maneuvers, poor orbit forecast accuracy significantly degrades RDSS services. A real-time 3-D orbital correction method based on the wide-area differential technique is proposed to correct the orbital error. Observation results show that the method successfully improves positioning precision during orbital maneuvers, independently of the RDSS reference station; the improvement can reach up to 50%. Accurate calibration of the RDSS signal transceiver delay and a digital elevation map may play a critical role in highly precise RDSS positioning services.

  16. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    Energy Technology Data Exchange (ETDEWEB)

    Garmay, Andrey V., E-mail: andrew-garmay@yandex.ru; Oskolok, Kirill V.

    2017-04-15

    An X-ray fluorescence analysis algorithm is proposed which is based on the use of ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed which are suitable for analysis over a broad range of matrix compositions. To apply the algorithm to samples containing a significant amount of undetectable elements, the use of the dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is suggested. The technique's validity is shown by analysis of standard steel samples, a model metal-oxide mixture and iron ore samples.

  17. Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.

    Science.gov (United States)

    Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C

    2016-11-01

    To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM), and gauge its "added value" in the symptomatic setting. A retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operator characteristic (ROC) analysis. Sensitivity, specificity, and lesion size (versus MRI or histopathology) differences were calculated. Seventy-three percent of cases were malignant at final histology, 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation at CESM was significantly more accurate than LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  18. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    Science.gov (United States)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Considering the influence of round grating dividing error, rolling-wheel eccentricity and surface shape errors, the paper provides a correction method based on the rolling wheel: a composite error model that includes all of the influence factors above is built and then used to correct the non-circular angle measurement error of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error correction method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurements with accuracy requirements better than 5 μm/m.
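Eccentricity of a rolling wheel classically appears as low-order harmonics of the measured angle, so one hedged way to illustrate a composite error correction is to fit a small Fourier model to the measured angle error by least squares and subtract it. All coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# Hypothetical measured angle error: eccentricity contributes a
# first-harmonic term, dividing error a higher harmonic, plus noise (arcsec)
err_true = 3.0 * np.sin(theta) + 1.2 * np.cos(theta) + 0.4 * np.sin(2 * theta)
err_meas = err_true + 0.05 * rng.normal(size=theta.size)

# Least-squares fit of the composite harmonic error model
A = np.column_stack([np.sin(theta), np.cos(theta),
                     np.sin(2 * theta), np.cos(2 * theta),
                     np.ones_like(theta)])
coef, *_ = np.linalg.lstsq(A, err_meas, rcond=None)
corrected = err_meas - A @ coef        # residual error after correction
```

After subtraction, the residual is down at the noise floor, which is the mechanism by which a harmonic error model can lift angle (and hence diameter) measurement accuracy.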

  19. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    International Nuclear Information System (INIS)

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

    We evaluated the diagnostic accuracy of diuretic renal scans with standardization in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. With standardization, sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys. Without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results included early injection of Lasix before mixing of radioactivity post-pyeloplasty (10 cases), extrarenal pelvis (6 cases), and immature kidneys of neonates (3 cases). With standardization, the causes of false-positive results were markedly dilated systems post-pyeloplasty (2 cases), extrarenal pelvis (2 cases), immature kidney of a neonate (1 case), severe renal dysfunction (2 cases), and vesicoureteral reflux (1 case). Without standardization, false-positive results due to inadequate studies were common; after standardization, no false-positive results due to inadequate studies were found. False-positive results caused by dilated pelvo-calyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved by standardization. In conclusion, standardization of the diuretic renal scan significantly improved specificity and thus diagnostic accuracy in children with renal outflow tract obstruction.

  20. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  1. How social information can improve estimation accuracy in human groups.

    Science.gov (United States)

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
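The robustness point about Cauchy-distributed log-estimates can be illustrated directly with synthetic data: the sample mean of Cauchy data does not concentrate, while the median does, which is why the choice of a "properly defined" collective estimate matters:

```python
import numpy as np

rng = np.random.default_rng(5)

true_log_value = 4.0
# Individual log-estimates modeled as Cauchy-distributed around the truth,
# mimicking the heavy tails reported for low-prior-knowledge quantities
log_estimates = true_log_value + rng.standard_cauchy(size=1001)

mean_err = abs(log_estimates.mean() - true_log_value)      # unstable for Cauchy
median_err = abs(np.median(log_estimates) - true_log_value)  # concentrates ~ 1/sqrt(n)
```

Because the Cauchy distribution has no finite mean, averaging the raw estimates is a poor aggregator here, whereas order statistics such as the median remain well behaved as the group grows.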

  2. Effectiveness of blood pressure educational and evaluation program for the improvement of measurement accuracy among nurses.

    Science.gov (United States)

    Rabbia, Franco; Testa, Elisa; Rabbia, Silvia; Praticò, Santina; Colasanto, Claudia; Montersino, Federica; Berra, Elena; Covella, Michele; Fulcheri, Chiara; Di Monaco, Silvia; Buffolo, Fabrizio; Totaro, Silvia; Veglio, Franco

    2013-06-01

    To assess the procedure for measuring blood pressure (BP) among hospital nurses and to assess whether a training program would improve technique and accuracy. 160 nurses from Molinette Hospital were included in the study. The program was based upon theoretical and practical lessons; it was one day long and was held by trained nurses and physicians with experience in the Hypertension Unit. An evaluation of the nurses' measuring technique and accuracy was performed before and after the program using a 9-item checklist. Moreover, we calculated the differences between measured and effective BP values before and after the training program. At baseline evaluation, we observed inadequate performance on some points of clinical BP measurement technique: only 10% of nurses inspected the arm diameter before placing the cuff, 4% measured BP in both arms, 80% placed the head of the stethoscope under the cuff, and 43% did not remove all clothing that covered the location of cuff placement, did not have the patient seated comfortably with legs uncrossed, or did not have the patient's back and arms supported. After the training we found a significant improvement in technique for all items. We did not observe any significant difference in measurement knowledge between nurses working in different settings, such as medical or surgical departments. Periodic education in BP measurement may be required, and this may significantly improve the technique and consequently the accuracy.

  3. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    Directory of Open Access Journals (Sweden)

    Xiaoli Ding

    2009-02-01

    Full Text Available Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie point matching, and measures of interferogram quality.

  4. Improving forecasting accuracy of medium and long-term runoff using artificial neural network based on EEMD decomposition.

    Science.gov (United States)

    Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo

    2015-05-01

    Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with the ensemble empirical mode decomposition (EEMD) is presented for forecasting medium and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique for attaining deeper insight into the data characteristics. Then all IMF components and the residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series, from Biuliuhe and Mopanshan in China, are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and that the proposed EEMD-ANN model attains a significant improvement over the ANN approach in medium and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
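    The decompose-forecast-recombine pattern used by the EEMD-ANN model can be sketched generically. The sketch below is illustrative only: a moving-average trend split stands in for EEMD, a two-point linear extrapolation stands in for the per-component ANN forecasters, and all function names and data values are hypothetical.

```python
from statistics import fmean

def moving_average(series, window=3):
    # Toy stand-in for EEMD: extract a smooth "trend" component with a
    # centred moving average (the window shrinks at the series edges).
    n = len(series)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        out.append(fmean(series[lo:hi]))
    return out

def decompose(series):
    # The components sum back to the original series at every index.
    trend = moving_average(series)
    residual = [x - t for x, t in zip(series, trend)]
    return [trend, residual]

def forecast_component(component):
    # Stand-in for a per-component ANN: one-step linear extrapolation.
    return 2 * component[-1] - component[-2]

def ensemble_forecast(series):
    # EEMD-ANN pattern: forecast each component separately, then sum.
    return sum(forecast_component(c) for c in decompose(series))

runoff = [112.0, 98.0, 120.0, 105.0, 130.0, 118.0, 125.0]
print(ensemble_forecast(runoff))
```

In the real model each component would get its own trained ANN; the key structural idea is only that the component forecasts are summed to recover a forecast of the original series.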

  5. Evaluation of scanning 2D barcoded vaccines to improve data accuracy of vaccines administered.

    Science.gov (United States)

    Daily, Ashley; Kennedy, Erin D; Fierro, Leslie A; Reed, Jenica Huddleston; Greene, Michael; Williams, Warren W; Evanson, Heather V; Cox, Regina; Koeppl, Patrick; Gerlach, Ken

    2016-11-11

    Accurately recording vaccine lot number, expiration date, and product identifiers in patient records is an important step in improving supply chain management and patient safety in the event of a recall. These data are being encoded in two-dimensional (2D) barcodes on most vaccine vials and syringes. Using electronic vaccine administration records, we evaluated the accuracy of lot number and expiration date entered using 2D barcode scanning compared to traditional manual or drop-down list entry methods. We analyzed 128,573 electronic records of vaccines administered at 32 facilities. We compared the accuracy of records entered using 2D barcode scanning with those entered using traditional methods using chi-square tests and multilevel logistic regression. When 2D barcodes were scanned, lot number data accuracy was 1.8 percentage points higher (94.3% vs. 96.1%, a statistically significant difference). Manufacturer, month vaccine was administered, and vaccine type were associated with variation in accuracy for both lot number and expiration date. Two-dimensional barcode scanning shows promise for improving data accuracy of vaccine lot number and expiration date records. Adapting systems to further integrate with 2D barcoding could help increase adoption of 2D barcode scanning technology. Published by Elsevier Ltd.
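    For context, vaccine 2D (Data Matrix) barcodes carry GS1 Application Identifiers: AI 01 is the 14-digit GTIN, AI 17 the YYMMDD expiration date, and AI 10 the variable-length lot number. A minimal parser sketch follows; the payload and GTIN below are hypothetical, and a real scanner integration must handle many more AIs than these three.

```python
GS = "\x1d"  # GS1 group separator terminating variable-length fields

# Application Identifiers commonly found on vaccine barcodes:
# AI 01 = GTIN (14 digits), AI 17 = expiration date YYMMDD (6 digits),
# AI 10 = lot number (variable length, GS-terminated).
FIXED_AI = {"01": 14, "17": 6}
VARIABLE_AI = {"10"}

def parse_gs1(data):
    fields = {}
    i = 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED_AI:
            n = FIXED_AI[ai]
            fields[ai] = data[i:i + n]
            i += n
        elif ai in VARIABLE_AI:
            end = data.find(GS, i)
            if end == -1:
                end = len(data)  # the last field needs no terminator
            fields[ai] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"unsupported AI: {ai!r}")
    return fields

# Hypothetical payload: GTIN + expiry 30 Nov 2025 + lot A12B3.
payload = "01" + "00358160901234" + "17" + "251130" + "10" + "A12B3"
fields = parse_gs1(payload)
print(fields)
```

Scanning populates the lot and expiration fields directly from the payload, which is why it removes the transcription errors that manual entry introduces.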

  6. Algorithms and parameters for improved accuracy in physics data libraries

    International Nuclear Information System (INIS)

    Batič, M; Hoff, G; Pia, M G; Saracco, P; Han, M; Kim, C H; Hauf, S; Kuster, M; Seo, H

    2012-01-01

    Recent efforts for the improvement of the accuracy of physics data libraries used in particle transport are summarized. Results are reported about a large scale validation analysis of atomic parameters used by major Monte Carlo systems (Geant4, EGS, MCNP, Penelope etc.); their contribution to the accuracy of simulation observables is documented. The results of this study motivated the development of a new atomic data management software package, which optimizes the provision of state-of-the-art atomic parameters to physics models. The effect of atomic parameters on the simulation of radioactive decay is illustrated. Ideas and methods to deal with physics models applicable to different energy ranges in the production of data libraries, rather than at runtime, are discussed.

  7. An NMR-based scoring function improves the accuracy of binding pose predictions by docking by two orders of magnitude

    Energy Technology Data Exchange (ETDEWEB)

    Orts, Julien [EMBL, Structure and Computational Biology Unit (Germany); Bartoschek, Stefan [Industriepark Hoechst, Sanofi-Aventis Deutschland GmbH, R and D LGCR/Parallel Synthesis and Natural Products (Germany); Griesinger, Christian [Max Planck Institute for Biophysical Chemistry (Germany); Monecke, Peter [Industriepark Hoechst, Sanofi-Aventis Deutschland GmbH, R and D LGCR/Structure, Design and Informatics (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@embl.de [EMBL, Structure and Computational Biology Unit (Germany)

    2012-01-15

    Low-affinity ligands can be efficiently optimized into high-affinity drug leads by structure based drug design when atomic-resolution structural information on the protein/ligand complexes is available. In this work we show that the use of a few, easily obtainable, experimental restraints improves the accuracy of the docking experiments by two orders of magnitude. The experimental data are measured in nuclear magnetic resonance spectra and consist of protein-mediated NOEs between two competitively binding ligands. The methodology can be widely applied as the data are readily obtained for low-affinity ligands in the presence of non-labelled receptor at low concentration. The experimental inter-ligand NOEs are efficiently used to filter and rank complex model structures that have been pre-selected by docking protocols. This approach dramatically reduces the degeneracy and inaccuracy of the chosen model in docking experiments, is robust with respect to inaccuracy of the structural model used to represent the free receptor and is suitable for high-throughput docking campaigns.

  8. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Peng Lu

    2018-01-01

    Full Text Available Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks have high variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined automatically, and unsupervised training is combined with supervised optimization, ensuring prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and that the variance of the prediction accuracy was 5.78 and 4.46, respectively.

  9. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has visual clarity, which is inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min^-1 and improve machining accuracy.

  10. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...... in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method...... of estimating travel time using multiple detection events has been conducted. In this study different aspects of BT detection zone, including size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak...

  11. Investigation into the accuracy of a proposed laser diode based multilateration machine tool calibration system

    International Nuclear Information System (INIS)

    Fletcher, S; Longstaff, A P; Myers, A

    2005-01-01

    Geometric and thermal calibration of CNC machine tools is required in modern machine shops, with volumetric accuracy assessment becoming the standard machine tool qualification in many industries. Laser interferometry is a popular method of measuring the errors, but this and other alternatives tend to be expensive, time consuming or both. This paper investigates the feasibility of using a laser diode based system that capitalises on the low cost nature of the diode to provide multiple laser sources for fast error measurement using multilateration. Laser diode module technology enables improved wavelength stability and spectral linewidth, which are important factors for laser interferometry. With more than three laser sources, the set-up process can be greatly simplified while providing flexibility in the location of the laser sources, improving the accuracy of the system.

  12. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy

    Science.gov (United States)

    Peña, M. A.; Liao, R.; Brenning, A.

    2017-06-01

    This study assesses the potential of spectrotemporal indices derived from satellite image time series (SITS) to improve the classification accuracy of fruit-tree crops. Six major fruit-tree crop types in the Aconcagua Valley, Chile, were classified by applying various linear discriminant analysis (LDA) techniques on a Landsat-8 time series of nine images corresponding to the 2014-15 growing season. As features we not only used the complete spectral resolution of the SITS, but also all possible normalized difference indices (NDIs) that can be constructed from any two bands of the time series, a novel approach to derive features from SITS. Due to the high dimensionality of this "enhanced" feature set we used the lasso and ridge penalized variants of LDA (PLDA). Although classification accuracies yielded by the standard LDA applied on the full-band SITS were good (misclassification error rate, MER = 0.13), they were further improved by 23% (MER = 0.10) with ridge PLDA using the enhanced feature set. The most important bands to discriminate the crops of interest were mainly concentrated on the first two image dates of the time series, corresponding to the crops' greenup stage. Despite the high predictor weights provided by the red and near infrared bands, typically used to construct greenness spectral indices, other spectral regions were also found important for the discrimination, such as the shortwave infrared band at 2.11-2.19 μm, sensitive to foliar water changes. These findings support the usefulness of spectrotemporal indices in the context of SITS-based crop type classifications, which until now have been mainly constructed by the arithmetic combination of two bands of the same image date in order to derive greenness temporal profiles like those from the normalized difference vegetation index.
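    The "all possible normalized difference indices" feature construction described above reduces, per pixel and image date, to NDI(i, j) = (b_i - b_j) / (b_i + b_j) over every band pair. A small sketch, where the band names and reflectance values are hypothetical:

```python
from itertools import combinations

def normalized_difference_indices(bands):
    # All pairwise NDIs for one pixel and one image date:
    # NDI(i, j) = (b_i - b_j) / (b_i + b_j).
    ndis = {}
    for (name_i, b_i), (name_j, b_j) in combinations(sorted(bands.items()), 2):
        denom = b_i + b_j
        ndis[f"NDI({name_i},{name_j})"] = (b_i - b_j) / denom if denom else 0.0
    return ndis

# Hypothetical surface reflectances for one pixel on one date.
pixel = {"red": 0.08, "nir": 0.42, "swir2": 0.15}
features = normalized_difference_indices(pixel)
print(features)  # NDI(nir,red) is the classic NDVI
```

With k bands per date, this yields k(k-1)/2 indices per date, which is why the resulting feature set is high-dimensional and calls for penalized LDA.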

  13. Accuracy and safety of ward based pleural ultrasound in the Australian healthcare system.

    Science.gov (United States)

    Hammerschlag, Gary; Denton, Matthew; Wallbridge, Peter; Irving, Louis; Hew, Mark; Steinfort, Daniel

    2017-04-01

    Ultrasound has been shown to improve the accuracy and safety of pleural procedures. Studies to date have been performed in large, specialized units, where pleural procedures are performed by a small number of highly specialized physicians. There are no studies examining the safety and accuracy of ultrasound in the Australian healthcare system, where procedures are performed by junior doctors with a high staff turnover. We performed a retrospective review of the ultrasound database in the Respiratory Department at the Royal Melbourne Hospital to determine the accuracy and complications associated with pleural procedures. A total of 357 ultrasounds were performed between October 2010 and June 2013. Accuracy of pleural procedures was 350 of 356 (98.3%). Aspiration of pleural fluid was successful in 121 of 126 (96%) of patients. Two (0.9%) patients required chest tube insertion for management of pneumothorax. There were no recorded pleural infections, haemorrhage or viscera puncture. Ward-based ultrasound for pleural procedures is safe and accurate when performed by appropriately trained and supported junior medical officers. Our findings support this model of pleural service care in the Australian healthcare system. © 2016 Asian Pacific Society of Respirology.

  14. Inference of Altimeter Accuracy on Along-track Gravity Anomaly Recovery

    Directory of Open Access Journals (Sweden)

    LI Yang

    2015-04-01

    Full Text Available A correlation model between along-track gravity anomaly accuracy, spatial resolution and altimeter accuracy is proposed. This new model is based on along-track gravity anomaly recovery and resolution estimation. Firstly, an error propagation formula of along-track gravity anomaly is derived from the principle of satellite altimetry. Then the mathematics between the SNR (signal to noise ratio) and cross spectral coherence is deduced. The analytical correlation between altimeter accuracy and spatial resolution is finally obtained from the results above. Numerical simulation results show that along-track gravity anomaly accuracy is proportional to altimeter accuracy, while spatial resolution has a power relation with altimeter accuracy; e.g., with altimeter accuracy improving m times, gravity anomaly accuracy improves m times while spatial resolution improves m^0.4644 times. This model is verified by real-world data.

  15. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    Science.gov (United States)

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
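    The full IMM filter is beyond a short sketch, but the core benefit of fusing GBAS, ADS-B, MLAT and WAM positions can be illustrated with a single inverse-variance-weighted fusion step, which always yields a fused variance no larger than that of the best individual sensor. The sensor values below are hypothetical, and this is a simplification of (not a substitute for) the IMM approach in the paper.

```python
def fuse(estimates):
    # Inverse-variance weighting of independent (value, variance) estimates;
    # the fused variance is 1 / sum(1/var_i), never larger than the best input.
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * x for w, (x, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical along-track position estimates in metres (value, variance).
sensors = [(1002.0, 25.0),   # e.g. ADS-B
           (998.0, 100.0),   # e.g. MLAT
           (1001.0, 50.0)]   # e.g. WAM
position, variance = fuse(sensors)
print(position, variance)
```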

  16. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    Science.gov (United States)

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited the medical notes and coding to assess the accuracy, documentation by the junior doctors and also to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August-October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit but with 5% documentation errors. Physician involvement needs enhancing to improve the effectiveness and to ensure clinical safety.

  17. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Science.gov (United States)

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other off-set errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method, a statistically significant difference. Prediction errors were likewise significantly lower with PCI than with ultrasound when the Haigis method was used. In conclusion, the accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation ACD prediction algorithms.

  18. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    Science.gov (United States)

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to contain acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  19. Improvement of CD-SEM mark position measurement accuracy

    Science.gov (United States)

    Kasa, Kentaro; Fukuhara, Kazuya

    2014-04-01

    CD-SEM is now attracting attention as a tool that can accurately measure the positional error of device patterns. However, the measurement accuracy can degrade due to pattern asymmetry, as in the case of image based overlay (IBO) and diffraction based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias has been proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.

  20. Explicit area-based accuracy assessment for mangrove tree crown delineation using Geographic Object-Based Image Analysis (GEOBIA)

    Science.gov (United States)

    Kamal, Muhammad; Johansen, Kasper

    2017-10-01

    Effective mangrove management requires spatially explicit information in the form of a mangrove tree crown map as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity, measuring the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high spatial resolution aerial photograph (7.5 cm pixel size). Ten random points with a 10 m radius circular buffer were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for OQ and OA, respectively. The overall quality reflects the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. The overall accuracy of 68%, in turn, is the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
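    Once the overlay areas between classified and reference polygons are known, the area-based measures reduce to simple ratios. A sketch assuming the intersection areas have already been computed by a GIS overlay; the numbers are hypothetical, chosen only so that OQ matches the 64% reported above:

```python
def area_based_accuracy(tp, fp, fn, tn=0.0):
    # Overlay areas (same units, e.g. m^2):
    # tp = classified AND referenced as tree crown, fp = commission area,
    # fn = omission area, tn = correctly classified background (canopy gaps).
    return {
        "OQ": tp / (tp + fp + fn),              # overall quality
        "UA": tp / (tp + fp),                   # user's accuracy
        "PA": tp / (tp + fn),                   # producer's accuracy
        "OA": (tp + tn) / (tp + fp + fn + tn),  # overall accuracy
    }

# Hypothetical areas within the assessment buffers.
m = area_based_accuracy(tp=640.0, fp=180.0, fn=180.0, tn=500.0)
print(m)
```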

  1. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.

  2. Improving the accuracy of Møller-Plesset perturbation theory with neural networks

    Science.gov (United States)

    McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.

    2017-10-01

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.

  3. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    Directory of Open Access Journals (Sweden)

    HosseiniAliabadi S. J.

    2015-06-01

    Full Text Available Background: The angular dependence of the response of TLD cards may cause environmental dosimetry results to deviate from the true value, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D setting of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed at right angles to one another in a cylindrical holder and calibrated using 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result: The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large scale environmental monitoring with higher accuracy.

  4. Phase accuracy evaluation for phase-shifting fringe projection profilometry based on uniform-phase coded image

    Science.gov (United States)

    Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu

    2018-06-01

    Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industry measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. The phase accuracy is among the dominant factors that determine the 3D measurement accuracy. Evaluation of the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been deeply researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to be any value, and once the phase value of a UCI pixel is the same as that of a pixel of a corresponding sinusoidal fringe pattern, their phase accuracy values are approximate. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to experimentally research the property of the random and gamma phase errors in PSFPP without the aid of a mathematical model to express random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
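    The UCI evaluation rests on comparing a retrieved phase with a phase value that was set by construction. The retrieval itself uses the standard N-step phase-shifting formula; a sketch with one simulated pixel (the intensity offset, modulation depth and 4-step shift count are arbitrary choices, not values from the paper):

```python
import math

def retrieve_phase(intensities):
    # Standard N-step phase-shifting retrieval with equally spaced shifts
    # delta_k = 2*pi*k/N:  phi = atan2(-sum I_k sin d_k, sum I_k cos d_k).
    n = len(intensities)
    s = sum(i * math.sin(2 * math.pi * k / n) for k, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * k / n) for k, i in enumerate(intensities))
    return math.atan2(-s, c)

# Simulate a pixel whose phase is set by construction, as a UCI allows.
true_phi = 0.7
frames = [128 + 100 * math.cos(true_phi + 2 * math.pi * k / 4) for k in range(4)]
print(retrieve_phase(frames))  # recovers true_phi up to rounding error
```

On noisy captured images the difference between the retrieved phase and the set phase is exactly the per-pixel phase error the UCI methods quantify.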

  5. An Improved Droop Control Method for DC Microgrids Based on Low Bandwidth Communication with DC Bus Voltage Restoration and Enhanced Current Sharing Accuracy

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Guerrero, Josep M.; Sun, Kai

    2014-01-01

    Droop control is the basic control method for load current sharing in dc microgrid applications. The conventional dc droop control method is realized by linearly reducing the dc output voltage as the output current increases. This method has two limitations. First, with the consideration of line...... resistance in a droop-controlled dc microgrid, since the output voltage of each converter cannot be exactly the same, the output current sharing accuracy is degraded. Second, the DC bus voltage deviation increases with the load due to the droop action. In this paper, in order to improve the performance......, and the LBC system is only used for changing the values of the dc voltage and current. Hence, a decentralized control scheme is accomplished. The simulation test based on Matlab/Simulink and the experimental validation based on a 2×2.2 kW prototype were implemented to demonstrate the proposed approach....
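
The two droop limitations named in the abstract can be seen in a minimal steady-state sketch (illustrative numbers; this is not the paper's LBC restoration scheme). Converter i obeys v_i = V_nom - R_d·i_i, the bus sees v_i - r_line_i·i_i = V_bus, hence i_i = (V_nom - V_bus)/(R_d + r_line_i): unequal line resistances skew the sharing, and raising R_d fixes sharing at the cost of a larger bus-voltage deviation.

```python
import numpy as np

def droop_steady_state(v_nom, r_droop, r_lines, r_load):
    # Per-branch conductance seen from the droop reference to the bus.
    g = 1.0 / (r_droop + np.asarray(r_lines, dtype=float))
    # Bus voltage from KCL: V_bus = R_load * sum_i (V_nom - V_bus) * g_i.
    v_bus = r_load * g.sum() * v_nom / (1.0 + r_load * g.sum())
    return v_bus, (v_nom - v_bus) * g

v_nom, r_lines, r_load = 48.0, [0.1, 0.4], 4.0
v_lo, i_lo = droop_steady_state(v_nom, 0.2, r_lines, r_load)  # small droop gain
v_hi, i_hi = droop_steady_state(v_nom, 2.0, r_lines, r_load)  # large droop gain
print(round(v_lo, 2), round(i_lo[0] / i_lo[1], 2))  # good voltage, poor sharing
print(round(v_hi, 2), round(i_hi[0] / i_hi[1], 2))  # poor voltage, good sharing
```

The paper's contribution is precisely to escape this trade-off by shifting the droop reference using averaged voltage/current values exchanged over low bandwidth communication.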

  6. Composition-based statistics and translated nucleotide searches: Improving the TBLASTN module of BLAST

    Directory of Open Access Journals (Sweden)

    Schäffer Alejandro A

    2006-12-01

    Full Text Available Abstract Background TBLASTN is a mode of operation for BLAST that aligns protein sequences to a nucleotide database translated in all six frames. We present the first description of the modern implementation of TBLASTN, focusing on new techniques that were used to implement composition-based statistics for translated nucleotide searches. Composition-based statistics use the composition of the sequences being aligned to generate more accurate E-values, which allows for a more accurate distinction between true and false matches. Until recently, composition-based statistics were available only for protein-protein searches. They are now available as a command line option for recent versions of TBLASTN and as an option for TBLASTN on the NCBI BLAST web server. Results We evaluate the statistical and retrieval accuracy of the E-values reported by a baseline version of TBLASTN and by two variants that use different types of composition-based statistics. To test the statistical accuracy of TBLASTN, we ran 1000 searches using scrambled proteins from the mouse genome and a database of human chromosomes. To test retrieval accuracy, we modernize and adapt to translated searches a test set previously used to evaluate the retrieval accuracy of protein-protein searches. We show that composition-based statistics greatly improve the statistical accuracy of TBLASTN, at a small cost to the retrieval accuracy. Conclusion TBLASTN is widely used, as it is common to wish to compare proteins to chromosomes or to libraries of mRNAs. Composition-based statistics improve the statistical accuracy, and therefore the reliability, of TBLASTN results. The algorithms used by TBLASTN are not widely known, and some of the most important are reported here. The data used to test TBLASTN are available for download and may be useful in other studies of translated search algorithms.
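
How a raw alignment score becomes an E-value can be illustrated with the generic Karlin-Altschul relation E = K·m·n·exp(-λS) (the λ/K defaults below are placeholders, not NCBI's composition-adjusted values; composition-based statistics effectively re-estimate these parameters per sequence pair so that biased compositions do not inflate significance):

```python
import math

def e_value(score, m, n, lam=0.267, k=0.041):
    """Expected number of chance alignments scoring >= `score`."""
    return k * m * n * math.exp(-lam * score)

m, n = 400, 3_000_000        # query length x translated database length
print(e_value(60, m, n))             # generic parameters
print(e_value(60, m, n, lam=0.30))   # a stiffer, composition-adjusted lambda (assumed)
```

A larger effective λ for a compositionally biased pair shrinks the significance assigned to the same raw score, which is the mechanism behind the improved statistical accuracy reported here.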

  7. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.
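
The idea that high-probability pairs are more trustworthy can be sketched as a simple post-processing step (hypothetical probabilities; real values would come from the partition function or TurboFold, and real structure prediction is more involved than this greedy filter):

```python
# Keep pairs above a probability threshold, greedily by probability, so that
# no nucleotide is used in two pairs.

def select_pairs(pair_probs, threshold=0.5):
    chosen, used = [], set()
    for (i, j), p in sorted(pair_probs.items(), key=lambda kv: -kv[1]):
        if p >= threshold and i not in used and j not in used:
            chosen.append((i, j))
            used.update((i, j))
    return chosen

probs = {(1, 20): 0.95, (2, 19): 0.90, (2, 18): 0.40, (5, 15): 0.65,
         (5, 14): 0.62}
print(select_pairs(probs))   # [(1, 20), (2, 19), (5, 15)]
```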

  8. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Science.gov (United States)

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  9. Geoid undulation accuracy

    Science.gov (United States)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.
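
A cumulative undulation error of the kind quoted above is the root-sum-square of per-degree error contributions. A back-of-envelope sketch (the per-degree sigmas here are fabricated for illustration, not GEM-T3 or OSU91A values):

```python
import math

# err_cum(L) = sqrt( sum_{n=2}^{L} sigma_n^2 ), sigma_n in metres of geoid
# undulation error at spherical harmonic degree n.

def cumulative_error(sigma_by_degree, max_degree):
    total = sum(s ** 2 for n, s in sigma_by_degree.items() if n <= max_degree)
    return math.sqrt(total)

sigma = {n: 0.05 / n for n in range(2, 21)}   # fictitious per-degree errors (m)
print(cumulative_error(sigma, 10), cumulative_error(sigma, 20))
```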

  10. Inclusion of Population-specific Reference Panel from India to the 1000 Genomes Phase 3 Panel Improves Imputation Accuracy.

    Science.gov (United States)

    Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R

    2017-07-27

    Imputation is a computational method based on the principle of haplotype sharing, allowing enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using the genome-wide data from 1880 individuals, we demonstrate that WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves the imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.
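
The concordance check mentioned above reduces to the fraction of sites where imputed and re-sequenced genotypes agree. A toy version (hypothetical data; genotypes coded as 0/1/2 alternate-allele counts):

```python
def concordance(imputed, sequenced):
    """Fraction of sites where the imputed genotype matches sequencing."""
    return sum(a == b for a, b in zip(imputed, sequenced)) / len(sequenced)

imputed = [0, 1, 2, 1, 0, 2, 1, 1]
truth   = [0, 1, 2, 0, 0, 2, 1, 2]
print(concordance(imputed, truth))   # 6 of 8 sites agree -> 0.75
```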

  11. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    Science.gov (United States)

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-07

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve. Therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. For protein fold recognition (PFR) and structural class prediction (SCP), two steps are required: a feature extraction step and a classification step. Feature extraction techniques generally utilize syntactical-based information, evolutionary-based information and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme which aims to strategically select physicochemical attributes that will supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all the existing 544 physicochemical attributes using the proposed FCS scheme and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features, to show an improvement in the recognition and prediction performance on benchmark datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
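
A forward search over attributes can be sketched as a greedy loop: each round adds the attribute whose inclusion most improves a score, stopping when nothing helps (toy data and a toy class-separation scorer; this is not the paper's FCS scheme nor its 544 physicochemical attributes):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
informative = labels[:, None] + 0.5 * rng.normal(size=(200, 2))
noise = rng.normal(size=(200, 3))
X = np.hstack([informative, noise])          # columns 0-1 useful, 2-4 not

def separation(cols):
    """Distance between class means over pooled std for a column subset."""
    if not cols:
        return 0.0
    sub = X[:, cols]
    d = sub[labels == 0].mean(axis=0) - sub[labels == 1].mean(axis=0)
    return float(np.linalg.norm(d) / sub.std())

selected = []
while True:
    gains = {j: separation(selected + [j]) - separation(selected)
             for j in range(X.shape[1]) if j not in selected}
    if not gains:
        break
    best = max(gains, key=gains.get)
    if gains[best] <= 0:
        break
    selected.append(best)
print(selected)   # the informative columns should be picked first
```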

  12. Aerosol characteristics inversion based on the improved lidar ratio profile with the ground-based rotational Raman-Mie lidar

    Science.gov (United States)

    Ji, Hongzhu; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan

    2018-06-01

    An iterative method, based on a derived inverse relationship between the atmospheric backscatter coefficient and the aerosol lidar ratio, is proposed to invert the lidar ratio profile and the aerosol extinction coefficient. The feasibility of this method is investigated theoretically and experimentally. Simulation results show that the inversion accuracy of aerosol optical properties for the iterative method can be improved in the near-surface aerosol layer and the optically thick layer. Experimentally, as a result of the reduced insufficiency error and incoherence error, aerosol optical properties with higher accuracy can be obtained in the near-surface region and in the region of numerical derivative distortion. In addition, the particle component can be distinguished roughly based on this improved lidar ratio profile.
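
The iterative idea can be reduced to a bare-bones fixed point: alternate between a lidar-ratio estimate S and the backscatter it implies until neither changes. The closure used here (extinction α = S·β pinned by a "measurement", plus an assumed empirical S(β) relation) is purely illustrative, not the paper's derived inverse relationship:

```python
import math

alpha = 0.05                       # assumed extinction constraint (1/km)

def lidar_ratio(beta):
    """Assumed empirical lidar-ratio/backscatter relation (sr)."""
    return 40.0 + 5.0 * math.log10(beta * 1e3)

s = 50.0                           # initial lidar-ratio guess (sr)
for _ in range(50):
    beta = alpha / s               # backscatter implied by the current ratio
    s_new = lidar_ratio(beta)      # ratio implied by that backscatter
    if abs(s_new - s) < 1e-9:      # converged
        break
    s = s_new
print(round(s, 3), beta)
```

Because the update map is strongly contracting here, a handful of iterations suffices; the real profile-wise inversion applies the same alternation at every range bin.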

  13. New polymorphic tetranucleotide microsatellites improve scoring accuracy in the bottlenose dolphin Tursiops aduncus

    NARCIS (Netherlands)

    Nater, Alexander; Kopps, Anna M.; Kruetzen, Michael

    We isolated and characterized 19 novel tetranucleotide microsatellite markers in the Indo-Pacific bottlenose dolphin (Tursiops aduncus) in order to improve genotyping accuracy in applications like large-scale population-wide paternity and relatedness assessments. One hundred T. aduncus from Shark

  14. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    Science.gov (United States)

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small field applications, however, is limited due to the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for dosimetry in small megavoltage photon beams, with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (μCT) was investigated using four nominally identical small-volume (0.56 cm³) cylindrical ion chambers. Agreement between their measured relative volume and ionization measurements (within 2%) demonstrated the feasibility of volume determination through μCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise for accurate ion chamber calibration based partially on μCT. The accuracy of relative dose factor measurements in 6 MV stereotactic x-ray fields (5 to 40 mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20 mm). Chamber and field size specific correction factors (CSFQ), which account for perturbation of the secondary electron fluence, were calculated using Monte Carlo simulation methods (BEAM/EGSnrc simulations). These correction factors (e.g., CSFQ = 1.76 for the 2 mm chamber in a 5 mm field) allow for accurate relative dose factor (RDF) measurement when applied to ionization readings under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate. Characterization was based on an adaptation of the
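
How such a correction factor enters a relative dose factor measurement can be sketched in one line (only the quoted CSFQ = 1.76 for the 2 mm chamber in the 5 mm field comes from the text; the ionization readings are hypothetical):

```python
def relative_dose_factor(reading_field, reading_ref, csfq):
    """Correction-factor-weighted ionization ratio vs. the reference field."""
    return csfq * reading_field / reading_ref

rdf = relative_dose_factor(0.31, 1.00, 1.76)   # hypothetical readings (nC)
print(rdf)
```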

  15. Design of SVC Controller Based on Improved Biogeography-Based Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Feifei Dong

    2014-01-01

    Full Text Available Considering that common subsynchronous resonance controllers cannot adapt to the time-varying and nonlinear behavior of a power system, the cosine migration model, the improved migration operator, and the mutative scale of chaos and Cauchy mutation strategy are introduced into an improved biogeography-based optimization (IBBO) algorithm in order to design an optimal subsynchronous damping controller based on the mechanism of suppressing SSR by static var compensator (SVC). The effectiveness of the improved controller is verified by eigenvalue analysis and electromagnetic simulations. The simulation results for the Jinjie plant indicate that the subsynchronous damping controller optimized by the IBBO algorithm can remarkably improve the damping of torsional modes and thus effectively suppress SSR, ensuring the safety and stability of units and power grid operation. Moreover, the IBBO algorithm has the merits of faster searching speed and higher searching accuracy in seeking the optimal control parameters over traditional algorithms such as the BBO, PSO, and GA algorithms.

  16. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures, namely producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category indicates to what extent the pixels assigned to that category actually belong to it, i.e., how little the other categories are misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed to determine whether the UA of the various categories improves on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using IRS LISS III images of two different years, 1996 and 2009, which represent different magnitudes of urbanization, together with the stacked image of these two years, pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results demonstrate a significant improvement in the UA of the impervious categories on the classified image of the stacked image, which is attributable to the aggregation of spectral information from twice the number of bands across the two years. The classified image of the principal components, on the other hand, showed no improvement in UA compared to the original images.
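
The PA/UA distinction comes straight from the confusion matrix (rows = classified category, columns = reference category; the counts below are made up): PA asks how much of each reference class was classified correctly, UA asks how much of each classified class truly belongs there.

```python
import numpy as np

cm = np.array([[50,  5,  0],
               [10, 40,  5],
               [ 0,  5, 35]])

producers = np.diag(cm) / cm.sum(axis=0)   # recall per reference class
users     = np.diag(cm) / cm.sum(axis=1)   # precision per classified class
print(np.round(producers, 3))
print(np.round(users, 3))
```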

  17. Coval: improving alignment quality and variant calling accuracy for next-generation sequencing data.

    Directory of Open Access Journals (Sweden)

    Shunichi Kosugi

    Full Text Available Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in 'targeted' alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/.
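
The core filtering step can be caricatured as dropping aligned reads whose mismatch count against the reference exceeds a threshold (real Coval additionally realigns locally and corrects mismatches using base quality and allele frequency; that machinery is omitted here):

```python
def mismatches(read, ref):
    return sum(a != b for a, b in zip(read, ref))

def filter_reads(reads, ref, max_mismatches=1):
    return [r for r in reads if mismatches(r, ref) <= max_mismatches]

ref = "ACGTACGT"
reads = ["ACGTACGT", "ACGTACCT", "AGGTTCGA"]   # 0, 1 and 3 mismatches
print(filter_reads(reads, ref))                # the third read is discarded
```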

  18. MO-DE-210-05: Improved Accuracy of Liver Feature Motion Estimation in B-Mode Ultrasound for Image-Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, T; Bamber, J; Harris, E [The Institute of Cancer Research & Royal Marsden, Sutton and London (United Kingdom)

    2015-06-15

    Purpose: In similarity-measure-based motion estimation, incremental tracking (or template update) is challenging due to quantization, bias and accumulation of tracking errors. A method is presented which aims to improve the accuracy of incrementally tracked liver feature motion in long ultrasound sequences. Methods: Liver ultrasound data from five healthy volunteers under free breathing were used (15 to 17 Hz imaging rate, 2.9 to 5.5 minutes in length). A normalised cross-correlation template matching algorithm was implemented to estimate tissue motion. Blood vessel motion was manually annotated for comparison with three tracking code implementations: (i) naive incremental tracking (IT), (ii) IT plus a similarity threshold (ST) template-update method and (iii) ST coupled with a prediction-based state observer, known as the alpha-beta filter (ABST). Results: The ABST method produced substantial improvements in vessel tracking accuracy for two-dimensional vessel motion ranging from 7.9 mm to 40.4 mm (with mean respiratory period: 4.0 ± 1.1 s). The mean and 95% tracking errors were 1.6 mm and 1.4 mm, respectively (compared to 6.2 mm and 9.1 mm, respectively, for naive incremental tracking). Conclusions: High confidence in the output motion estimation data is required for ultrasound-based motion estimation for radiation therapy beam tracking and gating. The method presented has potential for monitoring liver vessel translational motion in high frame rate B-mode data with the required accuracy. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
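
A minimal alpha-beta filter of the kind used here as the prediction-based state observer looks like this (gains, time step and track are illustrative, not the paper's tuned values): each step predicts with constant velocity, then blends the new measurement into position (alpha) and velocity (beta).

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    x, v, estimates = measurements[0], 0.0, []
    for z in measurements[1:]:
        x_pred = x + v * dt            # constant-velocity prediction
        r = z - x_pred                 # innovation (residual)
        x = x_pred + alpha * r         # corrected position
        v = v + (beta / dt) * r        # corrected velocity
        estimates.append(x)
    return estimates

z = [2.0 * t for t in range(30)]       # noise-free constant-velocity target
est = alpha_beta_track(z)
print(abs(est[-1] - z[-1]))            # the filter has locked on
```

Coupling such an observer to template matching lets the tracker reject implausible jumps in the similarity-measure output, which is where the accuracy gain over naive incremental tracking comes from.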

  19. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    Full Text Available A fixed point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of the representation accuracy of the operators and the signal. The signal processing error is found to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative technique to speech data. We also discuss the specific programming requirements set by the Symbian OS and the Series 60 platform.

  20. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Science.gov (United States)

    Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R

    2010-09-29

    Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations. These findings should be taken into account

  1. Accuracy of a hexapod parallel robot kinematics based external fixator.

    Science.gov (United States)

    Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer

    2015-12-01

    Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures. Accuracy has not been measured sufficiently for all models. An infrared tracking system was applied to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator, simulating bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median values for positioning accuracy of translations and rotations (torsions/angulations) were below 0.3 mm and 0.2° with quartiles ranging from -0.5 mm to 0.5 mm and -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable. It can be applied to compare different hexapod-based fixators. Accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.
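
The study's accuracy summary reduces to simple order statistics over targeted-minus-measured differences; in miniature (numbers fabricated for illustration):

```python
import numpy as np

targeted = np.array([5.0, 10.0, -3.0, 7.5, 0.0, 12.0])
measured = np.array([5.2,  9.7, -3.1, 7.9, 0.4, 11.6])

errors = measured - targeted                 # signed positioning errors
abs_median = np.median(np.abs(errors))       # median absolute accuracy
q1, q3 = np.percentile(errors, [25, 75])     # quartiles of signed errors
print(round(float(abs_median), 3), round(float(q1), 3), round(float(q3), 3))
```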

  2. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    To improve the time delay accuracy of ultrasonic phased array focusing, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains fast. To address the existing shortcomings of the CIC filter, we added a compensation filter: the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop band attenuation higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
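
A single-stage CIC interpolator is easy to sketch and check: a comb at the low rate, zero-stuffing by the interpolation factor, and an integrator at the high rate. With one stage and unit differential delay the structure reduces to a sample-and-hold, which makes it verifiable by hand; real designs cascade several stages and add a compensation filter as the abstract describes.

```python
def cic_interpolate(x, r):
    # Comb stage at the input rate: first difference (x[-1] assumed 0).
    comb = [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]
    # Zero-stuff by the interpolation factor r.
    upsampled = []
    for c in comb:
        upsampled.extend([c] + [0] * (r - 1))
    # Integrator stage at the output rate: running sum.
    out, acc = [], 0
    for u in upsampled:
        acc += u
        out.append(acc)
    return out

print(cic_interpolate([1, 2, 3], 8))   # each input sample held for 8 ticks
```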

  3. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    Directory of Open Access Journals (Sweden)

    Peilu Liu

    2017-10-01

    Full Text Available To improve the time delay accuracy of ultrasonic phased array focusing, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains fast. To address the existing shortcomings of the CIC filter, we added a compensation filter: the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop band attenuation higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  4. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    Science.gov (United States)

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems have been suffering from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or that have an unknown identity. Diagnosing the causes of noted adverse outcomes in the environment is of great importance in ecological risk assessment, and effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its strength in effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring bioavailability in EDA may cause a biased and even erroneous identification of causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance for improving the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most obvious way to include bioavailability in EDA applications, but its development is limited due to the small sample size and lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended for integrating bioavailability into EDA diagnosis in abiotic samples. Lastly, the future perspectives of expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  5. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  6. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review.

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  7. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  8. Improved Extreme Learning Machine based on the Sensitivity Analysis

    Science.gov (United States)

    Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang

    2018-03-01

    The extreme learning machine (ELM) and its improved variants are weak in some respects, such as computational complexity and learning error. After in-depth analysis, and drawing on the importance of hidden nodes in SVM, a novel sensitivity analysis method is proposed that matches human cognitive habits. Based on this, an improved ELM is proposed: it can remove hidden nodes before the learning-error target is met and can efficiently manage the number of hidden nodes, so as to improve performance. Comparative tests show that it performs better in learning time, accuracy, and other respects.
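
The record above describes pruning hidden nodes from an extreme learning machine (ELM). For readers unfamiliar with the baseline, here is a minimal NumPy sketch of a standard ELM (random, fixed hidden layer; output weights solved by least squares). The sensitivity-based node removal the paper proposes is not reproduced; all names are illustrative.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, rng=np.random.default_rng(0)):
    """Train a basic extreme learning machine for regression.

    Hidden-layer weights are random and fixed; only the output weights
    are solved for, via the Moore-Penrose pseudo-inverse (least squares).
    """
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ T    # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
T = np.sin(X).ravel()
W, b, beta = elm_train(X, T)
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

Because only the linear output layer is trained, fitting reduces to a single pseudo-inverse, which is what makes ELM training fast; the paper's contribution is deciding which of the random hidden nodes can be removed without exceeding the learning-error target.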

  9. Diagnostic accuracy of tablet-based software for the detection of concussion.

    Science.gov (United States)

    Yang, Suosuo; Flores, Benjamin; Magal, Rotem; Harris, Kyrsti; Gross, Jonathan; Ewbank, Amy; Davenport, Sasha; Ormachea, Pablo; Nasser, Waleed; Le, Weidong; Peacock, W Frank; Katz, Yael; Eagleman, David M

    2017-01-01

    Despite the high prevalence of traumatic brain injuries (TBI), there are few rapid and straightforward tests to improve its assessment. To this end, we developed a tablet-based software battery ("BrainCheck") for concussion detection that is well suited to sports, emergency department, and clinical settings. This article is a study of the diagnostic accuracy of BrainCheck. We administered BrainCheck to 30 TBI patients and 30 pain-matched controls at a hospital Emergency Department (ED), and 538 healthy individuals at 10 control test sites. We compared the results of the tablet-based assessment against physician diagnoses derived from brain scans, clinical examination, and the SCAT3 test, a traditional measure of TBI. We found consistent distributions of normative data and high test-retest reliability. Based on these assessments, we defined a composite score that distinguishes TBI from non-TBI individuals with high sensitivity (83%) and specificity (87%). We conclude that our testing application provides a rapid, portable testing method for TBI.

  10. Diagnostic accuracy of tablet-based software for the detection of concussion.

    Directory of Open Access Journals (Sweden)

    Suosuo Yang

    Full Text Available Despite the high prevalence of traumatic brain injuries (TBI), there are few rapid and straightforward tests to improve its assessment. To this end, we developed a tablet-based software battery ("BrainCheck") for concussion detection that is well suited to sports, emergency department, and clinical settings. This article is a study of the diagnostic accuracy of BrainCheck. We administered BrainCheck to 30 TBI patients and 30 pain-matched controls at a hospital Emergency Department (ED), and 538 healthy individuals at 10 control test sites. We compared the results of the tablet-based assessment against physician diagnoses derived from brain scans, clinical examination, and the SCAT3 test, a traditional measure of TBI. We found consistent distributions of normative data and high test-retest reliability. Based on these assessments, we defined a composite score that distinguishes TBI from non-TBI individuals with high sensitivity (83%) and specificity (87%). We conclude that our testing application provides a rapid, portable testing method for TBI.

  11. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    Science.gov (United States)

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  12. An accuracy measurement method for star trackers based on direct astronomic observation.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted on account of the precision movements of the Earth, and the error curves of directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and can satisfy the stringent requirements for high-accuracy star trackers.

  13. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    Science.gov (United States)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, which results in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as the GA for this optimisation problem has been illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that the optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous results. However, GA-based optimisation has been found attractive due to its global optimisation capability.
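
As a toy illustration of the contrast drawn above between Newton's method and a GA on a multimodal fitness function, the sketch below runs a minimal elitist GA (truncation selection plus Gaussian mutation, no crossover) on a Rastrigin-like 1-D function. This is not the paper's cumulant-matrix fitness, which requires array snapshot data; all names and parameters are hypothetical.

```python
import numpy as np

def fitness(theta):
    # Multimodal "fitness": global minimum at theta = 0,
    # with many local minima near the other integers.
    return theta**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * theta))

def ga_minimize(f, lo=-5.0, hi=5.0, pop=40, gens=60,
                rng=np.random.default_rng(1)):
    x = rng.uniform(lo, hi, pop)                 # initial population
    for _ in range(gens):
        fit = f(x)
        parents = x[np.argsort(fit)[: pop // 2]]             # keep best half
        children = rng.choice(parents, pop - parents.size)   # reproduce
        children = children + rng.normal(0.0, 0.3, children.size)  # mutate
        x = np.clip(np.concatenate([parents, children]), lo, hi)
    return x[np.argmin(f(x))]

best = ga_minimize(fitness)
```

A Newton iteration started near one of the many local minima would simply terminate there, whereas the population search keeps sampling across the whole interval, which is the global-optimisation capability the abstract refers to.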

  14. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    Science.gov (United States)

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  15. Does PACS improve diagnostic accuracy in chest radiograph interpretations in clinical practice?

    International Nuclear Information System (INIS)

    Hurlen, Petter; Borthne, Arne; Dahl, Fredrik A.; Østbye, Truls; Gulbrandsen, Pål

    2012-01-01

    Objectives: To assess the impact of a Picture Archiving and Communication System (PACS) on the diagnostic accuracy of the interpretation of chest radiology examinations in a “real life” radiology setting. Materials and methods: During a period before PACS was introduced to radiologists, when images were still interpreted on film and reported on paper, images and reports were also digitally stored in an image database. The same database was used after the PACS introduction. This provided a unique opportunity to conduct a blinded retrospective study, comparing sensitivity (the main outcome parameter) in the pre and post-PACS periods. We selected 56 digitally stored chest radiograph examinations that were originally read and reported on film, and 66 examinations that were read and reported on screen 2 years after the PACS introduction. Each examination was assigned a random number, and both reports and images were scored independently for pathological findings. The blinded retrospective score for the original reports were then compared with the score for the images (the gold standard). Results: Sensitivity was improved after the PACS introduction. When both certain and uncertain findings were included, this improvement was statistically significant. There were no other statistically significant changes. Conclusion: The result is consistent with prospective studies concluding that diagnostic accuracy is at least not reduced after PACS introduction. The sensitivity may even be improved.

  16. Consensus-based reporting standards for diagnostic test accuracy studies for paratuberculosis in ruminants

    DEFF Research Database (Denmark)

    Gardner, Ian A.; Nielsen, Søren Saxmose; Whittington, Richard

    2011-01-01

    The Standards for Reporting of Diagnostic Accuracy (STARD) statement (www.stard-statement.org) was developed to encourage complete and transparent reporting of key elements of test accuracy studies in human medicine. The statement was motivated by widespread evidence of bias in test accuracy...... studies and the finding that incomplete or absent reporting of items in the STARD checklist was associated with overly optimistic estimates of test performance characteristics. Although STARD principles apply broadly, specific guidelines do not exist to account for unique considerations in livestock...... for Reporting of Animal Diagnostic Accuracy Studies for paratuberculosis), should facilitate improved quality of reporting of the design, conduct and results of paratuberculosis test accuracy studies which were identified as “poor” in a review published in 2008 in Veterinary Microbiology...

  17. Modification of the fast Fourier transform-based method by signal mirroring for accuracy quantification of thermal-hydraulic system code

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute (KAERI), Daejeon (Korea, Republic of)

    2017-08-15

    A thermal–hydraulic system code is an essential tool for the design and safety analysis of a nuclear power plant, and its accuracy quantification is very important for the code assessment and applications. The fast Fourier transform-based method (FFTBM) by signal mirroring (FFTBM-SM) has been used to quantify the accuracy of a system code by using a comparison of the experimental data and the calculated results. The method is an improved version of the FFTBM, and it is known that the FFTBM-SM judges the code accuracy in a more consistent and unbiased way. However, in some applications, unrealistic results have been obtained. In this study, it was found that accuracy quantification by FFTBM-SM is dependent on the frequency spectrum of the fast Fourier transform of experimental and error signals. The primary objective of this study is to reduce the frequency dependency of FFTBM-SM evaluation. For this, it was proposed to reduce the cut off frequency, which was introduced to cut off spurious contributions, in FFTBM-SM. A method to determine an appropriate cut off frequency was also proposed. The FFTBM-SM with the modified cut off frequency showed a significant improvement of the accuracy quantification.
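
A simplified sketch of the figure of merit discussed above may help: the FFTBM average amplitude compares the spectrum of the code-minus-experiment error against the spectrum of the experimental signal, and the signal-mirroring variant (FFTBM-SM) first extends each signal with its time-reversed copy to suppress the edge discontinuity. This is an illustrative reduction, not the authors' implementation; weighting factors and the exact cut-off handling differ between FFTBM variants.

```python
import numpy as np

def mirror(sig):
    """Append the time-reversed signal so the extended record is symmetric,
    suppressing the spurious edge discontinuity in the FFT (the 'SM' step)."""
    return np.concatenate([sig, sig[::-1]])

def fftbm_sm_aa(meas, calc, dt, f_cut):
    """Average amplitude (AA) in the spirit of FFTBM with signal mirroring:
    AA = sum|FFT(calc - meas)| / sum|FFT(meas)| over frequencies up to the
    cut-off f_cut. Lower AA means better code-to-experiment agreement."""
    err = mirror(calc - meas)
    ref = mirror(meas)
    freqs = np.fft.rfftfreq(err.size, d=dt)
    keep = freqs <= f_cut                    # cut off spurious contributions
    num = np.abs(np.fft.rfft(err))[keep].sum()
    den = np.abs(np.fft.rfft(ref))[keep].sum()
    return num / den

# Identical signals give AA = 0; a 10% biased calculation gives AA = 0.1.
t = np.linspace(0.0, 10.0, 1001)
meas = np.exp(-0.3 * t)
aa_same = fftbm_sm_aa(meas, meas, dt=0.01, f_cut=1.0)
aa_off = fftbm_sm_aa(meas, meas * 1.1, dt=0.01, f_cut=1.0)
```

In this reduction the cut-off frequency f_cut plays exactly the role examined in the paper: it discards the high-frequency part of both spectra, so the resulting AA value depends on where the cut is placed.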

  18. Modification of the fast Fourier transform-based method by signal mirroring for accuracy quantification of thermal-hydraulic system code

    International Nuclear Information System (INIS)

    Ha, Tae Wook; Jeong, Jae Jun; Choi, Ki Yong

    2017-01-01

    A thermal–hydraulic system code is an essential tool for the design and safety analysis of a nuclear power plant, and its accuracy quantification is very important for the code assessment and applications. The fast Fourier transform-based method (FFTBM) by signal mirroring (FFTBM-SM) has been used to quantify the accuracy of a system code by using a comparison of the experimental data and the calculated results. The method is an improved version of the FFTBM, and it is known that the FFTBM-SM judges the code accuracy in a more consistent and unbiased way. However, in some applications, unrealistic results have been obtained. In this study, it was found that accuracy quantification by FFTBM-SM is dependent on the frequency spectrum of the fast Fourier transform of experimental and error signals. The primary objective of this study is to reduce the frequency dependency of FFTBM-SM evaluation. For this, it was proposed to reduce the cut off frequency, which was introduced to cut off spurious contributions, in FFTBM-SM. A method to determine an appropriate cut off frequency was also proposed. The FFTBM-SM with the modified cut off frequency showed a significant improvement of the accuracy quantification.

  19. Improved Recommendations Based on Trust Relationships in Social Networks

    Directory of Open Access Journals (Sweden)

    Hao Tian

    2017-03-01

    Full Text Available In order to alleviate the pressure of information overload and enhance consumer satisfaction, personalized recommendation has become increasingly popular in recent years. As a result, various approaches for recommendation have been proposed. However, traditional recommendation methods are still troubled by typical issues such as cold start, sparsity, and low accuracy. To address these problems, this paper proposes an improved recommendation method based on trust relationships in social networks. In particular, we redefine the trust relationship and consider several representative factors in its formalization. To verify the proposed approach comprehensively, this paper conducted experiments in three ways. The experimental results show that our proposed approach leads to a substantial increase in prediction accuracy and is very helpful in dealing with cold start and sparsity.

  20. Improving the accuracy of acetabular cup implantation using a bulls-eye spirit level.

    Science.gov (United States)

    Macdonald, Duncan; Gupta, Sanjay; Ohly, Nicholas E; Patil, Sanjeev; Meek, R; Mohammed, Aslam

    2011-01-01

    Acetabular introducers have a built-in inclination of 45 degrees to the handle shaft. With patients in the lateral position, surgeons aim to align the introducer shaft vertical to the floor to implant the acetabulum at 45 degrees. We aimed to determine if a bulls-eye spirit level attached to an introducer improved the accuracy of implantation. A small circular bulls-eye spirit level was attached to the handle of an acetabular introducer. A saw bone hemipelvis was fixed to a horizontal, flat surface. A cement substitute was placed in the acetabulum and subjects were asked to implant a polyethylene cup, aiming to obtain an angle of inclination of 45 degrees. Two attempts were made with the spirit level masked and two with it unmasked. The distance of the air bubble from the spirit level's center was recorded by a single assessor. The angle of inclination of the acetabular component was then calculated. Subjects included both orthopedic consultants and trainees. Twenty-five subjects completed the study. Accuracy of acetabular implantation when using the unmasked spirit level improved significantly in all grades of surgeon. With the spirit level masked, 12 out of 50 attempts were accurate at 45 degrees inclination; 11 out of 50 attempts were "open," with greater than 45 degrees of inclination, and 27 were "closed," with less than 45 degrees. With the spirit level visible, all subjects achieved an inclination angle of exactly 45 degrees. A simple device attached to the handle of an acetabular introducer can significantly improve the accuracy of implantation of a cemented cup into a saw bone pelvis in the lateral position.

  1. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A simulated linear mixture model experiment is generated to simulate real world data with known end member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error is computed for each pixel value for three generated images with SPOT data. Each pixel can then be either correctly classified or misclassified. Results showed a great improvement in classification accuracy; for example, in image 1, 41% of pixels were misclassified due to atmospheric noise. Subsequent to the degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
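
The abstract's simulation setup (random class cover proportions normalized to sum to 1.0, linear mixing of end-member spectra, additive noise) was implemented by the authors in MATLAB; a hypothetical NumPy equivalent of that data-generation step might look like:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_classes, n_bands = 1000, 3, 4

# Class cover proportions (CCP): random, normalized so that each pixel's
# class proportions sum to 1.0, as described in the abstract.
ccp = rng.random((n_pixels, n_classes))
ccp /= ccp.sum(axis=1, keepdims=True)

# Hypothetical end-member spectra (one row per class, one column per band).
endmembers = rng.uniform(0.0, 255.0, size=(n_classes, n_bands))

# Linear mixture model: each pixel is a proportion-weighted sum of the
# end-member spectra, plus additive noise standing in for atmospheric effects.
noise_level = 5.0
pixels = ccp @ endmembers + rng.normal(0.0, noise_level, (n_pixels, n_bands))
```

A supervised or unsupervised classifier can then be scored against the known ccp matrix, before and after the simulated atmospheric noise is removed.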

  2. Accuracy of the improved quasistatic space-time method checked with experiment

    International Nuclear Information System (INIS)

    Kugler, G.; Dastur, A.R.

    1976-10-01

    Recent experiments performed at the Savannah River Laboratory have made it possible to check the accuracy of numerical methods developed to simulate space-dependent neutron transients. The experiments were specifically designed to emphasize delayed neutron holdback. The CERBERUS code using the IQS (Improved Quasistatic) method has been developed to provide a practical yet accurate tool for spatial kinetics calculations of CANDU reactors. The code was tested on the Savannah River experiments and excellent agreement was obtained. (author)

  3. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections.

    Science.gov (United States)

    Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana

    2015-12-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  4. Improved Accuracy of Myocardial Perfusion SPECT for the Detection of Coronary Artery Disease by Utilizing a Support Vector Machines Algorithm

    Science.gov (United States)

    Arsanjani, Reza; Xu, Yuan; Dey, Damini; Fish, Matthews; Dorbala, Sharmila; Hayes, Sean; Berman, Daniel; Germano, Guido; Slomka, Piotr

    2012-01-01

    We aimed to improve the diagnostic accuracy of automatic myocardial perfusion SPECT (MPS) interpretation analysis for prediction of coronary artery disease (CAD) by integrating several quantitative perfusion and functional variables for non-corrected (NC) data by support vector machines (SVM), a computer method for machine learning. Methods: 957 rest/stress 99mTc gated MPS NC studies from 623 consecutive patients with correlating invasive coronary angiography and 334 with low likelihood of CAD (LLK < 5%) were assessed. Patients with stenosis ≥ 50% in left main or ≥ 70% in all other vessels were considered abnormal. Total perfusion deficit (TPD) was computed automatically. In addition, ischemic changes (ISCH) and ejection fraction changes (EFC) between stress and rest were derived by quantitative software. The SVM was trained using a group of 125 pts (25 LLK, 25 0-, 25 1-, 25 2- and 25 3-vessel CAD) using the above quantitative variables and second order polynomial fitting. The remaining patients (N = 832) were categorized based on probability estimates, with CAD defined as a probability estimate ≥ 0.50. The diagnostic accuracy of SVM was also compared to visual segmental scoring by two experienced readers. Results: Sensitivity of SVM (84%) was significantly better than ISCH (75%, p < 0.05) and EFC (31%, p < 0.05). Specificity of SVM (88%) was significantly better than that of TPD (78%, p < 0.05) and EFC (77%, p < 0.05). Diagnostic accuracy of SVM (86%) was significantly better than TPD (81%), ISCH (81%), or EFC (46%) (p < 0.05 for all). The receiver-operating-characteristic area-under-the-curve (ROC-AUC) for SVM (0.92) was significantly better than TPD (0.90), ISCH (0.87), and EFC (0.60) (p < 0.001 for all). Diagnostic accuracy of SVM was comparable to the overall accuracy of both visual readers (85% vs. 84%, p < 0.05). ROC-AUC for SVM (0.92) was significantly better than that of both visual readers (0.87 and 0.88, p < 0.03). Conclusion: Computational

  5. TCS: a new multiple sequence alignment reliability measure to estimate alignment accuracy and improve phylogenetic tree reconstruction.

    Science.gov (United States)

    Chang, Jia-Ming; Di Tommaso, Paolo; Notredame, Cedric

    2014-06-01

    Multiple sequence alignment (MSA) is a key modeling procedure when analyzing biological sequences. Homology and evolutionary modeling are the most common applications of MSAs. Both are known to be sensitive to the underlying MSA accuracy. In this work, we show how this problem can be partly overcome using the transitive consistency score (TCS), an extended version of the T-Coffee scoring scheme. Using this local evaluation function, we show that one can identify the most reliable portions of an MSA, as judged from BAliBASE and PREFAB structure-based reference alignments. We also show how this measure can be used to improve phylogenetic tree reconstruction using both an established simulated data set and a novel empirical yeast data set. For this purpose, we describe a novel lossless alternative to site filtering that involves overweighting the trustworthy columns. Our approach relies on the T-Coffee framework; it uses libraries of pairwise alignments to evaluate any third party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy. We compared TCS with Heads-or-Tails, GUIDANCE, Gblocks, and trimAl and found it to lead to significantly better estimates of structural accuracy and more accurate phylogenetic trees. The software is available from www.tcoffee.org/Projects/tcs. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Modeling hemodynamics in intracranial aneurysms: Comparing accuracy of CFD solvers based on finite element and finite volume schemes.

    Science.gov (United States)

    Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui

    2018-06-01

    Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs) but its adoption in the clinical practice has been missing, partially due to lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver accuracy is increased by increasing the degree of polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from best FV and dGFE approximations are used as baseline for error quantification. On average, velocity errors for second-best approximations are approximately 1cm/s for a [0,125]cm/s velocity magnitude field. Results show that high-order dGFE provide better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.

  7. Use of Low-Level Sensor Data to Improve the Accuracy of Bluetooth-Based Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Christensen, Lars Tørholm; Krishnan, Rajesh

    2013-01-01

    by a single device. The latter situation could lead to location ambiguity and could reduce the accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth technology depends on how location ambiguity is handled by the estimation method. The issue of multiple detection...... events in the context of travel time estimation by Bluetooth technology has been considered by various researchers. However, treatment of this issue has been simplistic. Most previous studies have used the first detection event (enter-enter) as the best estimate. No systematic analysis has been conducted...... to explore the most accurate method of travel time estimation with multiple detection events. In this study, different aspects of the Bluetooth detection zone, including size and impact on the accuracy of travel time estimation, were discussed. Four methods were applied to estimate travel time: enter...

  8. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Directory of Open Access Journals (Sweden)

    Nathalie Stroeymeyt

    Full Text Available BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. METHODOLOGY/PRINCIPAL FINDINGS: Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. CONCLUSIONS/SIGNIFICANCE: These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off.

  9. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the most accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.
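    The cell-overlap selection rule above builds on classical synthetic (proxy-based) estimation. The sketch below is a generic illustration of that underlying idea, not the authors' entropy-based algorithm: a primary database's marginal totals are distributed across a second dimension in proportion to a proxy database's counts.

```python
def estimate_target(primary, proxy):
    """Synthetic (proxy-based) estimation of a target cross-tabulation.

    primary: {a: total} marginal totals sharing the summary measure.
    proxy:   {(a, b): count} table supplying the conditional shape.
    Returns {(a, b): estimate} distributing each primary total in
    proportion to the proxy's counts within the same row."""
    row_totals = {}
    for (a, b), v in proxy.items():
        row_totals[a] = row_totals.get(a, 0) + v
    return {(a, b): primary[a] * v / row_totals[a]
            for (a, b), v in proxy.items() if a in primary}
```

The more cells the proxy shares with the target, the closer its conditional shape is to the target's, which is the intuition behind preferring sources with the largest overlap.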

  10. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation has remained unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization tests. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
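    The correlation analysis described above rests on two standard metrics: RMSE normalized by the measurement span, and a linear correlation coefficient between the model-error and SOC-error series. A self-contained sketch of both (the normalization span and sample data below are placeholders, not values from the study):

```python
import math

def nrmse(errors, span):
    """Root-mean-square error normalized by the measurement span."""
    return math.sqrt(sum(e * e for e in errors) / len(errors)) / span

def pearson(xs, ys):
    """Pearson correlation between two equal-length series, e.g. the
    per-condition model error vs. the SOC estimation error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A correlation near 1 between the two error series is what the abstract reports as a "strong linear relationship".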

  11. Improving the Accuracy of NMR Structures of Large Proteins Using Pseudocontact Shifts as Long-Range Restraints

    Energy Technology Data Exchange (ETDEWEB)

    Gaponenko, Vadim [National Cancer Institute, Structural Biophysics Laboratory (United States); Sarma, Siddhartha P. [Indian Institute of Science, Molecular Biophysics Unit (India); Altieri, Amanda S. [National Cancer Institute, Structural Biophysics Laboratory (United States); Horita, David A. [Wake Forest University School of Medicine, Department of Biochemistry (United States); Li, Jess; Byrd, R. Andrew [National Cancer Institute, Structural Biophysics Laboratory (United States)], E-mail: rabyrd@ncifcrf.gov

    2004-03-15

    We demonstrate improved accuracy in protein structure determination for large (≥30 kDa), deuterated proteins (e.g. STAT4-NT) via the combination of pseudocontact shifts for amide and methyl protons with the available NOEs in methyl-protonated proteins. The improved accuracy is cross validated by Q-factors determined from residual dipolar couplings measured as a result of magnetic susceptibility alignment. The paramagnet is introduced via binding to thiol-reactive EDTA, and multiple sites can be serially engineered to obtain data from alternative orientations of the paramagnetic anisotropic susceptibility tensor. The technique is advantageous for systems where the target protein has strong interactions with known alignment media.

  12. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    DEFF Research Database (Denmark)

    Stilling, M; Kold, S; de Raedt, S

    2012-01-01

    The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear.

  13. Selecting fillers on emotional appearance improves lineup identification accuracy.

    Science.gov (United States)

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  14. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Full Text Available Abstract Background Microarrays enable high throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.

  15. Energy expenditure prediction via a footwear-based physical activity monitor: Accuracy and comparison to other devices

    Science.gov (United States)

    Dannecker, Kathryn

    2011-12-01

    Accurately estimating free-living energy expenditure (EE) is important for monitoring or altering energy balance and quantifying levels of physical activity. The use of accelerometers to monitor physical activity and estimate physical activity EE is common in both research and consumer settings. Recent advances in physical activity monitors include the ability to identify specific activities (e.g. stand vs. walk), which has resulted in improved EE estimation accuracy. Recently, a multi-sensor footwear-based physical activity monitor capable of achieving 98% activity identification accuracy has been developed. However, no study has compared the EE estimation accuracy of this monitor against other similar devices. Purpose. To determine the accuracy of physical activity EE estimation of a footwear-based physical activity monitor that uses an embedded accelerometer and insole pressure sensors, and to compare this accuracy against a variety of research and consumer physical activity monitors. Methods. Nineteen adults (10 male, 9 female), mass: 75.14 (17.1) kg, BMI: 25.07 (4.6) kg/m² (mean (SD)), completed a four hour stay in a room calorimeter. Participants wore a footwear-based physical activity monitor, as well as three physical activity monitoring devices used in research: hip-mounted Actical and Actigraph accelerometers and a multi-accelerometer IDEEA device with sensors secured to the limb and chest. In addition, participants wore two consumer devices: Philips DirectLife and Fitbit. Each individual performed a series of randomly assigned and ordered postures/activities including lying, sitting (quietly and using a computer), standing, walking, stepping, cycling, sweeping, as well as a period of self-selected activities. We developed branched (i.e. activity-specific) linear regression models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. Results. The shoe-based
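    A branched (activity-specific) regression of the kind described above can be sketched in a few lines: the monitor first identifies the activity, then applies the linear model for that branch. The coefficients below are hypothetical placeholders, not values fit in the study.

```python
# Hypothetical per-activity coefficients (intercept, slope on sensor
# counts); real coefficients would be fit against room-calorimeter data.
MODELS = {"walk": (1.5, 0.004), "sit": (1.0, 0.001)}

def estimate_ee(activity, counts):
    """Branched regression: pick the linear model for the identified
    activity, then predict energy expenditure from sensor counts."""
    intercept, slope = MODELS[activity]
    return intercept + slope * counts
```

Branching matters because the counts-to-EE relationship differs sharply between, say, cycling and walking, so a single pooled regression underfits both.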

  16. Improving the accuracy of energy baseline models for commercial buildings with occupancy data

    International Nuclear Information System (INIS)

    Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping

    2016-01-01

    Highlights: • We evaluated several baseline models predicting energy use in buildings. • Including occupancy data improved accuracy of baseline model prediction. • Occupancy is highly correlated with energy use in buildings. • This simple approach can be used in decision makings of energy retrofit projects. - Abstract: More than 80% of energy is consumed during operation phase of a building’s life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by “measurement and verification” (M&V), which compares actual energy consumption to how much energy would have been used without retrofit (called the “baseline” of energy use). Although numerous models exist for predicting baseline of energy use, a critical limitation is that occupancy has not been included as a variable. However, occupancy rate is essential for energy consumption and was emphasized by previous studies. This study develops a new baseline model which is built upon the Lawrence Berkeley National Laboratory (LBNL) model but includes the use of building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for HVAC load. The reasons are discussed further. In addition, sensitivity analysis is conducted to show the influence of parameters in baseline models. The results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce risks of M&V and facilitate investment strategies of energy efficiency retrofit.
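    A baseline model of the kind evaluated above is, at its core, a regression of energy use on weather and (here) occupancy. The sketch below fits such a model by ordinary least squares on synthetic data; the variable names and toy coefficients are illustrative assumptions, not the LBNL model.

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations; each row of X
    already starts with a 1 for the intercept. Returns coefficients."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):  # forward elimination (no pivoting; toy data)
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

# Baseline predictors per row: intercept, outdoor temperature, occupancy.
rows = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
energy = [2, 5, 7, 10, 13]  # synthetic: 2 + 3*temp + 5*occupancy
coef = ols_fit(rows, energy)
```

Comparing the residuals of this model with and without the occupancy column is exactly the kind of test the study ran, finding little improvement for HVAC load.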

  17. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    Science.gov (United States)

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  18. Travel-time source-specific station correction improves location accuracy

    Science.gov (United States)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. Large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may affect the decision to trigger an On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
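    The core of a station-correction scheme is simple: average the travel-time residuals (observed minus model-predicted) that well-located reference events produce at each station, then add that average back to the model prediction when locating new events from the same source region. A minimal sketch of that general idea, not the authors' full method:

```python
def station_corrections(residuals):
    """Average travel-time residual (observed minus predicted, in
    seconds) per station over well-located reference events.
    residuals: iterable of (station_id, residual)."""
    sums, counts = {}, {}
    for station, r in residuals:
        sums[station] = sums.get(station, 0.0) + r
        counts[station] = counts.get(station, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def corrected_travel_time(predicted, station, corr):
    """Apply the source-specific correction to a model travel time."""
    return predicted + corr.get(station, 0.0)
```

Because the corrections absorb the path-specific bias of the 1-D model (e.g. IASPEI91), the remaining residuals reflect picking noise rather than structure, pulling epicenters toward their true positions.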

  19. Analysis of spatial distribution of land cover maps accuracy

    Science.gov (United States)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as an explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States, using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain
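    One of the interpolation functions named above is Gaussian. The sketch below shows Gaussian-kernel interpolation in the spatial domain: each test-sample point carries a 0/1 correctness label, and a pixel's predicted accuracy is the kernel-weighted mean of those labels. The bandwidth and coordinates are placeholders, not the study's fitted parameters.

```python
import math

def predict_accuracy(x, y, sample, bandwidth=1.0):
    """Gaussian-kernel interpolation of per-pixel accuracy.
    sample: iterable of (x, y, correct) where correct is 0 or 1."""
    num = den = 0.0
    for sx, sy, ok in sample:
        w = math.exp(-((x - sx) ** 2 + (y - sy) ** 2)
                     / (2 * bandwidth ** 2))
        num += w * ok
        den += w
    return num / den
```

Replacing the (x, y) coordinates with spectral band values gives the spectral-domain variant, which the abstract reports as the stronger predictor for all-classes interpolation.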

  20. Preoperative Measurement of Tibial Resection in Total Knee Arthroplasty Improves Accuracy of Postoperative Limb Alignment Restoration

    Directory of Open Access Journals (Sweden)

    Pei-Hui Wu

    2016-01-01

    Conclusions: Using conventional surgical instruments, preoperative measurement of resection thickness of the tibial plateau on radiographs could improve the accuracy of conventional surgical techniques.

  1. Study on the Accuracy Improvement of the Second-Kind Fredholm Integral Equations by Using the Buffa-Christiansen Functions with MLFMA

    Directory of Open Access Journals (Sweden)

    Yue-Qian Wu

    2016-01-01

    Full Text Available Former works show that the accuracy of the second-kind integral equations can be improved dramatically by using the rotated Buffa-Christiansen (BC) functions as the testing functions, and sometimes their accuracy can be even better than that of the first-kind integral equations. When the rotated BC functions are used as the testing functions, the discretization error of the identity operators involved in the second-kind integral equations can be suppressed significantly. However, the spherical objects analyzed previously were relatively small, because the numerical capability of the method of moments (MoM) for solving integral equations with the rotated BC functions is severely limited. Hence, the performance of BC functions for accuracy improvement on electrically large objects had not been studied. In this paper, the multilevel fast multipole algorithm (MLFMA) is employed to accelerate the iterative solution of the magnetic-field integral equation (MFIE). A series of numerical experiments is then performed to study the accuracy improvement of MFIE in perfect electric conductor (PEC) cases with the rotated BC functions as testing functions. Numerical results show that the accuracy improvement from using the rotated BC functions as the testing functions differs greatly between curvilinear and planar triangular elements, and falls off when the size of the object is large.

  2. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  3. Improved spring model-based collaborative indoor visible light positioning

    Science.gov (United States)

    Luo, Zhijie; Zhang, WeiNan; Zhou, GuoFu

    2016-06-01

    Gaining accuracy in the indoor positioning of individuals is important, as many location-based services rely on the user's current position to provide useful services. Many researchers have studied indoor positioning techniques based on WiFi and Bluetooth. However, these have disadvantages such as low accuracy or high cost. In this paper, we propose an indoor positioning system in which visible light radiated from light-emitting diodes is used to locate the position of receivers. Compared with existing methods using light-emitting diode light, we present a high-precision, simple-to-implement collaborative indoor visible light positioning system based on an improved spring model. We first estimate coordinate position information using the visible light positioning system, and then use the spring model to correct positioning errors. The system can be deployed easily because it does not require additional sensors, and the occlusion problem of visible light is alleviated. We also describe simulation experiments, which confirm the feasibility of our proposed method.
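    A spring model for collaborative positioning treats each measured inter-receiver distance as the rest length of a spring and iteratively relaxes the initial (noisy) position estimates toward a configuration consistent with those distances. A minimal 2-D relaxation sketch of that general idea, not the paper's improved model (whose specific modifications the abstract does not detail):

```python
def spring_refine(positions, distances, steps=200, rate=0.1):
    """Relax initial position estimates against measured distances.
    positions: {node: [x, y]} initial VLC estimates.
    distances: {(a, b): measured inter-node distance}."""
    pos = {k: list(v) for k, v in positions.items()}
    for _ in range(steps):
        for (a, b), d in distances.items():
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            cur = (dx * dx + dy * dy) ** 0.5 or 1e-9
            f = rate * (cur - d) / cur  # pull together if too far apart
            for i, delta in enumerate((dx, dy)):
                pos[a][i] += f * delta
                pos[b][i] -= f * delta
    return pos
```

Each iteration moves both endpoints of every "spring" a fraction of the distance mismatch, so residual positioning errors are spread across the network instead of accumulating at one receiver.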

  4. Significant improvement of accuracy and precision in the determination of trace rare earths by fluorescence analysis

    International Nuclear Information System (INIS)

    Ozawa, L.; Hersh, H.N.

    1976-01-01

    Most of the rare earths in yttrium, gadolinium and lanthanum oxides emit characteristic fluorescent line spectra under irradiation with photons, electrons and x rays. The sensitivity and selectivity of the rare earth fluorescences are high enough to determine the trace amounts (0.01 to 100 ppM) of rare earths. The absolute fluorescent intensities of solids, however, are markedly affected by the synthesis procedure, level of contamination and crystal perfection, resulting in poor accuracy and low precision for the method (larger than 50 percent error). Special care in preparation of the samples is required to obtain good accuracy and precision. It is found that the accuracy and precision for the determination of trace (less than 10 ppM) rare earths by fluorescence analysis improved significantly, while still maintaining the sensitivity, when the determination is made by comparing the ratio of the fluorescent intensities of the trace rare earths to that of a deliberately added rare earth as reference. The variation in the absolute fluorescent intensity remains, but is compensated for by measuring the fluorescent line intensity ratio. Consequently, the determination of trace rare earths (with less than 3 percent error) is easily made by a photoluminescence technique in which the rare earths are excited directly by photons. Accuracy is still maintained when the absolute fluorescent intensity is reduced by 50 percent through contamination by Ni, Fe, Mn or Pb (about 100 ppM). Determination accuracy is also improved for fluorescence analysis by electron excitation and x-ray excitation. For some rare earths, however, accuracy by these techniques is reduced because indirect excitation mechanisms are involved. The excitation mechanisms and the interferences between rare earths are also reported
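    The intensity-ratio method above can be stated in one line: the trace concentration is read from the ratio of the trace line's intensity to the reference line's intensity, times a factor k obtained from a standard. A sketch with hypothetical numbers:

```python
def concentration(i_trace, i_ref, k):
    """Intensity-ratio method: trace concentration is proportional to
    the ratio of the trace fluorescent line intensity to that of the
    deliberately added reference rare earth; k comes from a standard."""
    return k * i_trace / i_ref
```

The point of the ratio is that a contamination-induced drop in absolute intensity scales both lines equally, so halving both i_trace and i_ref leaves the result unchanged.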

  5. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors.

    Science.gov (United States)

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

    Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communties for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02.

  6. Improvement of Measurement Accuracy of Coolant Flow in a Test Loop

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jintae; Kim, Jong-Bum; Joung, Chang-Young; Ahn, Sung-Ho; Heo, Sung-Ho; Jang, Seoyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this study, to improve the measurement accuracy of coolant flow in a coolant flow simulator, suppression of external noise is enhanced by adding a ground pattern in the control panel and earthing the signal cables. In addition, a heating unit is added to strengthen the fluctuation signal by heating the coolant, because the source of the signal is heat energy. Experimental results using the improved system show good agreement with the reference flow rate. The measurement error is reduced dramatically compared with the previous measurement accuracy, which will help to analyze the performance of nuclear fuels. For further work, an out-of-pile test will be carried out by fabricating a test rig mockup to inspect the feasibility of the developed system. To verify the performance of a newly developed nuclear fuel, an irradiation test needs to be carried out in the research reactor, measuring irradiation behavior such as fuel temperature, fission gas release, neutron dose, coolant temperature, and coolant flow rate. In particular, the heat generation rate of nuclear fuels can be measured indirectly by measuring the temperature variation of the coolant that passes the fuel rod and its flow rate. However, it is very difficult to measure the flow rate of coolant at the fuel rod owing to the narrow gap between components of the test rig. In nuclear fields, noise analysis using thermocouples in the test rig has been applied to measure the flow velocity of coolant circulating through the test loop.
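    Thermocouple noise analysis of the kind mentioned above is commonly implemented as a cross-correlation: the lag that maximizes the correlation between upstream and downstream temperature fluctuations is the transit time, and velocity follows as sensor spacing divided by that time. A sketch of that general technique (not the KAERI system; the signals and sampling interval are made up):

```python
def transit_time(sig_up, sig_down, dt):
    """Estimate coolant transit time between two thermocouples as the
    lag (in samples, scaled by dt) maximizing the cross-correlation of
    their temperature-fluctuation signals."""
    n = len(sig_up)
    best_lag, best = 0, float("-inf")
    for lag in range(n // 2):
        c = sum(sig_up[i] * sig_down[i + lag] for i in range(n - lag))
        if c > best:
            best, best_lag = c, lag
    return best_lag * dt
```

Strengthening the thermal fluctuation signal (the heating unit above) and suppressing electrical noise both sharpen the correlation peak, which is why those changes improve the flow measurement.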

  7. Improving calibration accuracy in gel dosimetry

    International Nuclear Information System (INIS)

    Oldham, M.; McJury, M.; Webb, S.; Baustert, I.B.; Leach, M.O.

    1998-01-01

    A new method of calibrating gel dosimeters (applicable to both Fricke and polyacrylamide gels) is presented which has intrinsically higher accuracy than current methods, and requires less gel. Two test-tubes of gel (inner diameter 2.5 cm, length 20 cm) are irradiated separately with a 10 × 10 cm² field end-on in a water bath, such that the characteristic depth-dose curve is recorded in the gel. The calibration is then determined by fitting the depth-dose measured in water, against the measured change in relaxivity with depth in the gel. Increased accuracy is achieved in this simple depth-dose geometry by averaging the relaxivity at each depth. A large number of calibration data points, each with relatively high accuracy, are obtained. Calibration data over the full range of dose (1.6-10 Gy) is obtained by irradiating one test-tube to 10 Gy at dose maximum (Dmax), and the other to 4.5 Gy at Dmax. The new calibration method is compared with a 'standard method' where five identical test-tubes of gel were irradiated to different known doses between 2 and 10 Gy. The percentage uncertainties in the slope and intercept of the calibration fit are found to be lower with the new method by a factor of about 4 and 10 respectively, when compared with the standard method and with published values. The gel was found to respond linearly within the error bars up to doses of 7 Gy, with a slope of 0.233 ± 0.001 s⁻¹ Gy⁻¹ and an intercept of 1.106 ± 0.005 Gy. For higher doses, nonlinear behaviour was observed. (author)
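    The calibration step above is a straight-line fit of relaxivity change against the known depth-dose. A minimal least-squares sketch (the sample points below are synthetic, chosen only to match the reported slope and intercept):

```python
def linear_fit(xs, ys):
    """Least-squares line ys ≈ slope*xs + intercept; here xs would be
    dose from the water depth-dose curve and ys the measured
    relaxivity at the corresponding depth in the gel."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx
```

Averaging many relaxivity readings at each depth before fitting is what gives each calibration point its high accuracy in this geometry.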

  8. M-Health for Improving Screening Accuracy of Acute Malnutrition in a Community-Based Management of Acute Malnutrition Program in Mumbai Informal Settlements.

    Science.gov (United States)

    Chanani, Sheila; Wacksman, Jeremy; Deshmukh, Devika; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja

    2016-12-01

    Acute malnutrition is linked to child mortality and morbidity. Community-Based Management of Acute Malnutrition (CMAM) programs can be instrumental in large-scale detection and treatment of undernutrition. The World Health Organization (WHO) 2006 weight-for-height/length tables are diagnostic tools available to screen for acute malnutrition. Frontline workers (FWs) in a CMAM program in Dharavi, Mumbai, were using CommCare, a mobile application, for monitoring and case management of children, in combination with the paper-based WHO simplified tables. A strategy was undertaken to digitize the WHO tables into the CommCare application. The objective was to measure differences in diagnostic accuracy in community-based screening for acute malnutrition, by FWs, using a mobile-based solution. Twenty-seven FWs initially used the paper-based tables and then switched to an updated mobile application that included a nutritional grade calculator. Human error rates specifically associated with grade classification were calculated by comparing the grade assigned by the FW with the grade each child should have received based on the same WHO tables. Cohen kappa coefficients, sensitivity and specificity rates were also calculated and compared for paper-based grade assignments and calculator grade assignments. Comparing FWs (N = 14) who completed at least 40 screenings without and 40 with the calculator, the error rates were 5.5% and 0.7%, respectively. The kappa coefficient improved from .79 to .97 after switching to the mobile calculator, and sensitivity and specificity also improved significantly. The mobile calculator significantly reduces an important component of human error in using the WHO tables to assess acute malnutrition at the community level. © The Author(s) 2016.
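
    The agreement statistics reported above follow standard definitions and can be sketched directly; the grade labels and sample data below are invented for illustration, not the study data:

```python
def sensitivity_specificity(y_true, y_pred, positive="SAM"):
    """Sensitivity and specificity for one target class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def cohen_kappa(y_true, y_pred):
    """Chance-corrected agreement between two sets of labels."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_exp = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented reference grades vs. frontline-worker grades
grades_ref = ["SAM", "SAM", "MAM", "Normal", "Normal", "Normal"]
grades_fw = ["SAM", "MAM", "MAM", "Normal", "Normal", "SAM"]
sens, spec = sensitivity_specificity(grades_ref, grades_fw)
kappa = cohen_kappa(grades_ref, grades_fw)
```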

  9. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
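
    With Q = predicted/observed as the accuracy ratio, one common formulation of the recommended metrics is the median of ln Q for bias and 100·(exp(median |ln Q|) − 1) for the median symmetric accuracy; a minimal sketch (the flux values are invented):

```python
import math

def _median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def median_log_accuracy_ratio(predicted, observed):
    """Bias measure: > 0 indicates over-prediction, < 0 under-prediction."""
    return _median([math.log(p / o) for p, o in zip(predicted, observed)])

def median_symmetric_accuracy(predicted, observed):
    """Accuracy as a percentage error, symmetric under over/under-prediction."""
    m = _median([abs(math.log(p / o)) for p, o in zip(predicted, observed)])
    return 100.0 * (math.exp(m) - 1.0)

# Invented model predictions vs. observations
predicted = [2.0, 1.0, 8.0]
observed = [1.0, 2.0, 4.0]
bias = median_log_accuracy_ratio(predicted, observed)
msa = median_symmetric_accuracy(predicted, observed)   # percent
```

    Unlike MAPE, both measures treat a factor-of-two over-prediction and a factor-of-two under-prediction identically, which is the drawback of percentage errors the report addresses.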

  10. Comparison of Accuracy of Contrast Enhanced Computed Tomography with Accuracy of Non-Contrast Magnetic Resonance Imaging in Evaluation of Local Extension of Base of Tongue Malignancies

    Directory of Open Access Journals (Sweden)

    Ketan Rathod

    2018-01-01

    Full Text Available Diagnosis of base of tongue malignancy can be obtained through clinical examination and biopsy. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are used to detect its local extension, nodal spread and distant metastases. The main aim of the study was to compare the accuracy of MRI and contrast-enhanced CT in determining the local extent of base of tongue malignancy. Twenty-five patients with biopsy-proven squamous cell carcinoma of the base of tongue were included. A 1.5 Tesla Magnetic Resonance unit was used, with T2-weighted axial and coronal, T1-weighted axial and coronal, and STIR (short tau inversion recovery) axial and coronal images, together with a 16-slice Computed Tomography unit acquiring non-contrast and contrast-enhanced images. The accuracy of CT versus MRI was: midline crossing, 50% versus 100%; anterior extension, 92% versus 100%; tonsillar fossa invasion, 83% versus 100%; oropharyngeal spread, 83% versus 100%; bone involvement, 20% versus 100%. MRI proved to be a better investigation than CT in terms of evaluating depth of invasion, presence of bony involvement, and extension to the opposite side, anterior half of tongue, tonsillar fossa, floor of mouth or oropharynx.

  11. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    Science.gov (United States)

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset-selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) Using multiple genes improves phylogenetic accuracy, even when their one-gene topologies are in conflict with each other. (2) The impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not number of represented taxa, are the most important criteria of a dataset for inclusion in concatenated analyses. (3) As an example, we tested the three classification models of the genus Paramecium with a multi-gene based approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Implementing an ultrasound-based protocol for diagnosing appendicitis while maintaining diagnostic accuracy

    International Nuclear Information System (INIS)

    Van Atta, Angela J.; Baskin, Henry J.; Maves, Connie K.; Dansie, David M.; Rollins, Michael D.; Bolte, Robert G.; Mundorff, Michael B.; Andrews, Seth P.

    2015-01-01

    The use of ultrasound to diagnose appendicitis in children is well-documented but not universally employed outside of pediatric academic centers, especially in the United States. Various obstacles make it difficult for institutions and radiologists to abandon a successful and accurate CT-based imaging protocol in favor of a US-based protocol. To describe how we overcame barriers to implementing a US-based appendicitis protocol among a large group of nonacademic private-practice pediatric radiologists while maintaining diagnostic accuracy and decreasing medical costs. A multidisciplinary team of physicians (pediatric surgery, pediatric emergency medicine and pediatric radiology) approved an imaging protocol using US as the primary modality to evaluate suspected appendicitis with CT for equivocal cases. The protocol addressed potential bias against US and accommodated for institutional limitations of radiologist and sonographer experience and availability. Radiologists coded US reports according to the probability of appendicitis. Radiology reports were compared with clinical outcomes to assess diagnostic accuracy. During the study period, physicians from each group were apprised of the interim US protocol accuracy results. Problematic cases were discussed openly. A total of 512 children were enrolled and underwent US for evaluation of appendicitis over a 30-month period. Diagnostic accuracy was comparable to published results for combined US/CT protocols. Comparing the first 12 months to the last 12 months of the study period, the proportion of children achieving an unequivocal US result increased from 30% (51/169) to 53% (149/282) and the proportion of children undergoing surgery based solely on US findings increased from 55% (23/42) to 84% (92/109). Overall, 63% (325/512) of patients in the protocol did not require a CT. Total patient costs were reduced by $30,182 annually. We overcame several barriers to implementing a US protocol. During the study period our

  13. Implementing an ultrasound-based protocol for diagnosing appendicitis while maintaining diagnostic accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Van Atta, Angela J. [University of Utah School of Medicine, Salt Lake City, UT (United States); Baskin, Henry J.; Maves, Connie K.; Dansie, David M. [Primary Children's Hospital, Department of Radiology, Salt Lake City, UT (United States); Rollins, Michael D. [University of Utah School of Medicine, Department of Surgery, Division of Pediatric Surgery, Salt Lake City, UT (United States); Bolte, Robert G. [University of Utah School of Medicine, Department of Pediatrics, Division of Pediatric Emergency Medicine, Salt Lake City, UT (United States); Mundorff, Michael B.; Andrews, Seth P. [Primary Children's Hospital, Systems Improvement, Salt Lake City, UT (United States)

    2015-05-01

    The use of ultrasound to diagnose appendicitis in children is well-documented but not universally employed outside of pediatric academic centers, especially in the United States. Various obstacles make it difficult for institutions and radiologists to abandon a successful and accurate CT-based imaging protocol in favor of a US-based protocol. To describe how we overcame barriers to implementing a US-based appendicitis protocol among a large group of nonacademic private-practice pediatric radiologists while maintaining diagnostic accuracy and decreasing medical costs. A multidisciplinary team of physicians (pediatric surgery, pediatric emergency medicine and pediatric radiology) approved an imaging protocol using US as the primary modality to evaluate suspected appendicitis with CT for equivocal cases. The protocol addressed potential bias against US and accommodated for institutional limitations of radiologist and sonographer experience and availability. Radiologists coded US reports according to the probability of appendicitis. Radiology reports were compared with clinical outcomes to assess diagnostic accuracy. During the study period, physicians from each group were apprised of the interim US protocol accuracy results. Problematic cases were discussed openly. A total of 512 children were enrolled and underwent US for evaluation of appendicitis over a 30-month period. Diagnostic accuracy was comparable to published results for combined US/CT protocols. Comparing the first 12 months to the last 12 months of the study period, the proportion of children achieving an unequivocal US result increased from 30% (51/169) to 53% (149/282) and the proportion of children undergoing surgery based solely on US findings increased from 55% (23/42) to 84% (92/109). Overall, 63% (325/512) of patients in the protocol did not require a CT. Total patient costs were reduced by $30,182 annually. We overcame several barriers to implementing a US protocol. During the study period our

  14. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% by the proposed calibration method, and by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
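
    The 51-state filter itself is beyond the scope of an abstract, but the measurement-update logic of Kalman-filter-based parameter calibration can be illustrated on a single constant parameter such as a gyro bias. All numbers and names below are invented for illustration:

```python
def calibrate_constant_parameter(measurements, meas_var, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant calibration parameter
    (e.g. a gyro bias): state model x_k = x_{k-1}, measurement
    z_k = x_k + noise with variance meas_var."""
    x, p = x0, p0
    for z in measurements:
        gain = p / (p + meas_var)   # Kalman gain
        x = x + gain * (z - x)      # measurement update
        p = (1.0 - gain) * p        # posterior variance shrinks each update
    return x, p

# Invented noisy observations of a 0.05 deg/h gyro bias
observations = [0.048, 0.053, 0.047, 0.052, 0.050]
bias_estimate, bias_var = calibrate_constant_parameter(observations, meas_var=1e-4)
```

    A full systematic calibration stacks many such parameters (biases, scale factors, misalignments, g-sensitivities, lever arms) into one state vector and uses turntable maneuvers to make each of them observable.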

  15. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  16. Improving the Stability and Accuracy of Power Hardware-in-the-Loop Simulation Using Virtual Impedance Method

    Directory of Open Access Journals (Sweden)

    Xiaoming Zha

    2016-11-01

    Full Text Available Power hardware-in-the-loop (PHIL systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulations are the closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI method for PHIL simulations that improves the simulation’s stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, which is composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. Then, the proposed VI method is implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through the PHIL simulation of two typical PHIL examples.

  17. Neuronavigation accuracy dependence on CT and MR imaging parameters: a phantom-based study

    International Nuclear Information System (INIS)

    Poggi, S; Pallotta, S; Russo, S; Gallina, P; Torresin, A; Bucciolini, M

    2003-01-01

    Clinical benefits from neuronavigation are well established. However, the complexity of its technical environment requires a careful evaluation of different types of errors. In this work, a detailed phantom study which investigates the accuracy in a neuronavigation procedure is presented. The dependence on many different imaging parameters, such as field of view, slice thickness and different kind of sequences (sequential and spiral for CT, T1-weighted and T2-weighted for MRI), is quantified. Moreover, data based on CT images are compared to those based on MR images, taking into account MRI distortion. Finally, the contributions to global accuracy coming from image acquisition, registration and navigation itself are discussed. Results demonstrate the importance of imaging accuracy. Procedures based on CT proved to be more accurate than procedures based on MRI. In the former, values from 2 to 2.5 mm are obtained for 95% fractiles of cumulative distribution of Euclidean distances between the intended target and the reached one while, in the latter, the measured values range from 3 to 4 mm. The absence of imaging distortion proved to be crucial for registration accuracy in MR-based procedures
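
    The accuracy figure used here, the 95% fractile of the Euclidean distances between intended and reached targets, can be computed directly from paired coordinates; a minimal sketch with invented coordinates:

```python
import math

def localization_errors(targets, reached):
    """Euclidean distance between each intended and reached target (mm)."""
    return [math.dist(t, r) for t, r in zip(targets, reached)]

def fractile(values, q=0.95):
    """Smallest sample value below which at least a fraction q of the data lies."""
    s = sorted(values)
    return s[math.ceil(q * len(s)) - 1]

# Invented target pairs with known errors of 1..5 mm
targets = [(0.0, 0.0, 0.0)] * 5
reached = [(1.0, 0, 0), (0, 2.0, 0), (0, 0, 3.0), (4.0, 0, 0), (0, 5.0, 0)]
errors_mm = localization_errors(targets, reached)
p95 = fractile(errors_mm, 0.95)
```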

  18. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  19. Transthoracic CT-guided biopsy with multiplanar reconstruction image improves diagnostic accuracy of solitary pulmonary nodules

    International Nuclear Information System (INIS)

    Ohno, Yoshiharu; Hatabu, Hiroto; Takenaka, Daisuke; Imai, Masatake; Ohbayashi, Chiho; Sugimura, Kazuro

    2004-01-01

    Objective: To evaluate the utility of multiplanar reconstruction (MPR) images for CT-guided biopsy and to determine the factors influencing diagnostic accuracy and the pneumothorax rate. Materials and methods: 390 patients with 396 pulmonary nodules underwent transthoracic CT-guided aspiration biopsy (TNAB) and transthoracic CT-guided cutting needle core biopsy (TCNB) as follows: 250 solitary pulmonary nodules (SPNs) underwent conventional CT-guided biopsy (conventional method), 81 underwent CT-fluoroscopic biopsy (CT-fluoroscopic method) and 65 underwent conventional CT-guided biopsy in combination with MPR images (MPR method). Success rate, overall diagnostic accuracy, pneumothorax rate and total procedure time were compared for each method. Factors affecting the diagnostic accuracy and pneumothorax rate of CT-guided biopsy were statistically evaluated. Results: Success rates (TNAB: 100.0%, TCNB: 100.0%) and overall diagnostic accuracies (TNAB: 96.9%, TCNB: 97.0%) of the MPR method were significantly higher than those of the conventional method (TNAB: 87.6 and 82.4%, TCNB: 86.3 and 81.3%) (P<0.05). Diagnostic accuracy was influenced by biopsy method, lesion size, and needle path length (P<0.05). Pneumothorax rate was influenced by pathological diagnostic method, lesion size, number of punctures and FEV1.0% (P<0.05). Conclusion: The use of MPR images for CT-guided lung biopsy improves diagnostic accuracy with no significant increase in pneumothorax rate or total procedure time.

  20. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  1. Smart Device-Supported BDS/GNSS Real-Time Kinematic Positioning for Sub-Meter-Level Accuracy in Urban Location-Based Services.

    Science.gov (United States)

    Wang, Liang; Li, Zishen; Zhao, Jiaojiao; Zhou, Kai; Wang, Zhiyu; Yuan, Hong

    2016-12-21

    Using mobile smart devices to provide urban location-based services (LBS) with sub-meter-level accuracy (around 0.5 m) is a major application field for future global navigation satellite system (GNSS) development. Real-time kinematic (RTK) positioning, a widely used GNSS-based positioning approach, can improve the accuracy from about 10-20 m (achieved by the standard positioning services) to about 3-5 cm with geodetic receivers. To achieve sub-meter-level positioning with smart devices, a feasible solution combining a low-cost GNSS module with the smart device is proposed in this work, and user-side GNSS RTK positioning software was developed from scratch on the Android platform. Its real-time positioning performance was validated by BeiDou Navigation Satellite System/Global Positioning System (BDS/GPS) combined RTK positioning under static and kinematic (rover velocity of 50-80 km/h) conditions in a real urban environment with a SAMSUNG Galaxy A7 smartphone. The results show that the fixed-rates of ambiguity resolution (the proportion of epochs with ambiguities fixed) for BDS/GPS combined RTK in the static and kinematic tests were about 97% and 90%, respectively, and the average positioning accuracies (RMS) were better than 0.15 m (horizontal) and 0.25 m (vertical) for the static test, and 0.30 m (horizontal) and 0.45 m (vertical) for the kinematic test.
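
    The two figures of merit quoted, the ambiguity fixed-rate and the positioning RMS, are simple aggregates over epochs; a minimal sketch (the epoch structure and sample numbers are assumptions, not the paper's data):

```python
import math

def fixed_rate(epochs):
    """Proportion of epochs in which the integer ambiguities were fixed."""
    return sum(e["fixed"] for e in epochs) / len(epochs)

def rms(errors):
    """Root-mean-square positioning error."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented epoch log and horizontal error series (meters)
epochs = [{"fixed": True}] * 9 + [{"fixed": False}]
horizontal_errors_m = [0.10, -0.10, 0.10, -0.10]
fr = fixed_rate(epochs)            # 0.9 -> a "90% fixed-rate"
h_rms = rms(horizontal_errors_m)
```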

  2. The Accuracy of Cognitive Monitoring during Computer-Based Instruction.

    Science.gov (United States)

    Garhart, Casey; Hannafin, Michael J.

    This study was conducted to determine the accuracy of learners' comprehension monitoring during computer-based instruction and to assess the relationship between enroute monitoring and different levels of learning. Participants were 50 university undergraduate students enrolled in an introductory educational psychology class. All students received…

  3. Collaboration between radiological technologists (radiographers) and junior doctors during image interpretation improves the accuracy of diagnostic decisions

    International Nuclear Information System (INIS)

    Kelly, B.S.; Rainford, L.A.; Gray, J.; McEntee, M.F.

    2012-01-01

    Rationale and Objectives: In Emergency Departments (ED) junior doctors regularly make diagnostic decisions based on radiographic images. This study investigates whether collaboration between junior doctors and radiographers affects diagnostic accuracy. Materials and Methods: Research was carried out in the ED of a university teaching hospital and included 10 pairs of participants. Radiographers and junior doctors were shown 42 wrist radiographs and 40 CT Brains and were asked for their level of confidence in the presence or absence of distal radius fractures or fresh intracranial bleeds respectively using ViewDEX software, first working alone and then in pairs. Receiver Operating Characteristic analysis was used to assess performance. Results were compared using one-way analysis of variance. Results: The results showed statistically significant improvements in the Area Under the Curve (AUC) of the junior doctors when working with the radiographers for both sets of images (wrist and CT) treated as random readers and cases (p ≤ 0.008 and p ≤ 0.0026 respectively). While the radiographers’ results showed no significant changes, their mean Az values did show an increasing trend when working in collaboration. Conclusion: The improvement in performance of junior doctors following collaboration strongly suggests the potential to improve the accuracy of patient diagnosis and therefore patient care. Further training for junior doctors in the interpretation of diagnostic images should also be considered. Decision-making of junior doctors was positively affected by introducing the opinion of a radiographer. Collaboration exceeds the sum of the parts; the two professions are better together.

  4. Role of Effective Density in improving the accuracy of bottom pressure based sea level measurements - A case study from Gulf of Guinea

    Digital Repository Service at National Institute of Oceanography (India)

    Joseph, A.; Mehra, P.; Desai, R.G.P.; Dotse, J.; Odametey, J.T.; Nkebi, E.K.; Vijaykumar, K.; Prabhudesai, S.P.

    of the sea water, as a result of turbulence induced by wind forcing and impact of rain drops, is suspected to be responsible for the observed effective reduction in the in situ density of this shallow water body The accuracy of bottom-pressure based sea level...

  5. A high accuracy land use/cover retrieval system

    Directory of Open Access Journals (Sweden)

    Alaa Hefnawy

    2012-03-01

    Full Text Available The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established for improving the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution fast adaptive content-based retrieval system for satellite images. Through our proposed system, we apply a super-resolution technique to the Landsat-TM images to obtain a high-resolution dataset. The human–computer interactive system is based on a modified radial basis function network for retrieval of satellite database images. We apply a backpropagation-supervised artificial neural network classifier to both the multi-resolution and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.

  6. Analysis of prostate cancer localization toward improved diagnostic accuracy of transperineal prostate biopsy

    Directory of Open Access Journals (Sweden)

    Yoshiro Sakamoto

    2014-09-01

    Conclusions: The concordance of prostate cancer between prostatectomy specimens and biopsies is comparatively favorable. According to our study, the diagnostic accuracy of transperineal prostate biopsy can be improved in our institute by including the anterior portion of the Apex-Mid and Mid regions in the 12-core biopsy or 16-core biopsy, such that a 4-core biopsy of the anterior portion is included.

  7. How could the replica method improve accuracy of performance assessment of channel coding?

    Energy Technology Data Exchange (ETDEWEB)

    Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of technology, Yokohama 226-8502 (Japan)], E-mail: kaba@dis.titech.ac.jp

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.

  8. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    International Nuclear Information System (INIS)

    D’Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-01-01

    A procedure is described in this paper for the accuracy improvement of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the uncertainty measurement is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  9. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Energy Technology Data Exchange (ETDEWEB)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it; Natale, Emanuela, E-mail: emanuela.natale@univaq.it [University of L’Aquila, Department of Industrial and Information Engineering and Economics (DIIIE), via G. Gronchi, 18, 67100 L’Aquila (Italy)

    2016-06-28

    A procedure is described in this paper for the accuracy improvement of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the uncertainty measurement is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  10. Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites

    Directory of Open Access Journals (Sweden)

    Ming Xu

    2012-10-01

    Full Text Available Super low altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in design for achieving image positioning accuracy is discussed in this paper based on the principles of photogrammetry theory. The combined design of key parameters ensures both the exact reconstruction of the line of sight of each detection element and the precise intersection of this direction with the Earth ellipsoid at the moment the camera on the satellite is imaging. These parameters include: orbit determination accuracy, attitude determination accuracy, camera exposure time, accurate synchronization of ephemeris reception with attitude data, geometric calibration, and precise orbit verification. Precise simulation calculations show that the image positioning accuracy of super low altitude remote sensing satellites is not obviously improved: the attitude determination error of a satellite still restricts its positioning accuracy.
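    The geometric core of the chain described above — rebuilding each detection element's line of sight and intersecting it with the Earth ellipsoid — reduces to solving a quadratic in the ray parameter. A minimal ECEF sketch (WGS84 semi-axes assumed; the operational processing chain is of course far more involved):

```python
import math

A_WGS84 = 6378137.0          # WGS84 semi-major axis (m)
B_WGS84 = 6356752.314245     # WGS84 semi-minor axis (m)

def los_ellipsoid_intersect(p, d, a=A_WGS84, b=B_WGS84):
    """First intersection of the ray p + t*d (t >= 0, ECEF metres) with the
    ellipsoid x^2/a^2 + y^2/a^2 + z^2/b^2 = 1, or None if the ray misses."""
    px, py, pz = p
    dx, dy, dz = d
    # substitute the ray into the ellipsoid equation -> q2*t^2 + q1*t + q0 = 0
    q2 = (dx * dx + dy * dy) / a**2 + dz * dz / b**2
    q1 = 2.0 * ((px * dx + py * dy) / a**2 + pz * dz / b**2)
    q0 = (px * px + py * py) / a**2 + pz * pz / b**2 - 1.0
    disc = q1 * q1 - 4.0 * q2 * q0
    if disc < 0.0:
        return None
    t = (-q1 - math.sqrt(disc)) / (2.0 * q2)  # nearer root = first surface hit
    if t < 0.0:
        return None
    return (px + t * dx, py + t * dy, pz + t * dz)
```
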

  11. Establishment of laser-induced breakdown spectroscopy in a vacuum atmosphere for accuracy improvement

    International Nuclear Information System (INIS)

    Kim, Seung Hyun; Kim, H. D.; Shin, H. S.

    2009-06-01

    This report describes the fundamentals of Laser Induced Breakdown Spectroscopy (LIBS) and a quantitative analysis method under vacuum conditions for obtaining high measurement accuracy. The LIBS system employs the following major components: a pulsed laser, a gas chamber, an emission spectrometer, a detector, and a computer. When the output from a pulsed laser is focused onto a small spot on a sample, an optically induced plasma, called a laser-induced plasma (LIP), is formed at the surface. LIBS is a sensitive laser-based optical technique used to detect certain atomic and molecular species by monitoring the emission signals from a LIP. The report covers the fundamentals of LIBS and the current state of research, and describes the optimization of measurement conditions and the characteristic analysis of a LIP through measurements of fundamental metals. The LIBS system shows measurement errors of about 0.63∼5.82% on the calibration curves for Cu, Cr and Ni, and measurement errors of less than about 5% on the calibration curves for Nd and Sm. As a result, the LIBS accuracy was somewhat improved over earlier work under the optimized conditions.

  12. High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller

    Science.gov (United States)

    Li, Yaoling; Wu, Zhong

    2018-03-01

    The problem of resolver-to-digital conversion (RDC) is transformed into a problem of angle tracking control, and a phase-locked loop (PLL) method based on a PID controller is proposed in this paper. The controller comprises a typical PI controller plus an incomplete differential, which avoids amplifying higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a type-III system, and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
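    The angle-tracking loop can be sketched as a small simulation. This is a minimal illustration, not the paper's converter: the resolver envelopes are assumed already demodulated so that the phase detector output is sin(θ − θ̂), the gains are illustrative, and the "incomplete differential" appears as a low-pass-filtered derivative term:

```python
import math

def track_resolver(theta_seq, kp=200.0, ki=8000.0, kd=0.02, alpha=0.9, dt=1e-4):
    """Angle-tracking PLL: phase detector + loop filter (PI plus an
    incomplete, i.e. low-pass filtered, differential) + integrator (NCO)."""
    est = 0.0        # tracked angle estimate (rad)
    integ = 0.0      # integral state of the PI term
    d_state = 0.0    # filtered-derivative state
    prev_e = 0.0
    out = []
    for theta in theta_seq:
        # phase detector: sin(theta)cos(est) - cos(theta)sin(est) = sin(theta - est)
        e = math.sin(theta) * math.cos(est) - math.cos(theta) * math.sin(est)
        integ += e * dt
        # incomplete differential: raw derivative through a 1st-order low-pass
        d_state = alpha * d_state + (1.0 - alpha) * (e - prev_e) / dt
        prev_e = e
        rate = kp * e + ki * integ + kd * d_state
        est += rate * dt
        out.append(est)
    return out
```

    For a constant-speed rotation (a phase ramp), the integrator in the loop filter drives the steady-state tracking error to zero, which is the benefit of raising the loop type.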

  13. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
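    The gene-masking idea — a binary chromosome that switches features on or off, scored by the wrapped classifier's accuracy — can be sketched as follows. The classifier here is a simple nearest-centroid stand-in and all GA settings are illustrative, not the paper's configuration:

```python
import random

def nearest_centroid_acc(X, y, mask):
    """Training-set accuracy of a nearest-centroid classifier restricted to
    the features where mask[j] == 1 (fitness of one chromosome)."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {}
    for c in set(y):
        pts = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(p[j] for p in pts) / len(pts) for j in feats]
    correct = 0
    for x, lab in zip(X, y):
        pred = min(cents, key=lambda c: sum((x[j] - cj) ** 2
                                            for j, cj in zip(feats, cents[c])))
        if pred == lab:
            correct += 1
    return correct / len(y)

def gene_mask_ga(X, y, n_gen=30, pop_size=20, p_mut=0.1, seed=1):
    """Evolve a binary feature mask: elitism, one-point crossover, bit-flip
    mutation; fitness is the wrapped classifier's accuracy."""
    rng = random.Random(seed)
    n = len(X[0])
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=lambda m: nearest_centroid_acc(X, y, m),
                        reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            children.append([g ^ (rng.random() < p_mut) for g in child])
        pop = elite + children
    return max(pop, key=lambda m: nearest_centroid_acc(X, y, m))
```
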

  14. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Science.gov (United States)

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

    Oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, employing an optical frequency division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method reduces the measured RMSE to 0.0965, a notable improvement in the accuracy of oxygen saturation measurement. The method can also simplify the circuit and reduce the component requirements. Furthermore, it provides a useful reference for improving the signal-to-noise ratio of other physiological signals.

  15. Accuracy optimization with wavelength tunability in overlay imaging technology

    Science.gov (United States)

    Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna

    2018-03-01

    As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, the overlay budget is accordingly being reduced. The overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast (precision), including its effect on the total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and demonstrate their quality in identifying measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.

  16. Best Practices for Mudweight Window Generation and Accuracy Assessment between Seismic Based Pore Pressure Prediction Methodologies for a Near-Salt Field in Mississippi Canyon, Gulf of Mexico

    Science.gov (United States)

    Mannon, Timothy Patrick, Jr.

    Improving well design has been, and always will be, the primary goal of drilling operations in the oil and gas industry. Oil and gas plays continue to move into increasingly hostile drilling environments, including near-salt and/or sub-salt settings. The ability to reduce the risk and uncertainty involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study shows a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity to salt intrusions. Analysis of offset wells has been conducted in the interest of developing an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and the mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to enhance the well design for two proposed wells. For the first time in the literature, the accuracy of the commonly accepted seismic interval velocity based methodology and the relatively new seismic frequency based methodology for pore pressure prediction are qualitatively and quantitatively compared. Accuracy standards are based on the agreement of the seismic outputs with pressure data obtained while drilling and with petrophysically based pore pressure outputs for each well. The results show significantly higher accuracy for the seismic frequency based approach in near/sub-salt wells, and higher overall accuracy across all of the wells in the study.

  17. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the
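    The PCA filtering step for the tissue curves of a segmented square can be sketched as follows: stack the time-concentration curves as rows, project onto the leading principal components, and reconstruct. This is illustrative only; the paper additionally selects the component count with a fraction-of-residual-information criterion, which is omitted here:

```python
import numpy as np

def pca_filter(curves, n_components=5):
    """Denoise a set of time-concentration curves (rows of `curves`) by
    keeping only the first `n_components` principal components."""
    X = np.asarray(curves, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_components, len(s))
    return mean + (U[:, :k] * s[:k]) @ Vt[:k, :]
```
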

  18. Alpha neurofeedback training improves SSVEP-based BCI performance

    Science.gov (United States)

    Wan, Feng; Nuno da Cruz, Janir; Nan, Wenya; Wong, Chi Man; Vai, Mang I.; Rosa, Agostinho

    2016-06-01

    Objective. Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can provide relatively easy, reliable and high speed communication. However, the performance is still not satisfactory, especially for users who are not able to generate strong enough SSVEP signals. This work aims to strengthen a user’s SSVEP by alpha down-regulating neurofeedback training (NFT) and consequently improve the user's performance with SSVEP-based BCIs. Approach. An experiment with two steps was designed and conducted. The first step was to investigate the relationship between resting alpha activity and SSVEP-based BCI performance, in order to determine the training parameter for the NFT. In the second step, half of the subjects with ‘low’ performance (i.e. BCI classification accuracy <80%) were randomly assigned to an NFT group to perform real-time NFT, and the other half to a non-NFT control group for comparison. Main results. The first step revealed a significant negative correlation between BCI performance and the individual alpha band (IAB) amplitude in the eyes-open resting condition in a total of 33 subjects. In the second step, it was found that during the IAB down-regulating NFT, on average the subjects were able to successfully decrease their IAB amplitude over the training sessions. More importantly, the NFT group showed an average increase of 16.5% in the SSVEP signal SNR (signal-to-noise ratio) and an average increase of 20.3% in BCI classification accuracy, which was significant compared to the non-NFT control group. Significance. These findings indicate that alpha down-regulating NFT can be used to improve SSVEP signal quality and subjects’ performance with SSVEP-based BCIs. It could be helpful to SSVEP-related studies and would contribute to more effective SSVEP-based BCI applications.

  19. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    DEFF Research Database (Denmark)

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47...

  20. A multi-view face recognition system based on cascade face detector and improved Dlib

    Science.gov (United States)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector is used to extract Haar-like features from the training samples, and these features are used to train a cascade classifier with the Adaboost algorithm. For face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we applied the proposed method to recognizing face images taken from different viewing directions, including the horizontal view, overlooking view, and looking-up view, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition; it has been simulated and tested, showing satisfactory experimental results.

  1. Smart Device-Supported BDS/GNSS Real-Time Kinematic Positioning for Sub-Meter-Level Accuracy in Urban Location-Based Services

    Directory of Open Access Journals (Sweden)

    Liang Wang

    2016-12-01

    Full Text Available Using mobile smart devices to provide urban location-based services (LBS) with sub-meter-level accuracy (around 0.5 m) is a major application field for future global navigation satellite system (GNSS) development. Real-time kinematic (RTK) positioning, which is a widely used GNSS-based positioning approach, can improve the accuracy from about 10–20 m (achieved by the standard positioning services) to about 3–5 cm based on geodetic receivers. To achieve positioning with sub-meter-level accuracy on smart devices, a feasible solution combining a low-cost GNSS module with a smart device is proposed in this work, and a user-side GNSS RTK positioning software was developed from scratch on the Android platform. Its real-time positioning performance was validated by BeiDou Navigation Satellite System/Global Positioning System (BDS/GPS) combined RTK positioning in both static and kinematic (rover velocity of 50–80 km/h) modes in a real urban environment with a SAMSUNG Galaxy A7 smartphone. The results show that the fixed rates of ambiguity resolution (the proportion of epochs with fixed ambiguities) for BDS/GPS combined RTK in the static and kinematic tests were about 97% and 90%, respectively, and the average positioning accuracies (RMS) were better than 0.15 m (horizontal) and 0.25 m (vertical) for the static test, and 0.30 m (horizontal) and 0.45 m (vertical) for the kinematic test.

  2. High accuracy digital aging monitor based on PLL-VCO circuit

    International Nuclear Information System (INIS)

    Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong

    2015-01-01

    As the manufacturing process is scaled down to the nanoscale, aging phenomena significantly affect the reliability and lifetime of integrated circuits. Consequently, precise measurement of digital CMOS aging is a key aspect of nanoscale aging-tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect thanks to a characteristic of the PLL: its frequency is unaffected by circuit aging. The PLL-VCO monitor is implemented in TSMC low power 65 nm CMOS technology and occupies an area of 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy by about 2.4% at high temperature and 18.7% at high voltage. (semiconductor integrated circuits)

  3. Improving the accuracy of ultrafast ligand-based screening: incorporating lipophilicity into ElectroShape as an extra dimension.

    Science.gov (United States)

    Armstrong, M Stuart; Finn, Paul W; Morris, Garrett M; Richards, W Graham

    2011-08-01

    In a previous paper, we presented the ElectroShape method, which we used to achieve successful ligand-based virtual screening. It extended classical shape-based methods by applying them to the four-dimensional shape of the molecule, where partial charge was used as the fourth dimension to capture electrostatic information. This paper extends the approach by using atomic lipophilicity (alogP) as an additional molecular property and validates it using the improved release 2 of the Directory of Useful Decoys (DUD). When alogP replaced partial charge, the enrichment results were slightly below those of ElectroShape, though still far better than those of purely shape-based methods. However, when alogP was added as a complement to partial charge, the resulting five-dimensional enrichments show a clear improvement in performance. This demonstrates the utility of extending the ElectroShape virtual screening method by adding other atom-based descriptors.
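    The flavor of the descriptor construction can be conveyed with a USR-style sketch: lift each atom to a 5-D point (coordinates plus scaled partial charge and scaled alogP), summarize distances to reference points by their moments, and compare molecules with an inverse-Manhattan score. The reference-point set, the number of moments, and the scale factors below are simplified assumptions, not the published ElectroShape parameterization:

```python
import math

def descriptor(points):
    """Moment-based descriptor of a 5-D point cloud: for the centroid and the
    point farthest from it, record mean and std of all distances to that ref."""
    n = len(points)
    dim = len(points[0])
    centroid = tuple(sum(p[i] for p in points) / n for i in range(dim))

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    refs = [centroid, max(points, key=lambda p: dist(p, centroid))]
    desc = []
    for r in refs:
        ds = [dist(p, r) for p in points]
        mu = sum(ds) / n
        sd = math.sqrt(sum((d - mu) ** 2 for d in ds) / n)
        desc += [mu, sd]
    return desc

def similarity(pa, pb):
    """Inverse-Manhattan similarity in (0, 1]; 1.0 for identical clouds."""
    da, db = descriptor(pa), descriptor(pb)
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(da, db)) / len(da))

def atom_to_5d(x, y, z, charge, logp, q_scale=25.0, lp_scale=5.0):
    """Lift an atom to 5-D; the scale factors weighting charge and
    lipophilicity against geometry are illustrative assumptions."""
    return (x, y, z, q_scale * charge, lp_scale * logp)
```

    Because only inter-point distances enter the descriptor, the score is invariant to rigid translation of the conformer, which is what makes alignment-free screening this fast.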

  4. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks.

    Science.gov (United States)

    Hawkins, S C; Osborne, A; Schofield, S J; Pournaras, D J; Chester, J F

    2012-01-01

    Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines whether the inclusion of a defined standard benchmark performance, in association with video feedback of a student's own performance, improves the accuracy of student self-assessment of clinical skills. Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study and demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated a moderate positive correlation with expert assessor scores (r = 0.48). After the benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83). The inclusion of a benchmark performance in combination with video feedback may significantly improve the accuracy of students' self-assessments.

  5. Improved GPS-based Satellite Relative Navigation Using Femtosecond Laser Relative Distance Measurements

    Directory of Open Access Journals (Sweden)

    Hyungjik Oh

    2016-03-01

    Full Text Available This study developed an approach for improving Carrier-phase Differential Global Positioning System (CDGPS) based real-time satellite relative navigation by applying laser baseline measurement data. Robustness against the space operational environment was considered, and a Synthetic Wavelength Interferometer (SWI) algorithm based on a femtosecond laser measurement model was developed. The phase differences between two laser wavelengths were combined to measure precise distance. The generated laser data were used to improve the estimation accuracy of the float ambiguity of the CDGPS data. Real-time relative navigation simulations were performed using the extended Kalman filter algorithm. The GPS-and-laser combined relative navigation accuracy was compared with GPS-only relative navigation solutions to determine the impact of the laser data on relative navigation. In the numerical simulations, the success rate of integer ambiguity resolution increased when laser data were added to the GPS data. The relative navigation errors also improved five-fold and two-fold relative to the GPS-only error for 250 m and 5 km initial relative distances, respectively. The methodology developed in this study is suitable for application to future satellite formation-flying missions.

  6. Improving treatment planning accuracy through multimodality imaging

    International Nuclear Information System (INIS)

    Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun

    1996-01-01

    the patient's initial fields and boost, respectively. Case illustrations are shown. Conclusions: We have successfully integrated multimodality imaging into our treatment-planning system, and its routine use is increasing. Multimodality imaging holds out the promise of improving treatment planning accuracy and, thus, takes maximum advantage of three dimensional treatment planning systems.

  7. Robotic excavator trajectory control using an improved GA based PID controller

    Science.gov (United States)

    Feng, Hao; Yin, Chen-Bo; Weng, Wen-wen; Ma, Wei; Zhou, Jun-jing; Jia, Wen-hua; Zhang, Zi-li

    2018-05-01

    In order to achieve excellent trajectory tracking performance, an improved genetic algorithm (IGA) is presented to search for the optimal proportional-integral-derivative (PID) controller parameters for a robotic excavator. Firstly, the mathematical models of the kinematics and the electro-hydraulic proportional control system of the excavator are analyzed based on the mechanism modeling method. On this basis, the actual model of the electro-hydraulic proportional system is established through an identification experiment. Furthermore, the population, fitness function, crossover probability, and mutation probability of the SGA are improved: the initial PID parameters are calculated by the Ziegler-Nichols (Z-N) tuning method and the initial population is generated near them; the fitness function is transformed to maintain the diversity of the population; and the probabilities of crossover and mutation are adjusted automatically to avoid premature convergence. Moreover, a simulation study is carried out to evaluate the time-response performance of the proposed IGA-based PID controller against the SGA- and Z-N-based PID controllers with a step signal. The simulation study showed that the proposed controller provides the shortest rise time and settling time, of 1.23 s and 1.81 s respectively, among the tested controllers. Finally, two types of trajectories are designed to validate the performance of the control algorithms, and experiments are performed on the excavator trajectory control experimental platform. The experimental work demonstrated that the proposed IGA-based PID controller improves the trajectory accuracy of the horizontal-line and slope-line trajectories by 23.98% and 23.64%, respectively, in comparison to the SGA-tuned PID controller. The results further indicate that the proposed IGA-based PID controller is effective for improving tracking accuracy and may be employed in the trajectory control of an actual excavator.
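    The Z-N seeding step described above — classical Ziegler-Nichols gains computed from the ultimate gain and period, with the initial GA population generated near them — can be sketched as follows (the perturbation width is an illustrative assumption, not the paper's setting):

```python
import random

def zn_pid(ku, tu):
    """Classical Ziegler-Nichols PID tuning from the ultimate gain ku and
    ultimate oscillation period tu (seconds); returns (kp, ki, kd)."""
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0
    return kp, kp / ti, kp * td

def seed_population(ku, tu, pop_size=30, spread=0.2, seed=42):
    """Initial GA population: the Z-N gains themselves plus Gaussian
    perturbations around them, clipped to stay positive."""
    rng = random.Random(seed)
    base = zn_pid(ku, tu)
    pop = [base]
    while len(pop) < pop_size:
        pop.append(tuple(max(1e-6, g * (1.0 + rng.gauss(0.0, spread)))
                         for g in base))
    return pop
```

    Seeding near a known-reasonable operating point is what lets the GA refine rather than search blindly, which is consistent with the convergence behavior the paper reports.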

  8. Improvement in QA protocol for TLD-based personnel monitoring laboratory in the last five years

    International Nuclear Information System (INIS)

    Rakesh, R.B.

    2018-01-01

    Quality Assurance (QA) in personnel monitoring (PM) is a tool to assess the performance of PM laboratories and the reliability of dose estimation with respect to standards laid down by international agencies such as the IAEA (ISO trumpet curve), IEC, ANSI, etc. Reliable personal dose estimation is a basic requirement for radiation protection planning as well as decision making. Continuous improvement is inherent in radiation protection practice, and it is highly dependent on the accuracy and reliability of the monitoring data. Experience-based evolution of quality control (QC) measures and of the QA protocol are two important aspects of continuous improvement in the accuracy and reliability of personnel monitoring results. The paper describes improvements in QC measures and QA protocols initiated during the last five years which led to improvement in the quality of PM services.

  9. MR staging accuracy for endometrial cancer based on the new FIGO stage

    International Nuclear Information System (INIS)

    Shin, Kyung Eun; Park, Byung Kwan; Kim, Chan Kyo; Bae, Duk Soo; Song, Sang Yong; Kim, Bohyun

    2011-01-01

    Background: Magnetic resonance imaging (MRI) has frequently been used to determine the preoperative treatment plan for gynecologic cancers. However, MR accuracy for staging an endometrial cancer is not satisfactory under the old FIGO staging system. Purpose: To evaluate MR accuracy for staging endometrial cancer using the new FIGO staging system. Material and Methods: Between January 2005 and May 2009, 199 women underwent surgery due to endometrial cancer. In each patient, the endometrial cancer was staged using MR findings based on the old FIGO staging system and then re-staged according to the new FIGO staging system for comparison. Histopathologic findings were used as the standard of reference. Results: The accuracy of MRI in staging endometrial carcinoma stages I, II, III, and IV using the old FIGO staging system was 80% (159/199), 89% (178/199), 90% (179/199), and 99% (198/199), respectively, compared to 87% (174/199), 97% (193/199), 90% (179/199), and 99% (198/199), respectively, when using the new FIGO staging criteria. The overall MR accuracies under the old and new staging systems were 51% (101/199) and 81% (161/199), respectively. Conclusion: MRI has become a more useful tool in the preoperative staging of endometrial cancer, with increased accuracy, using the new FIGO staging system compared to the old one.

  10. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  11. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran A Monaghan

    Full Text Available Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. A difference in body size between taxa from the respective tolerance classes is a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa, and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism.
At any particular site, the net bias is a probabilistic function of the sample data
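The scoring mechanics discussed above can be sketched numerically. The tolerance scores, family names, and counts below are hypothetical (real BMWP family scores differ); the sketch only illustrates how a presence-based index, its average-score-per-taxon variant, and an abundance-weighted average respond differently when a pollution-tolerant taxon dominates the sample.

```python
# Hypothetical tolerance scores; real BMWP family scores differ.
scores = {"Heptageniidae": 10, "Gammaridae": 6, "Baetidae": 4, "Chironomidae": 2}

def bmwp_style(sample):
    """Presence-based index: sum the scores of families present."""
    return sum(scores[family] for family in sample)

def aspt_style(sample):
    """Average score per taxon: index value divided by indicator richness."""
    return bmwp_style(sample) / len(sample)

def abundance_weighted(sample):
    """Abundance-weighted mean score; skewed abundances pull the value."""
    total = sum(sample.values())
    return sum(scores[f] * n for f, n in sample.items()) / total

# A sample dominated by the pollution-tolerant Chironomidae.
sample = {"Heptageniidae": 2, "Gammaridae": 10, "Baetidae": 5, "Chironomidae": 80}
print(bmwp_style(sample))          # presence-based value
print(aspt_style(sample))          # per-taxon average
print(abundance_weighted(sample))  # pulled toward the tolerant taxon's score
```

With the counts above, the presence-based values (22 and 5.5) are unaffected by the dominance of the tolerant family, while the abundance-weighted value falls to about 2.7; the treatment of abundance data is exactly the kind of design choice the review identifies as a bias source.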

  12. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    Directory of Open Access Journals (Sweden)

    Kyuman Lee

    2016-08-01

Full Text Available The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as a relatively large vertical error and service restrictions, because the user position is estimated from airborne relays located in only one direction and the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process, based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated from the initial estimated user position and the known reference positions. To prevent performance degradation, the re-estimation is performed only after comparing the expected position errors to determine whether it is required. If the navigation signals are insufficient, such as when the user is outside airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of the absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference
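The core of any such scheme is a least-squares position fix from range-type measurements. The following is a generic Gauss-Newton sketch with hypothetical anchor coordinates, not the paper's ARPS algorithm (which additionally estimates relay positions and folds in virtual reference-station measurements), but it shows the estimation step both cases share.

```python
import numpy as np

def trilaterate(anchors, ranges, x0, iters=20):
    """Gauss-Newton least-squares position fix from range measurements."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)      # predicted ranges
        J = (x - anchors) / d[:, None]               # d(range)/d(position)
        x = x + np.linalg.lstsq(J, ranges - d, rcond=None)[0]
    return x

# Four hypothetical anchors (relays/stations) and noiseless ranges to them.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
truth = np.array([30.0, 40.0])
ranges = np.linalg.norm(anchors - truth, axis=1)
fix = trilaterate(anchors, ranges, x0=[50.0, 50.0])
```

With more than four signals the system is overdetermined and the same least-squares update simply gains rows, which is why adding a virtual or direct reference measurement slots naturally into this step.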

  13. Improving the surface metrology accuracy of optical profilers by using multiple measurements

    Science.gov (United States)

    Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan

    2016-10-01

The performance of high-resolution optical systems is affected by small angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable on their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The estimated intensity of the white noise decreases as the number of averaged measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with analysis of both the residual reference error after calibration and the random errors appearing in the range of measured spatial frequencies. The resulting insights on the effects of white noise in optical profiler measurements and the methods to mitigate them may prove invaluable to improve the quality of surface metrology with optical profilers.
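The averaging step itself is simple to demonstrate. In this sketch (synthetic profile, arbitrary noise level), averaging 16 independent measurements of the same profile reduces the residual white-noise level by roughly a factor of four, i.e. by sqrt(N), which is what shifts the flat noise floor of the PSD downward.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 1024
profile = np.cumsum(rng.normal(0.0, 1e-3, n_points))  # stand-in rough surface
sigma = 0.5  # additive white measurement noise (arbitrary units)

def averaged_measurement(n_times):
    """Average n_times independent measurements of the same profile."""
    runs = profile + rng.normal(0.0, sigma, (n_times, n_points))
    return runs.mean(axis=0)

err_single = np.std(averaged_measurement(1) - profile)
err_avg16 = np.std(averaged_measurement(16) - profile)
print(err_single, err_avg16)  # the second is roughly 4x smaller
```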

  14. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    Science.gov (United States)

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)-PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, the chaotic nature of real cooling load demand is analyzed, and the non-stationary historical cooling load data are decomposed into several stationary intrinsic mode functions (IMFs) using EMD. Second, the RBFNN prediction accuracies of the individual IMFs are compared and an IMF combining scheme is proposed: the lower-frequency components are combined (IMF4-IMF6), while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, phase space is reconstructed for each combined component separately; the highest-frequency component (IMF1) is processed by differencing and prediction is performed with RBFNN in the reconstructed phase spaces. Real cooling load data from a centralized ice-storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
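Of the three stages, phase space reconstruction is the easiest to show in isolation. Below is a standard time-delay embedding sketch; the delay and embedding dimension are illustrative (in practice they are chosen, e.g., by mutual-information and false-nearest-neighbour tests), and the EMD stage would come from a dedicated library rather than being re-implemented here.

```python
import numpy as np

def reconstruct_phase_space(x, dim, tau):
    """Time-delay embedding: row t is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Illustrative series standing in for one combined IMF component.
series = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
embedded = reconstruct_phase_space(series, dim=3, tau=5)
print(embedded.shape)  # one training vector per row for the RBFNN
```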

  15. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    Science.gov (United States)

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  16. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    Science.gov (United States)

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus standard manual technique and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t -test and Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean 9.6 mm (p 5 mm and 27% (n = 8) were > 10 mm. In the PSI cases, deviations were 10% (n = 3) and 0 % (n = 0), respectively. For angular deviation from pre-operative plans, we observed a mean improvement of 7.06° (p Cite this article : A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  17. Modeling of Geometric Error in Linear Guide Way to Improved the vertical three-axis CNC Milling machine’s accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through mathematical modeling of machine-tool geometric errors. Machine inaccuracy can be caused by geometric errors, which are an important factor both during the manufacturing process and during the assembly phase, and which must be controlled to build machines with high accuracy. The accuracy of a three-axis vertical milling machine can be improved by identifying the geometric errors and the error position parameters in the machine tool through mathematical modeling. The geometric error model of the machine tool comprises twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three squareness (perpendicularity) error parameters. The modeling approach calculates the alignment and angular errors in the components supporting the machine motion, namely the linear guide ways and linear drives. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of a CNC machine tool illustrates the relationship between alignment error, position, and angle along a linear guide way of a three-axis vertical milling machine.
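Under the small-angle approximation, each such error source enters as a homogeneous transformation matrix, and the errors of successive axes compose by matrix multiplication along the kinematic chain. The sketch below uses hypothetical error values for two axes; a full model would chain all twenty-one parameters.

```python
import numpy as np

def error_htm(dx, dy, dz, ea, eb, ec):
    """Homogeneous transform for small linear (dx, dy, dz) and angular
    (ea, eb, ec, in radians) errors, using the small-angle approximation."""
    return np.array([
        [1.0, -ec,  eb,  dx],
        [ ec, 1.0, -ea,  dy],
        [-eb,  ea, 1.0,  dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Hypothetical error motions of the X and Y carriages (metres, radians).
Ex = error_htm(2e-6, 1e-6, 0.0, 0.0, 5e-6, 0.0)
Ey = error_htm(0.0, 3e-6, 1e-6, 2e-6, 0.0, 0.0)

nominal = np.array([0.0, 0.0, 0.1, 1.0])  # nominal tool point, homogeneous
actual = Ex @ Ey @ nominal
deviation = actual[:3] - nominal[:3]      # resulting tool-point error
```

Note how the angular errors couple with the 0.1 m tool offset: a 5 microradian pitch alone contributes half a micrometre of positional error at the tool point, which is why the angular parameters matter as much as the linear ones.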

  18. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Directory of Open Access Journals (Sweden)

    Wang Yong

    2011-10-01

Full Text Available Abstract Background With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs. Predicting protein-protein interactions from sequences is therefore in great demand, because identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for organisms that are barely characterized. Although a few methods have been proposed, the converse problem, namely whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results In this study, we interrogate this problem theoretically by an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit the information in protein sequences more fully and reduce artificial bias and computational cost. They thus significantly outperform the available methods in sensitivity, specificity, precision, and recall under cross-validation evaluation, reaching ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings hold important implications for other sequence-based prediction tasks, because the representation of a biological sequence is always the first step in computational biology. Conclusions By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.
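As a concrete, deliberately simple illustration of sequence-pair representation, the sketch below encodes each protein by its amino acid composition and builds a symmetric pair vector so that (A, B) and (B, A) map to the same features. This is a common baseline encoding, not the authors' method; the sequences are made up.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino acid composition of one protein sequence."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

def encode_pair(seq_a, seq_b):
    """Order-independent pair features: element-wise sum and absolute
    difference of the two composition vectors (40 dimensions total)."""
    ca, cb = composition(seq_a), composition(seq_b)
    return ([x + y for x, y in zip(ca, cb)] +
            [abs(x - y) for x, y in zip(ca, cb)])

features = encode_pair("MKTAYIAKQR", "MSDNELNNAVT")
```

The symmetry matters because an interaction dataset rarely fixes which protein is listed first; an asymmetric encoding would leak that arbitrary ordering into the classifier.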

  19. Effect of conductance linearity and multi-level cell characteristics of TaOx-based synapse device on pattern recognition accuracy of neuromorphic system

    Science.gov (United States)

    Sung, Changhyuck; Lim, Seokjae; Kim, Hyungjun; Kim, Taesu; Moon, Kibong; Song, Jeonghwan; Kim, Jae-Joon; Hwang, Hyunsang

    2018-03-01

    To improve the classification accuracy of an image data set (CIFAR-10) by using analog input voltage, synapse devices with excellent conductance linearity (CL) and multi-level cell (MLC) characteristics are required. We analyze the CL and MLC characteristics of TaOx-based filamentary resistive random access memory (RRAM) to implement the synapse device in neural network hardware. Our findings show that the number of oxygen vacancies in the filament constriction region of the RRAM directly controls the CL and MLC characteristics. By adopting a Ta electrode (instead of Ti) and the hot-forming step, we could form a dense conductive filament. As a result, a wide range of conductance levels with CL is achieved and significantly improved image classification accuracy is confirmed.

  20. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    Science.gov (United States)

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
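The impact of the reported timing error is easy to quantify from the defining relation s = ln(r/r0) / (omega^2 * t): a systematic error in elapsed time maps one-to-one into the sedimentation coefficient. The rotor speed and radial positions below are illustrative values, not from the study.

```python
import math

def sedimentation_coefficient(r0_cm, r_cm, rpm, t_s):
    """s = ln(r/r0) / (omega^2 * t); result in seconds (1 S = 1e-13 s)."""
    omega = 2.0 * math.pi * rpm / 60.0   # rotor angular velocity, rad/s
    return math.log(r_cm / r0_cm) / (omega**2 * t_s)

s_true = sedimentation_coefficient(6.0, 6.5, 50_000, 3600.0)
s_biased = sedimentation_coefficient(6.0, 6.5, 50_000, 3600.0 * 1.02)
print(s_true / 1e-13)    # about 8 Svedberg
print(s_biased / s_true) # about 0.98: a 2% time error gives a 2% low s
```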

  1. An Improved Information Value Model Based on Gray Clustering for Landslide Susceptibility Mapping

    Directory of Open Access Journals (Sweden)

    Qianqian Ba

    2017-01-01

Full Text Available Landslides, as geological hazards, cause significant casualties and economic losses. Therefore, it is necessary to identify areas prone to landslides for prevention work. This paper proposes an improved information value model based on gray clustering (IVM-GC) for landslide susceptibility mapping. This method uses the information value derived from an information value model to achieve susceptibility classification and weight determination of landslide predisposing factors and, hence, obtain the landslide susceptibility of each study unit based on the clustering analysis. Using a landslide inventory of Chongqing, China, which contains 8435 landslides, three landslide susceptibility maps were generated based on the common information value model (IVM), an information value model improved by an analytic hierarchy process (IVM-AHP) and our new improved model. Approximately 70% (5905) of the inventory landslides were used to generate the susceptibility maps, while the remaining 30% (2530) were used to validate the results. The training accuracies of the IVM, IVM-AHP and IVM-GC were 81.8%, 78.7% and 85.2%, respectively, and the prediction accuracies were 82.0%, 78.7% and 85.4%, respectively. The results demonstrate that all three methods perform well in evaluating landslide susceptibility. Among them, IVM-GC has the best performance.
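The information value at the heart of all three variants is computed per class of each predisposing factor as IV = ln((Nij/N)/(Sij/S)): the landslide density inside the class relative to the overall density. The slope-angle classes and their breakdown of an 8435-landslide inventory below are hypothetical; only the formula is the standard IVM one.

```python
import math

def information_value(landslides_in_class, total_landslides,
                      area_of_class, total_area):
    """IV = ln((Nij/N) / (Sij/S)); positive values favour landslides."""
    return math.log((landslides_in_class / total_landslides) /
                    (area_of_class / total_area))

# Hypothetical slope-angle classes (landslide counts; areas in km^2).
classes = [
    {"name": "0-15 deg",  "landslides": 500,  "area": 4000},
    {"name": "15-30 deg", "landslides": 4000, "area": 3000},
    {"name": ">30 deg",   "landslides": 3935, "area": 1000},
]
N = sum(c["landslides"] for c in classes)
S = sum(c["area"] for c in classes)
for c in classes:
    c["iv"] = information_value(c["landslides"], N, c["area"], S)
    print(c["name"], round(c["iv"], 3))
```

Summing the IVs of a study unit's classes over all factors (weighted, in the improved variants) gives the susceptibility score that is then mapped.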

  2. Improvement in the Accuracy of Flux Measurement of Radio Sources by Exploiting an Arithmetic Pattern in Photon Bunching Noise

    Energy Technology Data Exchange (ETDEWEB)

    Lieu, Richard [Department of Physics, University of Alabama, Huntsville, AL 35899 (United States)

    2017-07-20

    A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.

  3. Improvement in the accuracy of flux measurement of radio sources by exploiting an arithmetic pattern in photon bunching noise

    Science.gov (United States)

    Lieu, Richard

    2018-01-01

    A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.

  4. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    Science.gov (United States)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information of the sun's position, either by direct sensing or by GPS. Our method uses the shading of the receiver. This, coupled with nonimaging optics design allows us to achieve ultra-high concentration. Incorporating a sensor based shadow tracking method with a two stage concentration solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.

  5. An Improved Seeding Algorithm of Magnetic Flux Lines Based on Data in 3D Space

    Directory of Open Access Journals (Sweden)

    Jia Zhong

    2015-05-01

Full Text Available This paper proposes an approach to increase the accuracy and efficiency of seeding algorithms for magnetic flux lines in magnetic field visualization. To obtain accurate and reliable visualization results, the density of the magnetic flux lines should map the magnetic induction intensity, and the seed points determine the density of the flux lines. However, the traditional seeding algorithm, a statistical algorithm based on data, produces errors when computing magnetic flux through subdivision of the plane; achieving higher accuracy requires more subdivisions, which reduces efficiency. This paper analyzes the errors made by the traditional seeding algorithm and gives an improved algorithm. It then validates the accuracy and efficiency of the improved algorithm by comparing the results of the two algorithms with results from the equivalent magnetic flux algorithm.

  6. Systematic review of discharge coding accuracy

    Science.gov (United States)

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnoses accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3), P= 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  7. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification technology is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction, and also reduce the time of fingerprint preprocessing, which has a great significance in improving the performance of the whole system. Based on the analysis of the commonly used methods of fingerprint segmentation, the existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image. Additionally, it is improved by using the global optimization of the improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  8. A systematic review on diagnostic accuracy of CT-based detection of significant coronary artery disease

    International Nuclear Information System (INIS)

    Janne d'Othee, Bertrand; Siebert, Uwe; Cury, Ricardo; Jadvar, Hossein; Dunn, Edward J.; Hoffmann, Udo

    2008-01-01

    Objectives: Systematic review of diagnostic accuracy of contrast enhanced coronary computed tomography (CE-CCT). Background: Noninvasive detection of coronary artery stenosis (CAS) by CE-CCT as an alternative to catheter-based coronary angiography (CCA) may improve patient management. Methods: Forty-one articles published between 1997 and 2006 were included that evaluated native coronary arteries for significant stenosis and used CE-CCT as diagnostic test and CCA as reference standard. Study group characteristics, study methodology and diagnostic outcomes were extracted. Pooled summary sensitivity and specificity of CE-CCT were calculated using a random effects model (1) for all coronary segments, (2) assessable segments, and (3) per patient. Results: The 41 studies totaled 2515 patients (75% males; mean age: 59 years, CAS prevalence: 59%). Analysis of all coronary segments yielded a sensitivity of 95% (80%, 89%, 86%, 98% for electron beam CT, 4/8-slice, 16-slice and 64-slice MDCT, respectively) for a specificity of 85% (77%, 84%, 95%, 91%). Analysis limited to segments deemed assessable by CT showed sensitivity of 96% (86%, 85%, 98%, 97%) for a specificity of 95% (90%, 96%, 96%, 96%). Per patient, sensitivity was 99% (90%, 97%, 99%, 98%) and specificity was 76% (59%, 81%, 83%, 92%). Heterogeneity was quantitatively important but not explainable by patient group characteristics or study methodology. Conclusions: Current diagnostic accuracy of CE-CCT is high. Advances in CT technology have resulted in increases in diagnostic accuracy and proportion of assessable coronary segments. However, per patient, accuracy may be lower and CT may have more limited clinical utility in populations at high risk for CAD

  9. Analysis of Correlation in MEMS Gyroscope Array and its Influence on Accuracy Improvement for the Combined Angular Rate Signal

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-01-01

Full Text Available Obtaining a correlation factor is a prerequisite for fusing the multiple outputs of a microelectromechanical system (MEMS) gyroscope array and evaluating the accuracy improvement. In this paper, a mathematical statistics method is established to analyze and obtain the practical correlation factor of a MEMS gyroscope array, which solves the problem of determining the Kalman filter (KF) covariance matrix Q and fusing the multiple gyroscope signals. The working principle and mathematical model of the sensor array fusion are briefly described, and an optimal estimate of the input rate signal is then achieved by using a steady-state KF gain in an off-line estimation approach. Both theoretical analysis and simulation show that a negative correlation factor has a favorable influence on accuracy improvement. Additionally, a four-gyro array system composed of four discrete individual gyroscopes was developed to test the correlation factor and its influence on KF accuracy improvement. The results showed that correlation factors have both positive and negative values; in particular, the correlation factors differ between the different units in the array. The test results also indicated that the Angular Random Walk (ARW) of 1.57°/h^0.5 and bias drift of 224.2°/h for a single gyroscope were reduced to 0.33°/h^0.5 and 47.8°/h with some negative correlation factors present in the gyroscope array, giving a noise reduction factor of about 4.7, which is higher than that of an uncorrelated four-gyro array. The overall accuracy of the combined angular rate signal can be further improved if the negative correlation factors in the gyroscope array become larger.
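The effect of the correlation factor on the fused output can be checked with a quick Monte Carlo sketch (simple averaging rather than the paper's Kalman filter; the correlation value is illustrative). For an n-gyro array with common pairwise correlation rho, the variance of the average is (1 + (n-1)*rho)/n, so negative rho beats the uncorrelated 1/sqrt(n) noise reduction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_gyros, n_samples = 4, 200_000

def fused_noise_std(rho):
    """Std of the averaged output of n_gyros with pairwise correlation rho."""
    cov = np.full((n_gyros, n_gyros), rho) + (1.0 - rho) * np.eye(n_gyros)
    noise = rng.multivariate_normal(np.zeros(n_gyros), cov, n_samples)
    return noise.mean(axis=1).std()

uncorrelated = fused_noise_std(0.0)    # about 1/sqrt(4) = 0.5
negative_rho = fused_noise_std(-0.3)   # well below 0.5
print(uncorrelated, negative_rho)
```

With rho = -0.3 the predicted fused std is sqrt((1 + 3*(-0.3))/4) ≈ 0.16, a reduction factor of about 6 versus 2 for the uncorrelated array, which mirrors the paper's observation that negative correlation helps.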

  10. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
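The error metrics used in the study, absolute estimation error and the proportion of estimates within a threshold, can be reproduced in a few lines. The five transport times below are made-up illustrations, not data from the study.

```python
import numpy as np

# Hypothetical observed vs. estimated transport times, in minutes.
observed = np.array([12.0, 8.5, 20.0, 15.0, 6.0])
estimated = np.array([10.5, 9.0, 14.0, 16.5, 6.5])

abs_error = np.abs(observed - estimated)
mean_abs_error = abs_error.mean()          # mean absolute error
within_5_min = (abs_error <= 5.0).mean()   # share of estimates within 5 min
print(mean_abs_error, within_5_min)
```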

  11. New technology in dietary assessment: a review of digital methods in improving food record accuracy.

    Science.gov (United States)

    Stumbo, Phyllis J

    2013-02-01

    Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.

  12. Combining sequence-based prediction methods and circular dichroism and infrared spectroscopic data to improve protein secondary structure determinations

    Directory of Open Access Journals (Sweden)

    Lees Jonathan G

    2008-01-01

Full Text Available Abstract Background A number of sequence-based methods exist for protein secondary structure prediction. Protein secondary structures can also be determined experimentally from circular dichroism and infrared spectroscopic data using empirical analysis methods. It has been proposed that comparable accuracy can be obtained from sequence-based predictions as from these biophysical measurements. Here we have examined the secondary structure determination accuracies of sequence prediction methods against the empirically determined values from spectroscopic data, on datasets of proteins for which both crystal structures and spectroscopic data are available. Results In this study we show that the sequence prediction methods have accuracies nearly comparable to those of the spectroscopic methods. However, we also demonstrate that combining the spectroscopic and sequence-based techniques produces significant overall improvements in secondary structure determinations. In addition, combining the extra information content available from synchrotron radiation circular dichroism data with sequence methods also shows improvements. Conclusion Combining sequence prediction with experimentally determined spectroscopic methods for protein secondary structure content significantly enhances the accuracy of the overall results obtained.

  13. Accuracy Improvement of Discharge Measurement with Modification of Distance Made Good Heading

    Directory of Open Access Journals (Sweden)

    Jongkook Lee

    2016-01-01

    Full Text Available Remote control boats equipped with an Acoustic Doppler Current Profiler (ADCP) are widely accepted and have been welcomed by many hydrologists for water discharge, velocity profile, and bathymetry measurements. The advantages of this technique include high productivity, fast measurements, operator safety, and high accuracy. However, there are concerns about controlling and operating a remote boat to achieve measurement goals, especially during extreme events such as floods. When performing river discharge measurements, the main error source stems from the boat path. Because of the rapid flow in flood conditions, the boat path is not regular, and this can cause errors in discharge measurements. Improving discharge measurements therefore requires modification of the boat path. With this approach, measurement errors in flood flow conditions, which were 12.3–21.8% before modification of the boat path, fell to 1.2–3.7% after the DMG modification, and the modified discharges are very close to the observed discharges in flood flow conditions. In this study, through the distance made good (DMG) modification of the boat path, comprehensive discharge measurements with high accuracy were achieved.
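
    The core of a distance-made-good correction is projecting a wandering boat track onto the intended straight-line course, so that lateral excursions do not inflate the traversed distance. This is an illustrative sketch under simplifying assumptions (planar coordinates, straight intended course); it is not the paper's implementation.

```python
# Illustrative sketch: compute the "distance made good" (DMG) of a boat
# track by projecting each GPS fix onto the straight line from the start
# point to the intended end point of the crossing.
import math

def distance_made_good(track, start, end):
    """Return the along-course distance of each fix in `track`,
    measured along the straight start->end course line."""
    cx, cy = end[0] - start[0], end[1] - start[1]
    course_len = math.hypot(cx, cy)
    ux, uy = cx / course_len, cy / course_len  # unit vector along course
    return [(p[0] - start[0]) * ux + (p[1] - start[1]) * uy for p in track]

# A zig-zagging crossing: lateral offsets do not contribute to DMG
track = [(0, 0), (10, 4), (20, -3), (30, 0)]
dmg = distance_made_good(track, start=(0, 0), end=(30, 0))
print(dmg)  # [0.0, 10.0, 20.0, 30.0]
```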

  14. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Science.gov (United States)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid were tested by comparing it to uniform 1 km and 5 km grids, which represent an accurate and a computationally efficient grid respectively. For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August.
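
    The adaptive-grid rule can be sketched as a lookup from receiver range to cell size: cells far from the receiver, where propagation loss is nearly uniform across a cell, can be coarse. The threshold distances and cell sizes below are illustrative assumptions, not the values derived in the study.

```python
# Hedged sketch of the adaptive-grid idea: choose a coarser source-grid
# cell the farther the cell lies from the receiver.

def cell_size_km(distance_km, thresholds=((20, 1), (60, 2), (float("inf"), 5))):
    """Return the grid cell size (km) to use at a given range from the
    receiver; nearer cells get finer resolution."""
    for max_dist, size in thresholds:
        if distance_km <= max_dist:
            return size
    raise ValueError("thresholds must cover all distances")

print([cell_size_km(d) for d in (5, 30, 100)])  # [1, 2, 5]
```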

  15. Content in Context Improves Deception Detection Accuracy

    Science.gov (United States)

    Blair, J. Pete; Levine, Timothy R.; Shaw, Allison S.

    2010-01-01

    Past research has shown that people are only slightly better than chance at distinguishing truths from lies. Higher accuracy rates, however, are possible when contextual knowledge is used to judge the veracity of situated message content. The utility of content in context was shown in a series of experiments with students (N = 26, 45, 51, 25, 127)…

  16. Solving the stability-accuracy-diversity dilemma of recommender systems

    Science.gov (United States)

    Hou, Lei; Liu, Kecheng; Liu, Jianguo; Zhang, Runtong

    2017-02-01

    Recommender systems are of great significance in predicting potentially interesting items based on the target user's historical selections. However, the recommendation list for a specific user has been found to change vastly when the system changes, owing to the unstable quantification of item similarities, which is defined as the recommendation stability problem. Improving similarity stability and recommendation stability is crucial for enhancing the user experience and for a better understanding of user interests. While the stability as well as the accuracy of recommendation could be guaranteed by recommending only popular items, studies have addressed the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted as TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm significantly improves recommendation stability and accuracy simultaneously while still retaining the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The results suggest that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity dilemma of recommender systems.
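
    The stability-ranking step can be illustrated as follows: given item similarities computed from two snapshots of the system, rank item pairs by how little their similarity drifted and keep only the n most stable pairs. This is a toy sketch of that one step under assumed inputs; the actual TNS-HC method combines it with Heat Conduction scoring, which is not reproduced here.

```python
# Illustrative sketch of the "top-n-stability" idea (not the authors'
# code): keep the n item pairs whose similarity changed least between
# two snapshots of the system.

def top_n_stable(sim_old, sim_new, n):
    """sim_old / sim_new map item pairs to similarity scores; return the
    n pairs whose scores changed least between snapshots."""
    drift = {pair: abs(sim_new[pair] - sim_old[pair]) for pair in sim_old}
    return sorted(drift, key=drift.get)[:n]

sim_old = {("a", "b"): 0.9, ("a", "c"): 0.2, ("b", "c"): 0.5}
sim_new = {("a", "b"): 0.88, ("a", "c"): 0.6, ("b", "c"): 0.55}
print(top_n_stable(sim_old, sim_new, 2))  # [('a', 'b'), ('b', 'c')]
```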

  17. METHODS OF DISTANCE MEASUREMENT’S ACCURACY INCREASING BASED ON THE CORRELATION ANALYSIS OF STEREO IMAGES

    Directory of Open Access Journals (Sweden)

    V. L. Kozlov

    2018-01-01

    Full Text Available To solve the problem of increasing the accuracy of restoring a three-dimensional picture of a scene from two-dimensional digital images, it is necessary to use new, effective techniques and algorithms for the processing and correlation analysis of digital images. Tools are being actively developed that reduce the time cost of processing stereo images, improve the quality of depth-map construction, and automate that construction. The aim of this work is to investigate the possibilities of using various digital image processing techniques to improve the measurement accuracy of a rangefinder based on correlation analysis of a stereo image. The results of studies of the influence of colour-channel mixing techniques on distance measurement accuracy are presented for various functions realizing the correlation processing of images. Studies analysing the possibility of using an integral representation of images to reduce the time cost of constructing a depth map are presented, as are studies of the possibility of prefiltering images before correlation processing when measuring distance by stereo imaging. It is found that uniform mixing of channels minimizes the total number of measurement errors, whereas brightness extraction according to the sRGB standard increases the number of errors for all of the considered correlation processing techniques. An integral representation of the image makes it possible to accelerate the correlation processing, but this method is useful for depth-map calculation only in images of no more than 0.5 megapixels. Filtering images before correlation processing can provide, depending on the filter parameters, either an increase in the correlation function value, which is useful for analysing noisy images, or compression of the correlation function.
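
    The two ingredients the abstract discusses — uniform channel mixing and correlation matching — can be sketched in one dimension: convert RGB scanlines to grayscale with equal channel weights, then find the disparity that maximizes normalized cross-correlation (NCC) between a left-image window and shifted right-image windows. This is a minimal sketch under those assumptions, not the paper's implementation; the scanline data are made up.

```python
# Minimal 1-D sketch of correlation-based disparity estimation with
# uniform channel mixing (the mixing the study found minimized errors).

def to_gray(rgb_row):
    return [(r + g + b) / 3.0 for r, g, b in rgb_row]  # uniform mixing

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def best_disparity(left, right, x, win, max_d):
    """Return the shift of the right scanline that best matches the
    left-image window starting at x."""
    patch = left[x:x + win]
    scores = {d: ncc(patch, right[x - d:x - d + win]) for d in range(max_d + 1)}
    return max(scores, key=scores.get)

left = to_gray([(0, 0, 0), (0, 0, 0), (9, 10, 11), (19, 20, 21),
                (9, 10, 11), (0, 0, 0), (0, 0, 0), (0, 0, 0)])
right = to_gray([(0, 0, 0), (9, 10, 11), (19, 20, 21), (9, 10, 11),
                 (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)])
print(best_disparity(left, right, x=2, win=3, max_d=2))  # 1
```

    In a real rangefinder, the disparity found this way is converted to distance via the baseline and focal length of the stereo rig.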

  18. Interlaboratory diagnostic accuracy of a Salmonella specific PCR-based method

    DEFF Research Database (Denmark)

    Malorny, B.; Hoorfar, Jeffrey; Hugas, M.

    2003-01-01

    A collaborative study involving four European laboratories was conducted to investigate the diagnostic accuracy of a Salmonella-specific PCR-based method, which was evaluated within the European FOOD-PCR project (http://www.pcr.dk). Each laboratory analysed by PCR a set of independently obtained...... presumably naturally contaminated samples and compared the results with the microbiological culture method. The PCR-based method comprised a preenrichment step in buffered peptone water followed by a thermal cell lysis using a closed tube resin-based method. Artificially contaminated minced beef and whole......-based diagnostic methods and is currently proposed as an international standard document....

  19. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees.

    Science.gov (United States)

    Keller, Alexander; Förster, Frank; Müller, Tobias; Dandekar, Thomas; Schultz, Jörg; Wolf, Matthias

    2010-01-15

    In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergence levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. Simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence-only data, whereas a doubled marker size only accounts for robustness. Individual secondary structures of ribosomal RNA sequences provide a valuable gain of information content that is useful for phylogenetics. Thus, the usage of ITS2 sequence together with secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  20. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    International Nuclear Information System (INIS)

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig; Riedl, Christopher C.; Ishill, Nicole M.; Moskowitz, Chaya S.

    2010-01-01

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)

  1. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Riedl, Christopher C. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Medical University of Vienna, Department of Radiology, Vienna (Austria); Ishill, Nicole M.; Moskowitz, Chaya S. [Memorial Sloan-Kettering Cancer Center, Epidemiology and Biostatistics, New York, NY (United States)

    2010-04-15

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)

  2. Fast Rotation-Free Feature-Based Image Registration Using Improved N-SIFT and GMM-Based Parallel Optimization.

    Science.gov (United States)

    Yu, Dongdong; Yang, Feng; Yang, Caiyun; Leng, Chengcai; Cao, Jian; Wang, Yining; Tian, Jie

    2016-08-01

    Image registration is a key problem in a variety of applications, such as computer vision, medical image processing, and pattern recognition, but its application is limited by time consumption and by reduced accuracy in the case of large pose differences. Aiming at these two problems, we propose a fast rotation-free feature-based rigid registration method based on our proposed accelerated-NSIFT and GMM registration-based parallel optimization (PO-GMMREG). Our method is accelerated by using GPU/CUDA programming and by preserving only the location information without constructing a descriptor for each interest point, while its robustness to missing correspondences and outliers is improved by converting interest point matching to Gaussian mixture model alignment. The accuracy in the case of large pose differences is addressed by our proposed PO-GMMREG algorithm, which constructs a set of initial transformations. Experimental results demonstrate that our proposed algorithm can rapidly perform rigid registration of 3-D medical images and is reliable for aligning 3-D scans even when they exhibit a poor initialization.

  3. An improved recommended algorithm for network structure based on two partial graphs

    Directory of Open Access Journals (Sweden)

    Deng Song

    2017-08-01

    Full Text Available In this thesis, we introduce an improved recommendation algorithm based on network structure. Building on the standard material diffusion algorithm and considering the influence of users' scores on the recommendation, we improve the adjustment factor of the initial resource allocation vector and the resource transfer matrix of the recommendation algorithm. Using a practical data set from the GroupLens website to evaluate the performance of the proposed algorithm, we performed a series of experiments. The experimental results reveal that it yields better recommendation accuracy and a higher hitting rate than collaborative filtering and network-based inference, solves the cold-start and scalability problems of the standard material diffusion algorithm, and also diversifies the recommendation results.
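
    The standard material diffusion (ProbS) baseline the abstract builds on works on the user-item bipartite graph: resource starts on the target user's items, spreads to the users who collected them, then back to items. The sketch below shows that baseline only, on a made-up toy data set; the paper's rating-based adjustment factors are not reproduced.

```python
# Hedged sketch of standard material diffusion (ProbS) recommendation.
# adj: dict mapping user -> set of collected items.

def material_diffusion(adj, target_user):
    items = set().union(*adj.values())
    degree_item = {i: sum(i in s for s in adj.values()) for i in items}
    # step 1: each of the target's items holds 1 unit of resource and
    # splits it equally among the users who collected it
    user_res = {u: sum(1.0 / degree_item[i] for i in adj[u] & adj[target_user])
                for u in adj}
    # step 2: each user splits their resource equally over their items
    item_res = {i: 0.0 for i in items}
    for u, res in user_res.items():
        for i in adj[u]:
            item_res[i] += res / len(adj[u])
    # recommend uncollected items, highest resource first
    return sorted((i for i in items if i not in adj[target_user]),
                  key=lambda i: -item_res[i])

adj = {"u1": {"a", "b"}, "u2": {"a", "c"}, "u3": {"b", "c", "d"}}
print(material_diffusion(adj, "u1"))  # ['c', 'd']
```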

  4. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Science.gov (United States)

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  5. Geometric modeling in the problem of ball bearing accuracy

    Science.gov (United States)

    Glukhov, V. I.; Pushkarev, V. V.; Khomchenko, V. G.

    2017-06-01

    The manufacturing quality of ball bearings is an urgent problem for the machine-building industry. The aim of the research is to improve the accuracy of the geometric specifications of bearings, based on an evidence-based systematic approach and a method for adequately modeling the size, location and form deviations of the rings and of assembled ball bearings. The present work addresses the identification and study of these bearing geometric specifications, among them the deviation from the plane of symmetry of the rings and the bearing assembly, and the mounting width. A systematic approach to normalizing the values and tolerances of the geometric specifications of ball bearings in coordinate systems will improve the quality of bearings by optimizing and minimizing the number of specifications. The introduction of this systematic approach into the international standards on rolling bearings would guarantee a significant increase in the accuracy of bearings and in the quality of the products in which they are applied.

  6. Attitude Modeling Using Kalman Filter Approach for Improving the Geometric Accuracy of Cartosat-1 Data Products

    Directory of Open Access Journals (Sweden)

    Nita H. SHAH

    2010-07-01

    Full Text Available This paper deals with the rigorous photogrammetric solution to model the uncertainty in the orientation parameters of the Indian Remote Sensing Satellite IRS-P5 (Cartosat-1). Cartosat-1 is a three-axis stabilized spacecraft launched into a polar sun-synchronous circular orbit at an altitude of 618 km. The satellite has two panchromatic (PAN) cameras with a nominal resolution of ~2.5 m. The camera looking ahead, called FORE, is mounted at a +26 deg angle, and the other, looking near nadir, is called AFT and is mounted at -5 deg in the along-track direction. The Data Product Generation Software (DPGS) system uses the rigorous photogrammetric collinearity model in order to utilize the full system information, together with payload geometry and control points, for estimating the uncertainty in attitude parameters. The initial orbit and attitude knowledge is obtained from GPS-bound orbit measurement, star trackers and gyros. The variations in satellite attitude with time are modelled using a simple linear polynomial model. Also, based on this model, a Kalman filter approach is studied and applied to reduce the uncertainty in the orientation of the spacecraft using high-quality ground control points (GCPs). The sequential estimator (Kalman filter) is used in an iterative process which corrects the parameters at each time of observation rather than at epoch time. Results are presented for three stereo data sets. The accuracy of the model depends on the accuracy of the control points.
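
    The sequential-estimation idea — correcting the parameter estimate at each observation time rather than only at epoch — can be sketched with a scalar Kalman filter refining a constant attitude bias from noisy ground-control-point residuals. This is a simplified illustration, not the DPGS implementation; the noise variances and residual values are made up.

```python
# Simplified scalar Kalman filter: sequentially refine a constant
# attitude-bias estimate from noisy GCP residuals, updating at each
# observation rather than only at epoch time.

def kalman_updates(observations, x0=0.0, p0=1.0, r=0.04):
    """Return the state estimate after each measurement update.
    x0/p0: prior mean and variance; r: measurement noise variance."""
    x, p = x0, p0
    history = []
    for z in observations:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # state update toward the measurement
        p = (1 - k) * p          # variance shrinks with each update
        history.append(x)
    return history

# Noisy residuals around a true bias of ~0.5 (illustrative units)
estimates = kalman_updates([0.45, 0.55, 0.52, 0.49])
print(estimates[-1])  # converges near 0.5
```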

  7. An Investigation to Improve Classifier Accuracy for Myo Collected Data

    Science.gov (United States)

    2017-02-01

    Table of contents fragment: Bad Samples Effect on Classification Accuracy — Naïve Bayes (NB) Classifier Accuracy; Logistic Model Tree (LMT); K-Nearest Neighbor. Appendix figures: come gesture, pitch feature (users 06 and 14); all samples exhibit reversed movement.

  8. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    Science.gov (United States)

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive value, and kappa agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
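
    The validation metrics used here — sensitivity of administrative coding against chart review, and kappa for chance-corrected agreement — can be sketched for a single binary co-morbidity. The toy labels below are made up for illustration; they are not from the study.

```python
# Illustrative sketch: sensitivity and Cohen's kappa for one binary
# co-morbidity, with chart review as the reference standard.

def sensitivity(reference, coded):
    tp = sum(r and c for r, c in zip(reference, coded))
    fn = sum(r and not c for r, c in zip(reference, coded))
    return tp / (tp + fn)

def cohens_kappa(reference, coded):
    n = len(reference)
    po = sum(r == c for r, c in zip(reference, coded)) / n  # observed
    p_yes = (sum(reference) / n) * (sum(coded) / n)
    p_no = (1 - sum(reference) / n) * (1 - sum(coded) / n)
    pe = p_yes + p_no                                       # by chance
    return (po - pe) / (1 - pe)

chart = [1, 1, 1, 1, 0, 0, 0, 0]   # co-morbidity present per chart review
admin = [1, 1, 0, 0, 0, 0, 0, 1]   # co-morbidity coded in ICD-10 data
print(sensitivity(chart, admin), cohens_kappa(chart, admin))  # 0.5 0.25
```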

  9. Accuracy of an improved device for remote measuring of tree-trunk diameters

    International Nuclear Information System (INIS)

    Matsushita, T.; Kato, S.; Komiyama, A.

    2000-01-01

    For measuring the diameters of tree trunks from a distant position, a recent device using a laser beam was developed by Kantou. We improved this device to serve our own practical purposes. The improved device consists of a 1-m-long metal caliper and a small telescope that slides smoothly along it. Using the cross hairs in the scope, one can sight both edges of an object against the caliper and calculate its length. The laser beam is used just for guiding the telescopic sights to the correct positions on the object. In this study, the accuracy of this new device was examined while varying the length of the measured object, the distance to the object, and the angle of elevation to the object. Since every measurement in the experiment showed an absolute error of less than 3 mm, this new device should be suitable for the measurement of trunk diameters in the field.

  10. Incorporation of unique molecular identifiers in TruSeq adapters improves the accuracy of quantitative sequencing.

    Science.gov (United States)

    Hong, Jungeui; Gresham, David

    2017-11-01

    Quantitative analysis of next-generation sequencing (NGS) data requires discriminating duplicate reads generated by PCR from identical molecules that are of unique origin. Typically, PCR duplicates are identified as sequence reads that align to the same genomic coordinates using reference-based alignment. However, identical molecules can be independently generated during library preparation. Misidentification of these molecules as PCR duplicates can introduce unforeseen biases during analyses. Here, we developed a cost-effective sequencing adapter design by modifying Illumina TruSeq adapters to incorporate a unique molecular identifier (UMI) while maintaining the capacity to undertake multiplexed, single-index sequencing. Incorporation of UMIs into TruSeq adapters (TrUMIseq adapters) enables identification of bona fide PCR duplicates as identically mapped reads with identical UMIs. Using TrUMIseq adapters, we show that accurate removal of PCR duplicates results in improved accuracy of both allele frequency (AF) estimation in heterogeneous populations using DNA sequencing and gene expression quantification using RNA-Seq.
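
    The abstract's criterion for a PCR duplicate — identical mapping coordinates AND identical UMI — can be sketched directly: reads that map to the same place but carry different UMIs are retained as independent molecules. This is a minimal sketch of the concept; real pipelines additionally cluster near-identical UMIs to tolerate sequencing errors in the UMI itself, which is omitted here.

```python
# Hedged sketch of UMI-aware duplicate marking: reads are PCR duplicates
# only if they share BOTH mapping coordinates and the adapter UMI.
# Read tuples are (chrom, pos, strand, umi).

def dedup(reads):
    seen, unique = set(), []
    for chrom, pos, strand, umi in reads:
        key = (chrom, pos, strand, umi)
        if key not in seen:
            seen.add(key)
            unique.append(key)
    return unique

reads = [
    ("chr1", 100, "+", "ACGT"),
    ("chr1", 100, "+", "ACGT"),   # PCR duplicate: same position, same UMI
    ("chr1", 100, "+", "TTAG"),   # independent molecule, kept
    ("chr2", 500, "-", "ACGT"),
]
print(len(dedup(reads)))  # 3
```

    Coordinate-only deduplication would have discarded the third read and biased downstream allele-frequency or expression estimates, which is exactly the misidentification the abstract describes.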

  11. Iterative metal artifact reduction improves dose calculation accuracy. Phantom study with dental implants

    Energy Technology Data Exchange (ETDEWEB)

    Maerz, Manuel; Mittermair, Pia; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany); Krauss, Andreas [Siemens Healthcare GmbH, Forchheim (Germany)

    2016-06-15

    Metallic dental implants cause severe streaking artifacts in computed tomography (CT) data, which affect the accuracy of dose calculations in radiation therapy. The aim of this study was to investigate the benefit of the metal artifact reduction algorithm iterative metal artifact reduction (iMAR) in terms of correct representation of Hounsfield units (HU) and dose calculation accuracy. Heterogeneous phantoms consisting of different types of tissue equivalent material surrounding metallic dental implants were designed. Artifact-containing CT data of the phantoms were corrected using iMAR. Corrected and uncorrected CT data were compared to synthetic CT data to evaluate accuracy of HU reproduction. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated in Oncentra v4.3 on corrected and uncorrected CT data and compared to Gafchromic™ EBT3 films to assess accuracy of dose calculation. The use of iMAR increased the accuracy of HU reproduction. The average deviation of HU decreased from 1006 HU to 408 HU in areas including metal and from 283 HU to 33 HU in tissue areas excluding metal. Dose calculation accuracy could be significantly improved for all phantoms and plans: The mean passing rate for gamma evaluation with 3 % dose tolerance and 3 mm distance to agreement increased from 90.6 % to 96.2 % if artifacts were corrected by iMAR. The application of iMAR allows metal artifacts to be removed to a great extent, which leads to a significant increase in dose calculation accuracy. (orig.) [Translated from German:] Metallic implants cause streak artifacts in CT images, which affect dose calculation. This study investigates the benefit of the iterative metal artifact reduction algorithm iMAR with regard to the fidelity of Hounsfield units (HU) and the accuracy of dose calculations. Heterogeneous phantoms made of different types of tissue-equivalent material with

  12. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort.

    Science.gov (United States)

    Mah, K; Danjoux, C E; Manship, S; Makhani, N; Cardoso, M; Sixel, K E

    1998-07-15

    To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac is enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  13. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort

    International Nuclear Information System (INIS)

    Mah, Katherine; Danjoux, Cyril E.; Manship, Sharan; Makhani, Nadiya; Cardoso, Marlene; Sixel, Katharina E.

    1998-01-01

    Purpose: To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. Methods and Materials: A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac are enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Results: Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Conclusions: Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall

  14. Using quality scores and longer reads improves accuracy of Solexa read mapping

    Directory of Open Access Journals (Sweden)

    Xuan Zhenyu

    2008-02-01

    Full Text Available Abstract Background Second-generation sequencing has the potential to revolutionize genomics and impact all areas of biomedical science. New technologies will make re-sequencing widely available for such applications as identifying genome variations or interrogating the oligonucleotide content of a large sample (e.g. ChIP-sequencing). The increase in speed, sensitivity and availability of sequencing technology brings demand for advances in computational technology to perform associated analysis tasks. The Solexa/Illumina 1G sequencer can produce tens of millions of reads, ranging in length from ~25–50 nt, in a single experiment. Accurately mapping the reads back to a reference genome is a critical task in almost all applications. Two sources of information that are often ignored when mapping reads from the Solexa technology are the 3' ends of longer reads, which contain a much higher frequency of sequencing errors, and the base-call quality scores. Results To investigate whether these sources of information can be used to improve accuracy when mapping reads, we developed the RMAP tool, which can map reads having a wide range of lengths and allows base-call quality scores to determine which positions in each read are more important when mapping. We applied RMAP to analyze data re-sequenced from two human BAC regions for varying read lengths, and varying criteria for use of quality scores. RMAP is freely available for downloading at http://rulai.cshl.edu/rmap/. Conclusion Our results indicate that significant gains in Solexa read mapping performance can be achieved by considering the information in 3' ends of longer reads, and appropriately using the base-call quality scores. The RMAP tool we have developed will enable researchers to effectively exploit this information in targeted re-sequencing projects.
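
    The quality-score idea above can be sketched in a few lines (a hypothetical simplification, not RMAP's actual algorithm): positions with low Phred quality, typically the error-prone 3' bases, are excluded from the mismatch count when scoring candidate alignments, so noisy tails do not dominate the mapping decision.

```python
def weighted_mismatches(read, quals, ref_window, qual_cutoff=20):
    """Count mismatches, ignoring positions whose Phred quality is low."""
    score = 0.0
    for base, q, ref in zip(read, quals, ref_window):
        if q < qual_cutoff:      # low-confidence base call: skip it
            continue
        if base != ref:
            score += 1.0
    return score

def map_read(read, quals, reference, max_mismatches=2):
    """Return the best (position, score) over all reference windows."""
    best = (None, float("inf"))
    for pos in range(len(reference) - len(read) + 1):
        s = weighted_mismatches(read, quals, reference[pos:pos + len(read)])
        if s < best[1]:
            best = (pos, s)
    return best if best[1] <= max_mismatches else (None, None)

reference = "ACGTACGTTTGACCA"
read = "ACGTTTGA"
quals = [30, 30, 30, 30, 30, 30, 5, 5]   # noisy 3' end
print(map_read(read, quals, reference))
```

    A real mapper would index the reference rather than scan it, but the scoring logic is the part the abstract is about.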

  15. Improved accuracy in estimation of left ventricular function parameters from QGS software with Tc-99m tetrofosmin gated-SPECT. A multivariate analysis

    International Nuclear Information System (INIS)

    Okizaki, Atsutaka; Shuke, Noriyuki; Sato, Junichi; Ishikawa, Yukio; Yamamoto, Wakako; Kikuchi, Kenjiro; Aburano, Tamio

    2003-01-01

    The purpose of this study was to verify whether the accuracy of left ventricular parameters related to left ventricular function from gated-SPECT improved or not, using multivariate analysis. Ninety-six patients with cardiovascular diseases were studied. Gated-SPECT with the quantitative gated SPECT (QGS) software and left ventriculography (LVG) were performed to obtain left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV). Then, multivariate analyses were performed to determine empirical formulas for predicting these parameters. The calculated values of left ventricular parameters were compared with those obtained directly from the QGS software and LVG. Multivariate analyses were able to improve accuracy in estimation of LVEF, EDV and ESV. Statistically significant improvement was seen in LVEF (from r=0.6965 to r=0.8093, p<0.05). Although not statistically significant, improvements in correlation coefficients were seen in EDV (from r=0.7199 to r=0.7595, p=0.2750) and ESV (from r=0.5694 to r=0.5871, p=0.4281). The empirical equations with multivariate analysis improved the accuracy in estimating LVEF from gated-SPECT with the QGS software. (author)
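
    The kind of empirical correction described above can be illustrated with an ordinary least-squares fit (the numbers below are made up, not the study's data): the reference LVG values are regressed on the QGS outputs, and the fitted formula is then applied to new QGS measurements.

```python
import numpy as np

# Hypothetical training data: columns = QGS LVEF (%), EDV (ml), ESV (ml);
# target = LVEF measured by left ventriculography (LVG).
qgs = np.array([[45., 120., 66.],
                [60., 100., 40.],
                [35., 150., 98.],
                [55.,  90., 50.]])
lvg_ef = np.array([50., 63., 38., 58.])

X = np.column_stack([np.ones(len(qgs)), qgs])       # add intercept column
coef, *_ = np.linalg.lstsq(X, lvg_ef, rcond=None)   # multivariate fit

def corrected_ef(qgs_ef, edv, esv):
    """Apply the fitted empirical formula to a new QGS measurement."""
    return float(coef @ np.array([1.0, qgs_ef, edv, esv]))
```

    In practice the fit would use all 96 patients and be validated on held-out cases; with only four rows here the model interpolates the toy data exactly.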

  16. Improvement of diagnostic accuracy, and clinical evaluation of computed tomography and ultrasonography for deep seated cancers

    International Nuclear Information System (INIS)

    Arimizu, Noboru

    1980-01-01

    Cancers of the liver, gallbladder, and pancreas which were difficult to be detected at an early stage were studied. Diagnostic accuracy of CT and ultrasonography for resectable small cancers was investigated by the project team and coworkers. Only a few cases of hepatocellular carcinoma, cancer of the common bile duct, and cancer of the pancreas head, with the maximum diameter of 1 - 2 cm, were able to be diagnosed by CT. There seemed to be more false negative cases with small cancers of that size. The limit of the size which could be detected by CT was thought to be 2 - 3 cm. Similar results were obtained by ultrasonography. Cancer of the pancreas body with the maximum diameter of less than 3.5 cm could not be detected by both CT and ultrasonography. Diagnostic accuracy of CT for liver cancer was improved by selective intraarterial injection of contrast medium. Improvement of the quality of ultrasonograms was achieved through this study. Merits and demerits of CT and ultrasonography were also compared. (Tsunoda, M.)

  17. Improvement of diagnostic accuracy, and clinical evaluation of computed tomography and ultrasonography for deep seated cancers

    Energy Technology Data Exchange (ETDEWEB)

    Arimizu, N [Chiba Univ. (Japan). School of Medicine

    1980-06-01

    Cancers of the liver, gallbladder, and pancreas which were difficult to be detected at an early stage were studied. Diagnostic accuracy of CT and ultrasonography for resectable small cancers was investigated by the project team and co-workers. Only a few cases of hepatocellular carcinoma, cancer of the common bile duct, and cancer of the pancreas head, with the maximum diameter of 1 - 2 cm, were able to be diagnosed by CT. There seemed to be more false negative cases with small cancers of that size. The limit of the size which could be detected by CT was thought to be 2 - 3 cm. Similar results were obtained by ultrasonography. Cancer of the pancreas body with the maximum diameter of less than 3.5 cm could not be detected by both CT and ultrasonography. Diagnostic accuracy of CT for liver cancer was improved by selective intraarterial injection of contrast medium. Improvement of the quality of ultrasonograms was achieved through this study. Merits and demerits of CT and ultrasonography were also compared.

  18. Accuracy of Ultrasound-Based Image Guidance for Daily Positioning of the Upper Abdomen: An Online Comparison With Cone Beam CT

    International Nuclear Information System (INIS)

    Boda-Heggemann, Judit; Mennemeyer, Philipp; Wertz, Hansjoerg; Riesenacker, Nadja; Kuepper, Beate; Lohr, Frank; Wenz, Frederik

    2009-01-01

    Purpose: Image-guided intensity-modulated radiotherapy can improve protection of organs at risk when large abdominal target volumes are irradiated. We estimated the daily positioning accuracy of ultrasound-based image guidance for abdominal target volumes by a direct comparison of daily imaging obtained with cone beam computed tomography (CBCT). Methods and Materials: Daily positioning (n = 83 positionings) of 15 patients was completed by using ultrasound guidance after an initial CBCT was obtained. Residual error after ultrasound was estimated by comparison with a second CBCT. Ultrasound image quality was visually rated using a scale of 1 to 4. Results: Of 15 patients, 7 patients had good sonographic imaging quality, 5 patients had satisfactory sonographic quality, and 3 patients were excluded because of unsatisfactory sonographic quality. When image quality was good, residual errors after ultrasound were -0.1 ± 3.11 mm in the x direction (left-right; group systematic error M = -0.09 mm; standard deviation [SD] of systematic error, Σ = 1.37 mm; SD of the random error, σ = 2.99 mm), 0.93 ± 4.31 mm in the y direction (superior-inferior, M = 1.12 mm; Σ = 2.96 mm; σ = 3.39 mm), and 0.71 ± 3.15 mm in the z direction (anteroposterior; M = 1.01 mm; Σ = 2.46 mm; σ = 2.24 mm). For patients with satisfactory image quality, residual error after ultrasound was -0.6 ± 5.26 mm in the x (M = 0.07 mm; Σ = 5.67 mm; σ = 4.86 mm), 1.76 ± 4.92 mm in the y (M = 3.54 mm; Σ = 4.1 mm; σ = 5.29 mm), and 1.19 ± 4.75 mm in the z (M = 0.82 mm; Σ = 2.86 mm; σ = 3.05 mm) directions. Conclusions: In patients from whom good sonographic image quality could be obtained, ultrasound improved daily positioning accuracy. In the case of satisfactory image quality, ultrasound guidance improved accuracy compared to that of skin marks only minimally. If sonographic image quality was unsatisfactory, daily CBCT scanning improved treatment accuracy distinctly over that of ultrasound. Use of
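
    The M, Σ and σ values quoted above follow the standard population decomposition of setup errors: per-patient mean residuals give the systematic component (its mean is M, its SD is Σ), and the root-mean-square of the per-patient SDs gives the random component σ. A minimal sketch on made-up residuals for one axis:

```python
import numpy as np

def decompose(residuals_per_patient):
    """Split daily setup residuals into systematic and random components."""
    means = np.array([np.mean(r) for r in residuals_per_patient])
    sds = np.array([np.std(r, ddof=1) for r in residuals_per_patient])
    M = means.mean()                     # group systematic error
    Sigma = means.std(ddof=1)            # SD of systematic errors
    sigma = np.sqrt(np.mean(sds ** 2))   # RMS of per-patient SDs = random error
    return M, Sigma, sigma

# Hypothetical residuals (mm) after ultrasound correction, 3 patients:
data = [[1.0, -0.5, 0.8], [2.0, 1.5, 2.5], [-1.0, 0.0, -0.8]]
M, Sigma, sigma = decompose(data)
```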

  19. Accuracy of volumetric measurement of simulated root resorption lacunas based on cone beam computed tomography.

    Science.gov (United States)

    Wang, Y; He, S; Guo, Y; Wang, S; Chen, S

    2013-08-01

    To evaluate the accuracy of volumetric measurement of simulated root resorption cavities based on cone beam computed tomography (CBCT), in comparison with that of Micro-computed tomography (Micro-CT), which served as the reference. The State Key Laboratory of Oral Diseases at Sichuan University. Thirty-two bovine teeth were included for standardized CBCT scanning and Micro-CT scanning before and after the simulation of different degrees of root resorption. The teeth were divided into three groups according to the depths of the root resorption cavity (group 1: 0.15, 0.2, 0.3 mm; group 2: 0.6, 1.0 mm; group 3: 1.5, 2.0, 3.0 mm). Each depth included four specimens. Differences in tooth volume before and after simulated root resorption were then calculated from CBCT and Micro-CT scans, respectively. The overall between-method agreement of the measurements was evaluated using the concordance correlation coefficient (CCC). For the first group, the average volume of resorption cavity was 1.07 mm³, and the between-method agreement of measurement for the volume changes was low (CCC = 0.098). For the second and third groups, the average volumes of resorption cavities were 3.47 and 6.73 mm³, respectively, and the between-method agreements were good (CCC = 0.828 and 0.895, respectively). The accuracy of 3-D quantitative volumetric measurement of simulated root resorption based on CBCT was fairly good in detecting simulated resorption cavities larger than 3.47 mm³, while it was not sufficient for measuring resorption cavities smaller than 1.07 mm³. This method could be applied in future studies of root resorption although further studies are required to improve its accuracy. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
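
    The agreement statistic used above, Lin's concordance correlation coefficient, penalizes both scatter and systematic offset between two methods, unlike Pearson's r. A minimal implementation:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Perfect agreement gives CCC = 1; a constant offset lowers the CCC even
# though the Pearson correlation stays exactly 1.
a = [1.0, 2.0, 3.0, 4.0]
print(ccc(a, a))
print(ccc(a, [v + 1 for v in a]))
```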

  20. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees

    Directory of Open Access Journals (Sweden)

    Dandekar Thomas

    2010-01-01

    Full Text Available Abstract Background In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. Results This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergency levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. Simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence only data, whereas a doubled marker size only accounts for robustness. Conclusions Individual secondary structures of ribosomal RNA sequences provide a valuable gain of information content that is useful for phylogenetics. Thus, the usage of ITS2 sequence together with secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. Reviewers This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. Open peer review Reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  1. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy

    KAUST Repository

    Ogorzalek, Tadeusz L.

    2018-01-04

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As HT, solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. This article is protected by copyright. All rights reserved.

  2. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy

    KAUST Repository

    Ogorzalek, Tadeusz L.; Hura, Greg L.; Belsom, Adam; Burnett, Kathryn H.; Kryshtafovych, Andriy; Tainer, John A.; Rappsilber, Juri; Tsutakawa, Susan E.; Fidelis, Krzysztof

    2018-01-01

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As HT, solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. This article is protected by copyright. All rights reserved.

  3. Stratified computed tomography findings improve diagnostic accuracy for appendicitis

    Science.gov (United States)

    Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June

    2014-01-01

    AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for the suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features as described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter, and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P appendicitis group. Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P appendicitis with equivocal CT findings. PMID:25320531

  4. Quality systems for radiotherapy: Impact by a central authority for improved accuracy, safety and accident prevention

    International Nuclear Information System (INIS)

    Jaervinen, H.; Sipilae, P.; Parkkinen, R.; Kosunen, A.; Jokelainen, I.

    2001-01-01

    High accuracy in radiotherapy is required for the good outcome of the treatments, which in turn implies the need to develop comprehensive Quality Systems for the operation of the clinic. The legal requirements as well as the recommendation by professional societies support this modern approach for improved accuracy, safety and accident prevention. The actions of a national radiation protection authority can play an important role in this development. In this paper, the actions of the authority in Finland (STUK) for the control of the implementation of the new requirements are reviewed. It is concluded that the role of the authorities should not be limited to simple control actions, but comprehensive practical support for the development of the Quality Systems should be provided. (author)

  5. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Sompop Moonchai

    2015-01-01

    Full Text Available This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified by two cases: the study of forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model was able to achieve higher fitting and prediction accuracy compared with the conventional GMC(1,n) and D-GMC(1,n) models.
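
    GMC(1,n) extends the basic grey model to n-1 relative factor series; as a hedged illustration of the shared machinery (accumulated series, background value, least-squares parameter estimation), here is the univariate GM(1,1) on made-up data, not the paper's model or data:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and return fitted values plus `steps` forecasts."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                    # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])          # conventional background value
    B = np.column_stack([-z, np.ones(len(z))])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)   # grey parameters
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # whitened solution
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))   # de-accumulate
    return np.concatenate([[x0[0]], x0_hat])

series = [10.0, 11.0, 12.1, 13.31, 14.641]   # toy near-exponential data
print(gm11_forecast(series, steps=2))
```

    The modified GMC(1,n) of the paper changes exactly the pieces marked above: the background value formula, the parameter estimation, and the prediction equation.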

  6. A Framework Based on Reference Data with Superordinate Accuracy for the Quality Analysis of Terrestrial Laser Scanning-Based Multi-Sensor-Systems.

    Science.gov (United States)

    Stenz, Ulrich; Hartmann, Jens; Paffenholz, Jens-André; Neumann, Ingo

    2017-08-16

    Terrestrial laser scanning (TLS) is an efficient solution to collect large-scale data. The efficiency can be increased by combining TLS with additional sensors in a TLS-based multi-sensor-system (MSS). The uncertainty of scanned points is not homogenous and depends on many different influencing factors. These include the sensor properties, referencing, scan geometry (e.g., distance and angle of incidence), environmental conditions (e.g., atmospheric conditions) and the scanned object (e.g., material, color and reflectance, etc.). The paper presents methods, infrastructure and results for the validation of the suitability of TLS and TLS-based MSS. Main aspects are the backward modelling of the uncertainty on the basis of reference data (e.g., point clouds) with superordinate accuracy and the appropriation of a suitable environment/infrastructure (e.g., the calibration process of the targets for the registration of laser scanner and laser tracker data in a common coordinate system with high accuracy). In this context superordinate accuracy means that the accuracy of the acquired reference data is better by a factor of 10 than the data of the validated TLS and TLS-based MSS. These aspects play an important role in engineering geodesy, where the aimed accuracy lies in a range of a few mm or less.

  7. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, and the minimum fitting accuracy of single noise is 93.2%. Based on the improved AR(3) model, the denoising method of SHAKF is more effective than traditional methods, and its effect is better than 30%. The random drift error of FOG is reduced effectively, and the precision of the FOG is improved.
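
    The AR-model part of the method can be sketched as a least-squares fit with an intercept term (the intercept standing in for the paper's use of the measured, non-zero-mean signal rather than a de-meaned one); the data below are simulated, not real FOG output:

```python
import numpy as np

def fit_ar(x, p=3):
    """Least-squares AR(p) fit: x[k] ~ sum_i a[i]*x[k-1-i] + c."""
    x = np.asarray(x, float)
    A = np.array([np.concatenate([x[k - p:k][::-1], [1.0]])
                  for k in range(p, len(x))])
    coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coef[:-1], coef[-1]        # AR coefficients and intercept

# Simulate an AR(3) drift-like signal with known coefficients.
rng = np.random.default_rng(0)
x = np.zeros(500)
for k in range(3, 500):
    x[k] = 0.5 * x[k-1] - 0.2 * x[k-2] + 0.1 * x[k-3] + 0.1 * rng.standard_normal()

a, c = fit_ar(x, p=3)   # should recover roughly (0.5, -0.2, 0.1)
```

    The fitted AR model then supplies the state-transition matrix for the (Sage-Husa adaptive) Kalman filter stage, which is omitted here.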

  8. Accuracy of magnetic resonance based susceptibility measurements

    Science.gov (United States)

    Erdevig, Hannah E.; Russek, Stephen E.; Carnicka, Slavka; Stupic, Karl F.; Keenan, Kathryn E.

    2017-05-01

    Magnetic Resonance Imaging (MRI) is increasingly used to map the magnetic susceptibility of tissue to identify cerebral microbleeds associated with traumatic brain injury and pathological iron deposits associated with neurodegenerative diseases such as Parkinson's and Alzheimer's disease. Accurate measurements of susceptibility are important for determining oxygen and iron content in blood vessels and brain tissue for use in noninvasive clinical diagnosis and treatment assessments. Induced magnetic fields with amplitude on the order of 100 nT, can be detected using MRI phase images. The induced field distributions can then be inverted to obtain quantitative susceptibility maps. The focus of this research was to determine the accuracy of MRI-based susceptibility measurements using simple phantom geometries and to compare the susceptibility measurements with magnetometry measurements where SI-traceable standards are available. The susceptibilities of paramagnetic salt solutions in cylindrical containers were measured as a function of orientation relative to the static MRI field. The observed induced fields as a function of orientation of the cylinder were in good agreement with simple models. The MRI susceptibility measurements were compared with SQUID magnetometry using NIST-traceable standards. MRI can accurately measure relative magnetic susceptibilities while SQUID magnetometry measures absolute magnetic susceptibility. Given the accuracy of moment measurements of tissue mimicking samples, and the need to look at small differences in tissue properties, the use of existing NIST standard reference materials to calibrate MRI reference structures is problematic and better reference materials are required.
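
    For the cylindrical containers described above, the orientation dependence of the internal field follows the standard small-susceptibility approximation for a long cylinder at angle theta to B0: dB = dchi * B0 * (3*cos(theta)**2 - 1) / 6. The parameter values below are illustrative, not the study's:

```python
import math

def cylinder_field_shift(dchi, b0_tesla, theta_rad):
    """Internal field shift (T) of an infinite cylinder at angle theta to B0."""
    return dchi * b0_tesla * (3 * math.cos(theta_rad) ** 2 - 1) / 6.0

b0 = 3.0       # typical clinical field strength, tesla
dchi = 1e-6    # ~1 ppm susceptibility difference (assumed)
for deg in (0.0, 54.7356, 90.0):   # parallel, magic angle, perpendicular
    print(deg, cylinder_field_shift(dchi, b0, math.radians(deg)))
```

    At 3 T a 1 ppm difference gives shifts on the order of the ~100 nT induced fields quoted in the abstract, and the shift vanishes at the magic angle.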

  9. Teaching accuracy and reliability for student projects

    Science.gov (United States)

    Fisher, Nick

    2002-09-01

    Physics students at Rugby School follow the Salters Horners A-level course, which involves working on a two-week practical project of their own choosing. Pupils often misunderstand the concepts of accuracy and reliability, believing, for example, that repeating readings makes them more accurate and more reliable, whereas all it does is help to check repeatability. The course emphasizes the ideas of checking anomalous points, improving accuracy and making readings more sensitive. This article describes how we teach pupils in preparation for their projects. Based on many years of running such projects, much of this material is from a short booklet that we give out to pupils, when we train them in practical project skills.

  10. A new source difference artificial neural network for enhanced positioning accuracy

    International Nuclear Information System (INIS)

    Bhatt, Deepak; Aggarwal, Priyanka; Devabhaktuni, Vijay; Bhattacharya, Prabir

    2012-01-01

    Integrated inertial navigation system (INS) and global positioning system (GPS) units provide reliable navigation solution compared to standalone INS or GPS. Traditional Kalman filter-based INS/GPS integration schemes have several inadequacies related to sensor error model and immunity to noise. Alternatively, multi-layer perceptron (MLP) neural networks with three layers have been implemented to improve the position accuracy of the integrated system. However, MLP neural networks show poor accuracy for low-cost INS because of the large inherent sensor errors. For the first time the paper demonstrates the use of knowledge-based source difference artificial neural network (SDANN) to improve navigation performance of low-cost sensor, with or without external aiding sources. Unlike the conventional MLP or artificial neural networks (ANN), the structure of SDANN consists of two MLP neural networks called the coarse model and the difference model. The coarse model learns the input–output data relationship whereas the difference model adds knowledge to the system and fine-tunes the coarse model output by learning the associated training or estimation error. Our proposed SDANN model illustrated a significant improvement in navigation accuracy of up to 81% over conventional MLP. The results demonstrate that the proposed SDANN method is effective for GPS/INS integration schemes using low-cost inertial sensors, with and without GPS.
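
    The coarse-plus-difference structure can be sketched with linear models standing in for the two MLPs (a structural illustration only, on synthetic data): the coarse model supplies approximate prior knowledge, and the difference model is trained on the coarse model's residual, so the sum of the two corrects the coarse model's bias.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.3 + 0.02 * rng.standard_normal(200)

def coarse(Z):
    """Stand-in for the coarse model: an assumed low-fidelity prior."""
    return 1.2 * Z[:, 0]

def fit_linear(X, t):
    """Least-squares linear model standing in for an MLP."""
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, t, rcond=None)
    return lambda Z: np.column_stack([Z, np.ones(len(Z))]) @ w

difference = fit_linear(X, y - coarse(X))   # learn the coarse model's error

def sdann_predict(Z):
    return coarse(Z) + difference(Z)        # coarse output + learned correction

mse_coarse = float(np.mean((y - coarse(X)) ** 2))
mse_sdann = float(np.mean((y - sdann_predict(X)) ** 2))
```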

  11. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, A. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Ranallo, F. N. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Judy, P. F. [Brigham and Women’s Hospital, Boston, Massachusetts 02115 (United States); Gierada, D. S. [Department of Radiology, Washington University, St. Louis, Missouri 63110 (United States); Fain, S. B., E-mail: sfain@wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering,University of Wisconsin School of Engineering, Madison, Wisconsin 53706 (United States)

    2014-11-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

  12. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    International Nuclear Information System (INIS)

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-01-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel

  13. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Directory of Open Access Journals (Sweden)

    Oleksandr Makeyev

    2016-06-01

    Full Text Available Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.

  14. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Science.gov (United States)

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
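
    As an illustration of the point-stencil idea behind the (4n + 1)-point method, the simplest case (n = 1, the classic five-point stencil) can be sketched in a few lines. The radius, evaluation point, and quadratic test potential below are illustrative choices, not values from the paper:

```python
# Five-point finite-difference estimate of the Laplacian at (x, y): the
# simplest member of the (4n + 1)-point family of surface Laplacian stencils.
def five_point_laplacian(f, x, y, r):
    return (f(x + r, y) + f(x - r, y) + f(x, y + r) + f(x, y - r)
            - 4.0 * f(x, y)) / r ** 2

# For a quadratic potential x^2 + y^2 the true Laplacian is exactly 4,
# so the stencil's truncation error vanishes on this test field.
phi = lambda x, y: x ** 2 + y ** 2
print(five_point_laplacian(phi, 0.3, -0.2, 0.05))
```

    For n ≥ 2 the same idea uses 4n samples on n rings, with coefficients chosen to cancel higher-order truncation terms; the paper's contribution is to let the ring radii vary rather than stay equally spaced.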

  15. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

    Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to effectively use critical experiments and to enhance the applicability of the bias factor method for the improvement of the prediction accuracy of neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value with them. The LC and PE methods define the semifictitious experimental values by a linear combination of experimental values and the product of exponentiated experimental values, respectively, and the corresponding semifictitious calculation values by those of calculation values. A bias factor is defined by the ratio of the semifictitious experimental value to the semifictitious calculation value in both methods. We formulate how to determine weights for the LC method and exponents for the PE method in order to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method which utilizes a single experimental result and the generalized bias factor method which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful method for improving the prediction accuracy. The main advantages of the PE method are summarized as follows. The prediction accuracy is necessarily improved compared with the design calculation value even when experimental results include large experimental errors. This is a special feature that the other methods do not have. The prediction accuracy is most effectively improved by utilizing all the experimental results. From these facts, it can be said that the PE method effectively utilizes all the experimental results and has a possibility to make a full-scale-mockup experiment unnecessary with the use of existing and future benchmark
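
    The PE idea, a bias factor built as a product of exponentiated experiment-to-calculation ratios, can be sketched as follows. The values and exponents are hypothetical; in the paper the exponents are derived by minimizing the variance of the design prediction, not fixed by hand:

```python
# Hypothetical sketch of the PE (product of exponentiated experimental
# values) bias factor applied to a design calculation.
def pe_bias_factor(experiments, calculations, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    factor = 1.0
    for e, c, w in zip(experiments, calculations, weights):
        factor *= (e / c) ** w
    return factor

E = [1.002, 0.998, 1.005]   # measured values from three critical experiments
C = [1.010, 1.006, 1.012]   # corresponding calculated values
w = [0.5, 0.3, 0.2]         # hypothetical exponents (must sum to 1)

bf = pe_bias_factor(E, C, w)
design_prediction = 1.008 * bf   # design calculation times the bias factor
print(round(bf, 5), round(design_prediction, 5))
```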

  16. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts’ forecasts are utilized as prior information, for Romania these predictions being provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis, and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models like VAR, Bayesian VAR, simultaneous equations model, dynamic model, log-linear model. The Bayesian combinations that used experts’ predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, outperforming also zero and equal weights predictions and naïve forecasts.
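
    A minimal sketch of the shrinkage-combination idea: the model forecast is shrunk toward an expert prior, and as the shrinkage parameter grows the combination converges to the expert prediction, which is the regime the abstract reports as most accurate. All numbers are hypothetical:

```python
# Shrinkage combination of a model forecast and an expert prior; lam is the
# shrinkage parameter (lam -> infinity recovers the expert prediction).
def shrinkage_combine(model_forecast, expert_prior, lam):
    return (model_forecast + lam * expert_prior) / (1.0 + lam)

model, expert = 4.8, 3.9    # hypothetical inflation forecasts, in percent
for lam in (0.0, 1.0, 100.0):
    print(lam, round(shrinkage_combine(model, expert, lam), 3))
```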

  17. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    Science.gov (United States)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor under extrapolation; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecast based on the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values near the two endpoints are very close to the observations. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a similar variation trend to the LS extrapolation residual sequence, as the modeling object of AR for the residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparison with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirms the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for the polar motion forecast in the 1-10 days, show that the forecast accuracy of the proposed model can reach the world level.
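
    The LS+AR structure can be sketched in a heavily simplified form: a deterministic least-squares fit (here just a straight line, rather than the trend-plus-periodic terms used for polar motion) plus an AR(1) forecast of the fit residuals. The series below is synthetic:

```python
import math

# Ordinary least-squares line fit: returns intercept a and slope b.
def fit_line(t, y):
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    b = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / \
        sum((ti - tm) ** 2 for ti in t)
    return ym - b * tm, b

# AR(1) coefficient from the lag-1 autocovariance of the residuals.
def ar1_coefficient(r):
    return sum(r[i] * r[i - 1] for i in range(1, len(r))) / \
        sum(x * x for x in r[:-1])

t = list(range(20))
y = [0.5 + 0.1 * ti + 0.05 * math.sin(ti) for ti in t]   # synthetic series
a, b = fit_line(t, y)
resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
phi = ar1_coefficient(resid)

# One-step-ahead prediction: LS extrapolation plus AR(1) residual forecast.
forecast = a + b * 20 + phi * resid[-1]
print(round(forecast, 4))
```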

  18. Test Expectancy Affects Metacomprehension Accuracy

    Science.gov (United States)

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  19. Secondary Signs May Improve the Diagnostic Accuracy of Equivocal Ultrasounds for Suspected Appendicitis in Children

    Science.gov (United States)

    Partain, Kristin N.; Patel, Adarsh; Travers, Curtis; McCracken, Courtney; Loewen, Jonathan; Braithwaite, Kiery; Heiss, Kurt F.; Raval, Mehul V.

    2016-01-01

    Introduction Ultrasound (US) is the preferred imaging modality for evaluating appendicitis. Our purpose was to determine if including secondary signs (SS) improves diagnostic accuracy in equivocal US studies. Methods Retrospective review identified 825 children presenting with concern for appendicitis and with a right lower quadrant (RLQ) US. Regression models identified which SS were associated with appendicitis. Test characteristics were demonstrated. Results 530 patients (64%) had equivocal US reports. Of 114 (22%) patients with equivocal US undergoing CT, those with SS were more likely to have appendicitis (48.6% vs 14.6%, p < […]). […] appendicitis (61.0% vs 33.6%, p < […]). SS associated with appendicitis included fluid collection (adjusted odds ratio (OR) 13.3, 95% Confidence Interval (CI) 2.1–82.8), hyperemia (OR=2.0, 95%CI 1.5–95.5), free fluid (OR=9.8, 95%CI 3.8–25.4), and appendicolith (OR=7.9, 95%CI 1.7–37.2). Wall thickness, bowel peristalsis, and echogenic fat were not associated with appendicitis. Equivocal US that included hyperemia, a fluid collection, or an appendicolith had 96% specificity and 88% accuracy. Conclusion Use of SS in RLQ US assists in the diagnostic accuracy of appendicitis. SS may guide clinicians and reduce unnecessary CT and admissions. PMID:27039121

  20. An improved feature extraction algorithm based on KAZE for multi-spectral image

    Science.gov (United States)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
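
    The adjusted-cosine similarity mentioned above mean-centres each vector before taking the cosine, which makes the score insensitive to a constant intensity offset between spectral bands; a minimal sketch with made-up descriptors:

```python
import math

# Adjusted-cosine similarity: cosine of the mean-centred vectors.
def adjusted_cosine(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    uc = [x - mu for x in u]
    vc = [x - mv for x in v]
    dot = sum(a * b for a, b in zip(uc, vc))
    return dot / (math.sqrt(sum(a * a for a in uc)) *
                  math.sqrt(sum(b * b for b in vc)))

d1 = [0.2, 0.8, 0.4, 0.6]
d2 = [x + 0.3 for x in d1]   # same descriptor shifted by a constant offset
print(round(adjusted_cosine(d1, d2), 6))   # offset cancels: similarity ~ 1.0
```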

  1. Improving Prediction Accuracy of a Rate-Based Model of an MEA-Based Carbon Capture Process for Large-Scale Commercial Deployment

    Directory of Open Access Journals (Sweden)

    Xiaobo Luo

    2017-04-01

    Full Text Available Carbon capture and storage (CCS) technology will play a critical role in reducing anthropogenic carbon dioxide (CO2) emission from fossil-fired power plants and other energy-intensive processes. However, the increment of energy cost caused by equipping a carbon capture process is the main barrier to its commercial deployment. To reduce the capital and operating costs of carbon capture, great efforts have been made to achieve optimal design and operation through process modeling, simulation, and optimization. Accurate models form an essential foundation for this purpose. This paper presents a study on developing a more accurate rate-based model in Aspen Plus® for the monoethanolamine (MEA)-based carbon capture process by multistage model validations. The modeling framework for this process was established first. The steady-state process model was then developed and validated at three stages, which included a thermodynamic model, physical properties calculations, and a process model at the pilot plant scale, covering a wide range of pressures, temperatures, and CO2 loadings. The calculation correlations of liquid density and interfacial area were updated by coding Fortran subroutines in Aspen Plus®. The validation results show that the correlation combination for the thermodynamic model used in this study has higher accuracy than those of three other key publications and the model prediction of the process model has a good agreement with the pilot plant experimental data. A case study was carried out for carbon capture from a 250 MWe combined cycle gas turbine (CCGT) power plant. Shorter packing height and lower specific duty were achieved using this accurate model.

  2. An improved single cell ultrahigh throughput screening method based on in vitro compartmentalization.

    Directory of Open Access Journals (Sweden)

    Fuqiang Ma

    Full Text Available High-throughput screening is a key technique in discovery and engineering of enzymes. In vitro compartmentalization based fluorescence-activated cell sorting (IVC-FACS) has recently emerged as a powerful tool for ultrahigh-throughput screening of biocatalysts. However, the accuracy of current IVC-FACS assays is severely limited by the wide polydispersity of micro-reactors generated by homogenizing. Here, an improved protocol based on a membrane-extrusion technique is reported to generate the micro-reactors in a more uniform manner. This crucial improvement enables ultrahigh-throughput screening of enzymatic activity at a speed of >10⁸ clones/day with an accuracy that can discriminate as little as two-fold differences in enzymatic activity inside the micro-reactors, which is higher than previously reported for similar IVC-FACS systems. The enzymatic reaction in the micro-reactors has very similar kinetic behavior to the bulk reaction system and shows a wide dynamic range. By using the modified IVC-FACS, E. coli cells with esterase activity could be enriched 330-fold from large excesses of background cells through a single round of sorting. The utility of this new IVC-FACS system was further illustrated by the directed evolution of the thermophilic esterase AFEST. The catalytic activity of this already very efficient esterase was further improved by ∼2-fold, resulting in several improved mutants with k_cat/K_M values approaching the diffusion-limited efficiency of ∼10⁸ M⁻¹s⁻¹.

  3. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Science.gov (United States)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations.
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that
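
    The mutual-information similarity that drives the intensity-based registration above can be sketched from a joint histogram; the toy 1-D "images" below stand in for the quantized MRI and histology intensities:

```python
import math
from collections import Counter

# Mutual information of two equally long, quantized intensity sequences,
# computed from their joint histogram. Toy data, for illustration only.
def mutual_information(a, b):
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

img1 = [0, 0, 1, 1, 2, 2, 3, 3]
img2 = [3, 3, 2, 2, 1, 1, 0, 0]   # perfectly (inversely) dependent
print(round(mutual_information(img1, img2), 4))   # log(4) ~ 1.3863
```

    MI peaks when one image's intensities are predictable from the other's, regardless of the (possibly inverse or non-linear) intensity mapping, which is why it suits multimodal alignment.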

  4. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    Science.gov (United States)

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed to glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by a proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed both on simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as the hypo/hyper glycemic alert generators or the artificial pancreas.
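
    The blood-to-interstitium compensation can be sketched with the usual first-order lag model: if interstitial glucose IG tracks blood glucose BG with time constant tau, then BG ≈ IG + tau·dIG/dt. The paper uses a regularized deconvolution with an online-updated model; the discrete inversion and the values below are illustrative only:

```python
# Invert a forward-Euler first-order lag: from the sensor signal ig, recover
# the driving blood glucose (one sample behind the latest IG reading).
def invert_first_order_lag(ig, tau, dt):
    return [ig[k - 1] + tau * (ig[k] - ig[k - 1]) / dt
            for k in range(1, len(ig))]

tau, dt = 10.0, 5.0                    # minutes (hypothetical)
true_bg = [100, 110, 120, 130, 140]    # mg/dL ramp

# Simulate the sensor lag: ig[k+1] = ig[k] + dt/tau * (bg[k] - ig[k]).
ig = [100.0]
for g in true_bg[:-1]:
    ig.append(ig[-1] + dt / tau * (g - ig[-1]))

recovered = invert_first_order_lag(ig, tau, dt)
print(recovered)   # → [100.0, 110.0, 120.0, 130.0]
```

    In practice the derivative amplifies sensor noise, which is why the paper uses a regularized deconvolution rather than this naive inversion.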

  5. EnzDP: improved enzyme annotation for metabolic network reconstruction based on domain composition profiles.

    Science.gov (United States)

    Nguyen, Nam-Ninh; Srihari, Sriganesh; Leong, Hon Wai; Chong, Ket-Fah

    2015-10-01

    Determining the entire complement of enzymes and their enzymatic functions is a fundamental step for reconstructing the metabolic network of cells. High quality enzyme annotation helps in enhancing metabolic networks reconstructed from the genome, especially by reducing gaps and increasing the enzyme coverage. Currently, structure-based and network-based approaches can only cover a limited number of enzyme families, and the accuracy of homology-based approaches can be further improved. Bottom-up homology-based approach improves the coverage by rebuilding Hidden Markov Model (HMM) profiles for all known enzymes. However, its clustering procedure relies firmly on BLAST similarity score, ignoring protein domains/patterns, and is sensitive to changes in cut-off thresholds. Here, we use functional domain architecture to score the association between domain families and enzyme families (Domain-Enzyme Association Scoring, DEAS). The DEAS score is used to calculate the similarity between proteins, which is then used in clustering procedure, instead of using sequence similarity score. We improve the enzyme annotation protocol using a stringent classification procedure, and by choosing optimal threshold settings and checking for active sites. Our analysis shows that our stringent protocol EnzDP can cover up to 90% of enzyme families available in Swiss-Prot. It achieves a high accuracy of 94.5% based on five-fold cross-validation. EnzDP outperforms existing methods across several testing scenarios. Thus, EnzDP serves as a reliable automated tool for enzyme annotation and metabolic network reconstruction. Available at: www.comp.nus.edu.sg/~nguyennn/EnzDP .

  6. Accuracy improvement of CT reconstruction using tree-structured filter bank

    International Nuclear Information System (INIS)

    Ueda, Kazuhiro; Morimoto, Hiroaki; Morikawa, Yoshitaka; Murakami, Junichi

    2009-01-01

    Accuracy improvement of the 'CT reconstruction algorithm using TSFB (Tree-Structured Filter Bank)', a high-speed CT reconstruction algorithm, is proposed. The TSFB method greatly reduces the amount of computation compared with the CB (Convolution Backprojection) method, but artifacts appear in the reconstruction image because stage processing disregards the signal outside the reconstruction domain. In addition, the whole-band filter that is a component of the two-dimensional synthesis filter is an IIR filter, so artifacts also appear at the edge of the reconstruction image. To suppress these artifacts, the proposed method enlarges the TSFB processing range into the outer domain by controlling the width of the specimen line and adding lines outside the reconstruction domain. Furthermore, to avoid an increase in the amount of computation, an algorithm is proposed that determines the required processing range from the number of TSFB processing stages and the slope of the filter, and then updates the position and width of the specimen line to cover that range. Simulations aimed at realizing high-speed and highly accurate CT reconstruction in this way show that the quality of the reconstructed image is improved over the TSFB method and matches that of the CB method. (T. Tanaka)

  7. The design of visible system for improving the measurement accuracy of imaging points

    Science.gov (United States)

    Shan, Qiu-sha; Li, Gang; Zeng, Luan; Liu, Kai; Yan, Pei-pei; Duan, Jing; Jiang, Kai

    2018-02-01

    Binocular stereoscopic measurement technology has wide applications in robot vision and 3D measurement. Measurement precision is a very important factor; especially in 3D coordinate measurement, high measurement accuracy places stringent demands on the distortion of the optical system. In order to improve the measurement accuracy of imaging points and reduce their distortion, the optical system must satisfy the requirement of extra-low distortion (less than 0.1%). A transmissive visible-light lens with a telecentric beam path in image space was designed, which adopts the imaging model of binocular stereo vision and images a drone at a finite distance. The optical system adopts a complex double-Gauss structure and places the aperture stop on the front focal plane of the rear lens group, so that the exit pupil of the system lies at infinity, realizing the telecentric beam path in image space. The main optical parameters are as follows: the spectral range is the visible band, the effective focal length is f' = 30 mm, the relative aperture is 1/3, and the field of view is 21°. The final design results show that the RMS value of the spot diagram of the optical lens at the maximum field of view is 2.3 μm, which is less than one pixel (3.45 μm); the distortion is less than 0.1%, so the system has the advantage of extra-low distortion and avoids subsequent correction of image distortion; the modulation transfer function of the optical lens is 0.58 (@145 lp/mm), so the imaging quality of the system is close to the diffraction limit; and the system has a simple structure and satisfies the requirements of the optical indexes. Ultimately, measurement of the drone at a finite distance was achieved based on the imaging model of binocular stereo vision.

  8. SU-G-JeP3-01: A Method to Quantify Lung SBRT Target Localization Accuracy Based On Digitally Reconstructed Fluoroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Lafata, K; Ren, L; Cai, J; Yin, F [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: To develop a methodology based on digitally-reconstructed-fluoroscopy (DRF) to quantitatively assess target localization accuracy of lung SBRT, and to evaluate using both a dynamic digital phantom and a patient dataset. Methods: For each treatment field, a 10-phase DRF is generated based on the planning 4DCT. Each frame is pre-processed with a morphological top-hat filter, and corresponding beam apertures are projected to each detector plane. A template-matching algorithm based on cross-correlation is used to detect the tumor location in each frame. Tumor motion relative to the beam aperture is extracted in the superior-inferior direction based on each frame’s impulse response to the template, and the mean tumor position (MTP) is calculated as the average tumor displacement. The DRF template coordinates are then transferred to the corresponding MV-cine dataset, which is retrospectively filtered as above. The treatment MTP is calculated within each field’s projection space, relative to the DRF-defined template. The field’s localization error is defined as the difference between the DRF-derived-MTP (planning) and the MV-cine-derived-MTP (delivery). A dynamic digital phantom was used to assess the algorithm’s ability to detect intra-fractional changes in patient alignment, by simulating different spatial variations in the MV-cine and calculating the corresponding change in MTP. Inter- and intra-fractional variation, IGRT accuracy, and filtering effects were investigated on a patient dataset. Results: Phantom results demonstrated a high accuracy in detecting both translational and rotational variation. The lowest localization error of the patient dataset was achieved at each fraction’s first field (mean=0.38mm), with Fx3 demonstrating a particularly strong correlation between intra-fractional motion-caused localization error and treatment progress. Filtering significantly improved tracking visibility in both the DRF and MV-cine images. Conclusion: We have
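
    The template-matching step can be sketched with a zero-normalized cross-correlation over a 1-D intensity profile (2-D fluoroscopy frames work the same way over rows and columns); the signal and template below are synthetic:

```python
# Slide a template over a 1-D profile and return the offset with the highest
# zero-normalized cross-correlation (ZNCC) score.
def best_match_offset(signal, template):
    def zncc(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0
    scores = [zncc(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [1, 3, 1]                   # synthetic tumor intensity template
signal = [0, 0, 1, 2, 6, 2, 0, 0, 0]   # synthetic detector-row profile
print(best_match_offset(signal, template))   # → 3
```

    Because ZNCC is invariant to gain and offset, the window [2, 6, 2] (an exact scaled copy of the template) scores 1.0 and wins.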

  9. The Single-Molecule Centroid Localization Algorithm Improves the Accuracy of Fluorescence Binding Assays.

    Science.gov (United States)

    Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip

    2018-03-13

    Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.
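
    The centroid localization step amounts to an intensity-weighted mean of pixel coordinates over a small patch around each candidate spot; a minimal sketch on a synthetic patch:

```python
# Intensity-weighted centroid of a small pixel patch: gives a sub-pixel
# estimate of the spot centre. The patch below is synthetic.
def centroid(patch):
    total = sum(sum(row) for row in patch)
    cy = sum(y * sum(row) for y, row in enumerate(patch)) / total
    cx = sum(x * v for row in patch for x, v in enumerate(row)) / total
    return cx, cy

patch = [
    [0, 1, 0],
    [1, 4, 1],
    [0, 3, 0],   # brighter bottom pixel pulls the centroid downward
]
cx, cy = centroid(patch)
print(round(cx, 3), round(cy, 3))   # → 1.0 1.2
```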

  10. Real-Time Tropospheric Product Establishment and Accuracy Assessment in China

    Science.gov (United States)

    Chen, M.; Guo, J.; Wu, J.; Song, W.; Zhang, D.

    2018-04-01

    Tropospheric delay has always been an important issue in Global Navigation Satellite System (GNSS) processing. Empirical tropospheric delay models struggle to represent complex and volatile atmospheric conditions, resulting in poor accuracy and difficulty in meeting the demands of precise positioning. In recent years, some scholars have proposed establishing real-time tropospheric products by using real-time or near-real-time GNSS observations in a small region, and have achieved good results. This paper uses real-time observation data from 210 Chinese national GNSS reference stations to estimate the tropospheric delay, and establishes a nationwide ZWD grid model. In order to analyze the influence of the tropospheric grid product on wide-area real-time PPP, this paper compares the method of taking the ZWD grid product as a constraint with the model correction method. The results show that the ZWD grid product estimated from the national reference stations can improve PPP accuracy and convergence speed. The accuracy in the north (N), east (E), and up (U) directions increases by 31.8%, 15.6%, and 38.3%, respectively; as with convergence speed, the accuracy in the U direction improves the most.
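
    Applying a gridded ZWD product at a user position typically means bilinear interpolation from the four surrounding grid nodes; a minimal sketch with hypothetical grid values and coordinates:

```python
# Bilinear interpolation of a zenith wet delay (ZWD) grid at (lat, lon).
def bilinear_zwd(lat, lon, lat0, lon0, step, grid):
    """grid[i][j] holds ZWD (m) at (lat0 + i*step, lon0 + j*step)."""
    fi = (lat - lat0) / step
    fj = (lon - lon0) / step
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    return ((1 - di) * (1 - dj) * grid[i][j] + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j] + di * dj * grid[i + 1][j + 1])

grid = [[0.120, 0.124],
        [0.128, 0.136]]   # hypothetical ZWD values (m) on a 1-degree grid
print(round(bilinear_zwd(30.5, 114.5, 30.0, 114.0, 1.0, grid), 4))
```

    The interpolated value would then enter the PPP filter as a tightly constrained a priori wet delay instead of a loosely modeled unknown.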

  11. On the Accuracy of Language Trees

    Science.gov (United States)

    Pompei, Simone; Loreto, Vittorio; Tria, Francesca

    2011-01-01

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information are typically lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete and often noisy, information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve it. PMID:21674034

  12. On the accuracy of language trees.

    Directory of Open Access Journals (Sweden)

    Simone Pompei

    Full Text Available Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information are typically lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete and often noisy, information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction.
Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve
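The distance-based reconstruction surveyed in this record can be illustrated with a minimal sketch: agglomerative UPGMA clustering over a lexical distance matrix, one of the simplest members of the algorithm family evaluated (the four-language matrix and names below are invented for illustration; neighbor joining is the usual state-of-the-art refinement):

```python
def upgma(dist, names):
    """Agglomerative UPGMA: repeatedly merge the closest pair of clusters,
    averaging inter-cluster distances weighted by cluster size, and return
    the tree in Newick-style nested parentheses."""
    clusters = {i: (names[i], 1) for i in range(len(names))}  # id -> (newick, size)
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    nxt = len(names)
    while len(clusters) > 1:
        a, b = min(d, key=d.get)                       # closest pair of clusters
        (na, sa), (nb, sb) = clusters.pop(a), clusters.pop(b)
        for c in clusters:
            # size-weighted average distance to the newly merged cluster
            dac = d.pop((min(a, c), max(a, c)))
            dbc = d.pop((min(b, c), max(b, c)))
            d[(c, nxt)] = (sa * dac + sb * dbc) / (sa + sb)
        d.pop((a, b))
        clusters[nxt] = ("(%s,%s)" % (na, nb), sa + sb)
        nxt += 1
    return next(iter(clusters.values()))[0]

# Toy lexical distances: A-B are close, C-D are close, the pairs are far apart.
D = [[0, 2, 8, 8],
     [2, 0, 8, 8],
     [8, 8, 0, 3],
     [8, 8, 3, 0]]
tree = upgma(D, ["A", "B", "C", "D"])
print(tree)  # ((A,B),(C,D))
```

A reconstruction produced this way is what the paper's generalized tree distances would then score against the Ethnologue classification.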

  13. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    Science.gov (United States)

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  14. Framework and implementation for improving physics essential skills via computer-based practice: Vector math

    Science.gov (United States)

    Mikula, Brendon D.; Heckler, Andrew F.

    2017-06-01

    We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with a careful identification of target skills and the study of specific student difficulties with these skills. It then employs computer-based instruction, immediate feedback, mastery grading, and well-researched principles from cognitive psychology such as interleaved training sequences and distributed practice. We implemented this with more than 1500 students over 2 semesters. Students completed the mastery practice for an average of about 13 min/week, for a total of about 2-3 h for the whole semester. Results reveal large (>1 SD) pretest to post-test gains in accuracy in vector skills, even compared to a control group, and these gains were retained at least 2 months after practice. We also find evidence of improved fluency, student satisfaction, and that awarding regular course credit results in higher participation and higher learning gains than awarding extra credit. In all, we find that simple computer-based mastery practice is an effective and efficient way to improve a set of basic and essential skills for introductory physics.

  15. Target Matching Recognition for Satellite Images Based on the Improved FREAK Algorithm

    Directory of Open Access Journals (Sweden)

    Yantong Chen

    2016-01-01

    Full Text Available Satellite remote sensing image target matching recognition exhibits poor robustness and accuracy because of the unfit feature extractor and large data quantity. To address this problem, we propose a new feature extraction algorithm for fast target matching recognition that comprises an improved features from accelerated segment test (FAST) feature detector and a binary fast retina key point (FREAK) feature descriptor. To improve robustness, we extend the FAST feature detector by applying scale space theory and then transform the feature vector acquired by the FREAK descriptor from decimal into binary. We reduce the quantity of data in the computer and improve matching accuracy by using the binary space. Simulation test results show that our algorithm outperforms other relevant methods in terms of robustness and accuracy.
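The payoff of the decimal-to-binary transform described above is that descriptor matching collapses to Hamming distance, i.e. an XOR plus a popcount, which is why binary FREAK-style descriptors cut both storage and matching time. A minimal sketch (toy 16-bit descriptors, not the real 512-bit FREAK layout):

```python
def hamming(a, b):
    """Hamming distance between two equal-length descriptors packed as ints:
    XOR the bit patterns, then count the set bits."""
    return bin(a ^ b).count("1")

def match(query, database):
    """Brute-force nearest neighbour in Hamming space, the matching rule
    used with binary descriptors such as FREAK."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))

# Toy 16-bit "descriptors": entry 2 differs from the query by a single bit.
db = [0b1010101010101010, 0b1111000011110000, 0b0011001100110011]
q  = 0b0011001100110001
print(match(q, db))  # 2
```

On real hardware the XOR/popcount pair maps to single instructions, which is the data-reduction benefit the abstract refers to.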

  16. Compact Intraoperative MRI: Stereotactic Accuracy and Future Directions.

    Science.gov (United States)

    Markowitz, Daniel; Lin, Dishen; Salas, Sussan; Kohn, Nina; Schulder, Michael

    2017-01-01

    Intraoperative imaging must supply data that can be used for accurate stereotactic navigation. This information should be at least as accurate as that acquired from diagnostic imagers. The aim of this study was to compare the stereotactic accuracy of an updated compact intraoperative MRI (iMRI) device based on a 0.15-T magnet to standard surgical navigation on a 1.5-T diagnostic scan MRI and to navigation with an earlier model of the same system. The accuracy of each system was assessed using a water-filled phantom model of the brain. Data collected with the new system were compared to those obtained in a previous study assessing the older system. The accuracy of the new iMRI was measured against standard surgical navigation on a 1.5-T MRI using T1-weighted (W) images. The mean error with the iMRI using T1W images was lower than that based on images from the 1.5-T scan (1.24 vs. 2.43 mm). T2W images from the newer iMRI yielded a lower navigation error than those acquired with the prior model (1.28 vs. 3.15 mm). Improvements in magnet design can yield progressive increases in accuracy, validating the concept of compact, low-field iMRI. Avoiding the need for registration between image and surgical space increases navigation accuracy. © 2017 S. Karger AG, Basel.

  17. Simulation-based partial volume correction for dopaminergic PET imaging. Impact of segmentation accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Rong, Ye; Winz, Oliver H. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Vernaleken, Ingo [University Hospital Aachen (Germany). Dept. of Psychiatry, Psychotherapy and Psychosomatics; Goedicke, Andreas [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; High Tech Campus, Philips Research Lab., Eindhoven (Netherlands); Mottaghy, Felix M. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Maastricht University Medical Center (Netherlands). Dept. of Nuclear Medicine; Rota Kops, Elena [Forschungszentrum Juelich (Germany). Inst. of Neuroscience and Medicine-4]

    2015-07-01

    Partial volume correction (PVC) is an essential step for quantitative positron emission tomography (PET). In the present study, PVELab, a freely available software package, is evaluated for PVC in ¹⁸F-FDOPA brain PET, with a special focus on the accuracy degradation introduced by various MR-based segmentation approaches. Methods: Four PVC algorithms (M-PVC, MG-PVC, mMG-PVC, and R-PVC) were analyzed on simulated ¹⁸F-FDOPA brain PET images. MR image segmentation was carried out using the FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping) packages, including an additional adaptation for subcortical regions (SPM_L). Different PVC and segmentation combinations were compared with respect to deviations in regional activity values and time-activity curves (TACs) of the occipital cortex (OCC), caudate nucleus (CN), and putamen (PUT). Additionally, the PVC impact on the determination of the influx constant (K_i) was assessed. Results: The main differences between the tissue maps returned by the three segmentation algorithms were found in the subcortical region, especially at PUT. The average misclassification error in combination with volume reduction was found to be lowest for SPM_L (PUT < 30%) and highest for FSL (PUT > 70%). Accurate recovery of activity data at OCC is achieved by M-PVC (apparent recovery coefficient varies between 0.99 and 1.10). The other three evaluated PVC algorithms proved more suitable for subcortical regions, with MG-PVC and mMG-PVC being less prone to the largest tissue misclassification error simulated in this study. Except for M-PVC, quantification accuracy of K_i for CN and PUT was clearly improved by PVC. Conclusions: The regional activity value of PUT was appreciably overcorrected by most of the PVC approaches employing FSL or SPM segmentation, revealing the importance of accurate MR image segmentation for the presented PVC framework. The selection of a PVC approach should be adapted to the anatomical

  18. Forecasting method in multilateration accuracy based on laser tracker measurement

    International Nuclear Information System (INIS)

    Aguado, Sergio; Santolaria, Jorge; Samper, David; José Aguilar, Juan

    2017-01-01

    Multilateration based on a laser tracker (LT) requires the measurement of a set of points from three or more positions. Although the LT's angular information is not used, multilateration produces a volume of measurement uncertainty. This paper presents two new coefficients for determining, before the necessary measurements are performed, whether measuring a set of points will improve or worsen the accuracy of the multilateration results, avoiding unnecessary measurement and reducing the time and economic cost required. The first, the specific measurement coefficient (MC_LT), is unique to each laser tracker; it captures the relationship between the radial and angular laser tracker measurement noise. The second coefficient, β, is related to the specific measurement conditions: the spatial angle α between the laser tracker positions and its effect on error reduction. Both parameters, MC_LT and β, are linked in the error reduction limits. Besides these, a new methodology is presented for determining the multilateration reduction limit for both an ideal laser tracker distribution and a random one. It provides general rules and advice from synthetic tests that are validated through a real test carried out on a coordinate measuring machine. (paper)
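Multilateration itself, i.e. recovering a point from range-only measurements taken at several tracker positions, can be sketched as a linearized least-squares solve. The station layout and target below are invented for illustration; the paper's MC_LT and β coefficients address when such a measurement set is worth taking, not how it is solved:

```python
import numpy as np

def multilaterate(stations, ranges):
    """Range-only multilateration: linearise |x - p_i|^2 = r_i^2 against the
    first station (subtracting equations cancels the quadratic term) and
    solve the resulting linear system in a least-squares sense."""
    p = np.asarray(stations, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + (p[1:] ** 2).sum(1) - (p[0] ** 2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four tracker positions and exact ranges to the point (1, 2, 3).
P = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
true = np.array([1.0, 2.0, 3.0])
R = [np.linalg.norm(true - np.array(s)) for s in P]
print(np.round(multilaterate(P, R), 6))  # [1. 2. 3.]
```

With noisy ranges the same least-squares solve returns the point minimizing the residual, which is where the geometry of the station layout (the paper's α) starts to matter.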

  19. Improving accuracy of breast cancer biomarker testing in India

    Directory of Open Access Journals (Sweden)

    Tanuja Shet

    2017-01-01

    Full Text Available There is a global mandate, even in countries with low resources, to improve the accuracy of testing biomarkers in breast cancer, viz. oestrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2neu), given their critical impact in the management of patients. The steps taken include compulsory participation in an external quality assurance (EQA) programme, centralized testing, and regular performance audits for laboratories. This review addresses the status of ER/PR and HER2neu testing in India and possible reasons for the delay in development of guidelines and mandate for testing in the country. The chief cause of erroneous ER and PR testing in India continues to be easily correctable issues such as fixation and antigen retrieval, while for HER2neu testing, it is the use of low-cost non-validated antibodies and interpretative errors. These deficiencies can, however, be rectified by (i) distributing the accountability and responsibility to surgeons and oncologists, (ii) certification of centres for testing in oncology, and (iii) initiation of a national EQA system (EQAS) programme that will help with economical solutions, identify the centres of excellence, and instill a system for reprimand of poorly performing laboratories.

  20. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S

    2011-01-01

    Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared with conventional classification trees. Minor to modest improvements in sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data mining literature. PMID:22254181
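The boosting loop described above (reweight the misclassified subjects, refit, combine by weighted vote) can be sketched with decision stumps as the base learner. This is generic discrete AdaBoost on an invented toy problem, not the authors' exact configuration:

```python
import numpy as np

def boost_stumps(X, y, rounds=3):
    """Discrete AdaBoost: each round fits the decision stump with the lowest
    weighted error, then upweights the misclassified subjects so the next
    stump concentrates on them.  Labels y must be in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):                  # candidate feature
            for t in np.unique(X[:, j]):             # candidate threshold
                for s in (1, -1):                    # candidate polarity
                    pred = np.where(X[:, j] <= t, s, -s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)        # guard the log
        alpha = 0.5 * np.log((1 - err) / err)        # this stump's vote weight
        pred = np.where(X[:, j] <= t, s, -s)
        w *= np.exp(-alpha * y * pred)               # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the boosted stumps."""
    return np.sign(sum(a * np.where(X[:, j] <= t, s, -s)
                       for a, j, t, s in ensemble))

# An interval pattern no single stump can classify; three boosted stumps can.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, 1, 1, -1])
print(predict(boost_stumps(X, y), X))  # [-1.  1.  1. -1.]
```

The study's finding is essentially that, on its cardiovascular data, extra rounds of this loop bought little beyond what a single well-grown tree already achieved.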

  1. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, gear setting error is the main factor influencing form grinding accuracy; we propose an effective method to improve form grinding accuracy that corrects the error by controlling the machine operations. Based on establishing the geometry model of form grinding and representing the gear setting errors as homogeneous coordinates, a tooth mathematical model was obtained and simplified under the gear setting error. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationship was investigated by changing the gear setting errors with respect to tooth profile deviation, helix deviation, and cumulative pitch deviation, respectively, under the conditions of gear eccentricity error, gear inclination error, and gear resultant error. An error compensation method was proposed based on solving the sensitivity coefficient matrix of the setting error in a five-axis CNC form grinding machine; simulation and experimental results demonstrated that the method can effectively correct the gear setting error and further improve form grinding accuracy.
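A sensitivity-matrix compensation of the kind described resolves to a least-squares solve: if measured deviations respond approximately linearly to small axis corrections through a sensitivity coefficient matrix J, the compensating corrections are those that cancel the deviations. The matrix and deviation values below are illustrative, not machine data:

```python
import numpy as np

def compensate(J, error):
    """Least-squares compensation: with deviations ~ J @ dq for small axis
    corrections dq, solve for the dq that best cancels the measured
    deviation vector (overdetermined systems leave a small residual)."""
    dq, *_ = np.linalg.lstsq(J, -np.asarray(error, float), rcond=None)
    return dq

# 3 measured deviations (profile, helix, pitch) vs. 2 controllable offsets.
J = np.array([[1.0, 0.2],
              [0.1, 1.0],
              [0.3, 0.4]])
e = [0.012, 0.008, 0.007]            # mm, hypothetical measured deviations
dq = compensate(J, e)
residual = J @ dq + e                # deviations left after compensation
print(np.abs(residual).max() < np.abs(np.array(e)).max())  # True
```

With more axes than deviation components the system becomes underdetermined instead, and the pseudoinverse picks the minimum-norm correction.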

  2. A Smartphone-Based Application Improves the Accuracy, Completeness, and Timeliness of Cattle Disease Reporting and Surveillance in Ethiopia

    Directory of Open Access Journals (Sweden)

    Tariku Jibat Beyene

    2018-01-01

    Full Text Available Accurate disease reporting, ideally in near real time, is a prerequisite to detecting disease outbreaks and implementing appropriate measures for their control. This study compared the performance of the traditional paper-based approach to animal disease reporting in Ethiopia with one using an application running on smartphones. In the traditional approach, the total number of cases for each disease or syndrome was aggregated by animal species and reported to each administrative level at monthly intervals; with the smartphone application, demographic information and a detailed list of presenting signs, in addition to the putative disease diagnosis, were immediately available to all administrative levels via a Cloud-based server. While the smartphone-based approach resulted in much more timely reporting, there were delays due to limited connectivity; these ranged on average from 2 days (in well-connected areas) up to 13 days (in more rural locations). We outline the challenges that would likely be associated with any widespread rollout of a smartphone-based approach such as the one described in this study, but demonstrate that in the long run the approach offers significant benefits in terms of timeliness of disease reporting, improved data integrity, and greatly improved animal disease surveillance.

  3. Potential of image-guidance, gating and real-time tracking to improve accuracy in pulmonary stereotactic body radiotherapy

    International Nuclear Information System (INIS)

    Guckenberger, Matthias; Krieger, Thomas; Richter, Anne; Baier, Kurt; Wilbert, Juergen; Sweeney, Reinhart A.; Flentje, Michael

    2009-01-01

    Purpose: To evaluate the potential of image-guidance, gating and real-time tumor tracking to improve accuracy in pulmonary stereotactic body radiotherapy (SBRT). Materials and methods: Safety margins for compensation of inter- and intra-fractional uncertainties of the target position were calculated based on SBRT treatments of 43 patients with pre- and post-treatment cone-beam CT imaging. Safety margins for compensation of breathing motion were evaluated for 17 pulmonary tumors using respiratory correlated CT, model-based segmentation of 4D-CT images and voxel-based dose accumulation; the target in the mid-ventilation position was the reference. Results: Because of large inter-fractional base-line shifts of the tumor, stereotactic patient positioning and image-guidance based on the bony anatomy required safety margins of 12 mm and 9 mm, respectively. Four-dimensional image-guidance targeting the tumor itself and intra-fractional tumor tracking reduced margins to <5 mm and <3 mm, respectively. Additional safety margins are required to compensate for breathing motion. A quadratic relationship between tumor motion and margins for motion compensation was observed: safety margins of 2.4 mm and 6 mm were calculated for compensation of 10 mm and 20 mm motion amplitudes in cranio-caudal direction, respectively. Conclusion: Four-dimensional image-guidance with pre-treatment verification of the target position and online correction of errors reduced safety margins most effectively in pulmonary SBRT.

  4. Accuracy of subcutaneous continuous glucose monitoring in critically ill adults: improved sensor performance with enhanced calibrations.

    Science.gov (United States)

    Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman

    2014-02-01

    Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m²; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration at median (interquartile range) intervals of 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001) and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements.
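The headline statistic above, the absolute relative deviation of each CGM-reference pair summarized by its median, is straightforward to compute. A sketch with invented glucose pairs (standard library only):

```python
def ard(cgm, ref):
    """Absolute relative deviation (%) of each sensor reading against its
    paired reference glucose value."""
    return [abs(c - r) / r * 100.0 for c, r in zip(cgm, ref)]

def median(xs):
    """Median of a list; mean of the two middle values for even lengths."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

ref = [5.0, 8.0, 10.0, 12.0]   # reference (arterial) glucose, mmol/L
cgm = [5.5, 7.4, 10.8, 12.6]   # paired sensor readings
print(round(median(ard(cgm, ref)), 2))  # 7.75
```

The study reports this per-pair metric as a median with interquartile range because glucose errors are typically skewed; the Clarke grid analysis then adds the clinical-risk dimension that a pure percentage hides.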

  5. Improved personalized recommendation based on a similarity network

    Science.gov (United States)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find the preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference or heat conduction. However, most of these algorithms define the resource-allocation methods for an average allocation. That is not reasonable because average allocation cannot indicate the user choice preference and the influence between users which leads to a series of non-personalized recommendation results. We propose a personalized recommendation approach that combines the similarity function and bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system and states that the user influence can make the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms for three benchmark data sets. Experimental results show that the improved recommendation on a similarity network can obtain better accuracy and diversity than some competing approaches.
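The bipartite resource-allocation baseline this work builds on can be sketched with the standard mass-diffusion/heat-conduction hybrid of Zhou et al. (2010), which the abstract cites as its starting point. The tiny user-item matrix below is invented; the paper's contribution, replacing the average allocation with a similarity network, is not reproduced here:

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """Item-item resource-allocation matrix for the mass-diffusion (ProbS) /
    heat-conduction (HeatS) hybrid on a user-item matrix A (users x items):
    W[a, b] = (sum_u A[u, a] * A[u, b] / k_user(u))
              / (k_item(a) ** (1 - lam) * k_item(b) ** lam).
    lam = 1 is pure diffusion (accuracy), lam = 0 pure heat (diversity)."""
    ku = A.sum(1)                          # user degrees
    ki = A.sum(0)                          # item degrees
    W = (A / ku[:, None]).T @ A            # sum over users of a_ua a_ub / k_u
    return W / (ki[None, :] ** lam) / (ki[:, None] ** (1 - lam))

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], float)
scores = hybrid_scores(A, lam=0.5) @ A[0]  # allocate resource from user 0's items
scores[A[0] > 0] = 0                       # mask items already collected
print(int(scores.argmax()))                # 2
```

Item 2 wins for user 0 because it co-occurs with both of the user's collected items; the average allocation inside `hybrid_scores` is exactly the step the reviewed paper personalizes via user influence.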

  6. Increasing imputation and prediction accuracy for Chinese Holsteins using joint Chinese-Nordic reference population

    DEFF Research Database (Denmark)

    Ma, Peipei; Lund, Mogens Sandø; Ding, X

    2015-01-01

    This study investigated the effect of including Nordic Holsteins in the reference population on the imputation accuracy and prediction accuracy for Chinese Holsteins. The data used in this study include 85 Chinese Holstein bulls genotyped with both 54K chip and 777K (HD) chip, 2862 Chinese cows...... was improved slightly when using the marker data imputed based on the combined HD reference data, compared with using the marker data imputed based on the Chinese HD reference data only. On the other hand, when using the combined reference population including 4398 Nordic Holstein bulls, the accuracy...... to increase reference population rather than increasing marker density...

  7. Improved quantification accuracy for duplex real-time PCR detection of genetically modified soybean and maize in heat processed foods

    Directory of Open Access Journals (Sweden)

    CHENG Fang

    2013-04-01

    Full Text Available The real-time PCR technique has been widely used in quantitative GMO detection in recent years. The accuracy of GMO quantification based on real-time PCR methods is still a difficult problem, especially for highly processed samples. To develop a suitable and accurate real-time PCR system for highly processed GM samples, we made improvements to several real-time PCR parameters, including a re-designed shorter target DNA fragment, similar lengths of the amplified endogenous and exogenous gene targets, and similar GC contents and melting temperatures of the PCR primers and TaqMan probes. In addition, a Heat-Treatment Processing Model (HTPM) was established using soybean flour samples containing GM soybean GTS 40-3-2 to validate the effectiveness of the improved real-time PCR system. Test results showed that the quantitative bias of GM content in heat-processed samples was lowered using the new PCR system. The improved duplex real-time PCR was further validated using processed foods derived from GM soybean, and more accurate GM content values in these foods were also achieved. These results demonstrate that the improved duplex real-time PCR is well suited to the quantitative detection of highly processed food products.
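Quantitative real-time PCR of this kind reports GM content as the copy-number ratio of the event-specific target to the endogenous reference gene, each back-calculated from its Ct value via a standard curve. A sketch with illustrative curve parameters (a slope of -3.32 corresponds to 100% amplification efficiency; the Ct values and curve are invented, not the paper's data):

```python
def copies_from_ct(ct, slope, intercept):
    """Back-calculate template copies from a Ct value using the standard
    curve Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def gm_content(ct_event, ct_endog, slope=-3.32, intercept=40.0):
    """GM content as the copy-number ratio of the event-specific target to
    the endogenous reference gene (here both read off the same illustrative
    curve; in practice each target has its own calibration)."""
    return (copies_from_ct(ct_event, slope, intercept)
            / copies_from_ct(ct_endog, slope, intercept))

# A 3.32-cycle Ct gap at 100% efficiency is one log10, i.e. about 10% GM.
print(round(gm_content(ct_event=30.0, ct_endog=26.68) * 100, 1))  # 10.0
```

Heat processing degrades long amplicons faster than short ones, which skews this ratio; matching the lengths of the endogenous and exogenous targets, as the paper does, keeps the degradation bias from landing on only one side of the fraction.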

  8. Improved hybrid information filtering based on limited time window

    Science.gov (United States)

    Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo

    2014-12-01

    Adopting users' entire collected information, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was successfully proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective at capturing users' potential interests, we present an improved hybrid information filtering that adopts only partial, recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links proven by the testing set. The experimental results on the benchmark dataset Netflix indicate that by only using approximately 31% of recent rating records, the accuracy could be improved by an average of 4.22% and the diversity by 13.74%. In addition, the performance on the dataset MovieLens could be preserved by considering approximately 60% of recent records. Furthermore, we find that the improved algorithm is effective in solving the cold-start problem. This work could improve information filtering performance and shorten the computational time.

  9. Understanding the delayed-keyword effect on metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  10. METACOGNITIVE SCAFFOLDS IMPROVE SELF-JUDGMENTS OF ACCURACY IN A MEDICAL INTELLIGENT TUTORING SYSTEM.

    Science.gov (United States)

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S

    2014-03-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one ( N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were selected from the domain of Nodular and Diffuse Dermatitides. Both groups worked with a version of SlideTutor that provided immediate feedback on their actions for two hours before proceeding to solve cases in either the Considering Alternatives or Playback condition. No immediate feedback was provided on actions performed by participants in the scaffolding mode. Measurements included learning gains (pre-test and post-test), as well as metacognitive performance, including Goodman-Kruskal Gamma correlation, bias, and discrimination. Results showed that participants in both conditions improved significantly in terms of their diagnostic scores from pre-test to post-test. More importantly, participants in the Considering Alternatives condition outperformed those in the Playback condition in the accuracy of their confidence judgments and the discrimination of the correctness of their assertions while solving cases. The results suggested that presenting participants with their diagnostic decision paths and highlighting correct and incorrect paths helps them to become more metacognitively accurate in their confidence judgments.

  11. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    OpenAIRE

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spec...

  12. Simulation-based Mastery Learning Improves Cardiac Auscultation Skills in Medical Students

    Science.gov (United States)

    McGaghie, William C.; Cohen, Elaine R.; Kaye, Marsha; Wayne, Diane B.

    2010-01-01

    Background Cardiac auscultation is a core clinical skill. However, prior studies show that trainee skills are often deficient and that clinical experience is not a proxy for competence. Objective To describe a mastery model of cardiac auscultation education and evaluate its effectiveness in improving bedside cardiac auscultation skills. Design Untreated control group design with pretest and posttest. Participants Third-year students who received a cardiac auscultation curriculum and fourth-year students who did not. Intervention A cardiac auscultation curriculum consisting of a computer tutorial and a cardiac patient simulator. All third-year students were required to meet or exceed a minimum passing score (MPS) set by an expert panel at posttest. Measurements Diagnostic accuracy with simulated heart sounds and actual patients. Results Trained third-year students (n = 77) demonstrated significantly higher cardiac auscultation accuracy compared to untrained fourth-year students (n = 31) in assessment of simulated heart sounds (93.8% vs. 73.9%, p < 0.001). Conclusions A cardiac auscultation curriculum consisting of deliberate practice with a computer-based tutorial and a cardiac patient simulator resulted in improved assessment of simulated heart sounds and more accurate examination of actual patients. PMID:20339952

  13. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    Science.gov (United States)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique for UAV power line inspection that strikes an excellent balance between cost, matching accuracy and real-time performance. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using a Spartan-6 FPGA. In comparative experiments, the system using the improved algorithms outperformed a system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
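A local stereo matcher of the kind improved on here can be sketched as winner-take-all block matching with a sum-of-absolute-differences cost. This is the unweighted baseline; the paper's contribution is a weighted cost and its FPGA mapping, neither of which is reproduced in this sketch:

```python
import numpy as np

def disparity_sad(left, right, max_disp=4, win=1):
    """Winner-take-all local stereo matching: for each left-image pixel,
    slide a (2*win+1)^2 window over candidate disparities in the right
    image and keep the disparity with the lowest SAD cost."""
    h, w = left.shape
    L = np.pad(left.astype(float), win, mode="edge")
    R = np.pad(right.astype(float), win, mode="edge")
    disp = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            best, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                a = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
                b = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = np.abs(a - b).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# A vertical edge shifted by 2 px between the right and left images.
right = np.zeros((5, 8)); right[:, 3:] = 1.0
left  = np.zeros((5, 8)); left[:, 5:] = 1.0
print(disparity_sad(left, right)[2, 5])  # 2
```

Plain SAD is ambiguous in textureless regions (every disparity costs the same), which is one motivation for the weighted matching cost the paper adopts.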

  14. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
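The proposed measure can be sketched directly from its definition: bound each forecast error relative to a benchmark forecast's error, average the bounded ratios, then unscale. The series below are invented; the tie convention when both errors are zero is an assumption of this sketch:

```python
def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (Chen et al., 2017):
    each bounded relative error |e| / (|e| + |e*|) lies in [0, 1], so the
    mean resists outliers; unscaling by m / (1 - m) restores an
    interpretable scale where UMBRAE < 1 beats the benchmark."""
    brae = []
    for a, f, b in zip(actual, forecast, benchmark):
        e, e_star = abs(a - f), abs(a - b)
        brae.append(e / (e + e_star) if e + e_star else 0.5)  # tie convention
    m = sum(brae) / len(brae)
    return m / (1.0 - m)

actual    = [10.0, 12.0, 9.0, 11.0]
naive     = [9.0, 10.0, 12.0, 9.0]    # user-selected benchmark forecast
candidate = [10.5, 11.0, 10.0, 10.0]  # forecast under evaluation
print(round(umbrae(actual, candidate, naive), 3))  # 0.455
```

A value of about 0.45 here reads as "the candidate's errors are roughly 0.45 times the benchmark's", the interpretability the unscaling step is designed to provide.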

  15. Electro-encephalogram based brain-computer interface: improved performance by mental practice and concentration skills.

    Science.gov (United States)

    Mahmoudi, Babak; Erfanian, Abbas

    2006-11-01

    Mental imagination is an essential part of most EEG-based communication systems. Thus, the quality of mental rehearsal, the degree of imagined effort, and mind controllability should have a major effect on the performance of an electro-encephalogram (EEG) based brain-computer interface (BCI). It is now well established that mental practice using motor imagery improves motor skills, and that its effects on motor skill learning result from practice in central motor programming. On this view, it seems logical that mental practice should modify the neuronal activity in the primary sensorimotor areas and consequently change the performance of an EEG-based BCI. For developing a practical BCI system, recognizing both the resting state with eyes opened and the imagined voluntary movement is important. For this purpose, the mind should be able to focus on a single goal for a period of time, without deviating to another context. In this work, we examine the role of mental practice and concentration skills in EEG control during imagined hand movements. The results show that mental practice and concentration can generally improve the classification accuracy of the EEG patterns. Mental training is found to have a significant effect on the classification accuracy over the primary motor cortex and frontal area.

  16. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.
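The truncated snippet ends just before the WER formula; the standard computation it refers to is the word-level Levenshtein (edit) distance divided by the reference length, sketched here (this is the conventional definition, not the paper's own code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    via dynamic-programming Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, dropping one word from a six-word reference gives a WER of 1/6.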

  17. Exploring the genetic architecture and improving genomic prediction accuracy for mastitis and milk production traits in dairy cattle by mapping variants to hepatic transcriptomic regions responsive to intra-mammary infection.

    Science.gov (United States)

    Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter

    2017-05-12

    A better understanding of the genetic architecture of complex traits can help improve genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test was correlated with changes in prediction accuracy of GFBLUP (P layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set

  18. A simple and efficient methodology to improve geometric accuracy in gamma knife radiation surgery: implementation in multiple brain metastases.

    Science.gov (United States)

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, "average" image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    OpenAIRE

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional com...

  20. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) method, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed and the optimal power e for determining the weight elements is studied. The results show that the improved WLS-AR model can effectively improve ERP prediction accuracy, and that different weighting schemes should be chosen for different ERP prediction intervals.
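The WLS-AR idea can be sketched as a two-stage predictor: a weighted least-squares fit of the deterministic part, whose residuals are then forecast with an AR model. The power-law recency weighting and the AR(1) order below are illustrative assumptions; the paper studies four weighting schemes and the optimal power e, and real ERP series also contain periodic terms omitted here.

```python
def wls_line(t, y, e=1.0):
    """Weighted least-squares fit of y = a + b*t with recency weights
    w_i = ((i+1)/n)**e; the exponent e mirrors the tunable power
    studied in the paper (this particular scheme is illustrative)."""
    n = len(t)
    w = [((i + 1) / n) ** e for i in range(n)]
    sw = sum(w)
    st = sum(wi * ti for wi, ti in zip(w, t))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    stt = sum(wi * ti * ti for wi, ti in zip(w, t))
    sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    b = (sw * sty - st * sy) / (sw * stt - st * st)
    a = (sy - b * st) / sw
    return a, b

def ar1_coeff(res):
    """Least-squares AR(1) coefficient of the fit residuals."""
    den = sum(r * r for r in res[:-1])
    if den == 0.0:
        return 0.0
    return sum(res[i] * res[i - 1] for i in range(1, len(res))) / den

def predict_next(t, y, e=1.0):
    """One-step-ahead prediction: WLS trend plus AR(1) residual term."""
    a, b = wls_line(t, y, e)
    res = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    return a + b * (t[-1] + 1) + ar1_coeff(res) * res[-1]
```

On purely linear data the residual term vanishes and the predictor extrapolates the trend exactly.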

  1. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution over the whole workspace was discussed, and the maximum-error position of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position and the accuracy synthesis was conducted using the Monte Carlo method. Taking the error-sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability that the maximal volume error is less than 0.05 mm improved from 0.6592 for the old scheme to 0.7021 for the new one; the precision of the system was thus improved appreciably. The model can be used for the error analysis and accuracy synthesis of complex multi-branch kinematic-chain systems, and to improve manufacturing precision.
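The Monte Carlo accuracy-synthesis step can be sketched as: sample each error parameter within its tolerance, propagate through the error model, and estimate the probability that the volume error stays below the 0.05 mm target. The additive error chain and the tolerance values below are stand-ins for illustration, not the machine's actual spatial error model.

```python
import random

def monte_carlo_yield(tolerances, threshold_mm=0.05, n=20000, seed=1):
    """Estimate P(|volume error| < threshold) by sampling every error
    parameter uniformly within its tolerance band (in mm) and summing
    the contributions through a linear stand-in error model."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        err = sum(rng.uniform(-tol, tol) for tol in tolerances)
        if abs(err) < threshold_mm:
            hits += 1
    return hits / n
```

Tightening the allocated part tolerances raises the estimated probability, mirroring the paper's improvement from 0.6592 to 0.7021 between schemes.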

  2. Technique for Increasing Accuracy of Positioning System of Machine Tools

    Directory of Open Access Journals (Sweden)

    Sh. Ji

    2014-01-01

    Full Text Available The aim of this research is to improve the accuracy of the positioning and processing system using a technique for optimizing the pressure diagrams of the guides in machine tools. Machining quality is directly related to machine accuracy, which characterizes how strongly various machine errors affect the part. The accuracy of the positioning system is one of the most significant machining characteristics and allows the accuracy of processed parts to be evaluated. The literature indicates that the working area of the machine layout is quite informative in characterizing the effect of the positioning system on the macro-geometry of the part surfaces to be processed. To enhance the static accuracy of the studied machine, two groups of measures are possible in principle. One aims at decreasing the cutting-force component that produces the overturning moments on the slider. The other is related to changing the sizes of the guide facets, which may change their pressure profile. The study was based on mathematical modeling and optimization of the cutting-zone coordinates, and a formula was derived to determine the surface pressure of the guides. The selected optimization parameters are the cutting-force vector and the dimensions of the slides and guides. The results show that optimizing the coordinates of the cutting zone is necessary to increase processing accuracy. The research established that, to define the optimal coordinates of the cutting zone, the sizes of the slides and the magnitude and coordinates of the applied forces must be changed so as to equalize the pressure and improve the accuracy of the machine-tool positioning system. A force vector is applied at different points of the workspace, pressure diagrams accounting for changes in the positioning-system parameters are found, and the pressure diagrams are equalized to achieve the highest machine-tool accuracy.

  3. Lessons learned from accuracy assessment of IAEA-SPE-4 experiment predictions

    International Nuclear Information System (INIS)

    Prosek, A.

    2002-01-01

    The use of methods for code accuracy assessment has increased strongly in recent years. Methods suitable for quantitative comparison between thermalhydraulic code predictions and experimental measurements have been proposed, e.g. the fast Fourier transform based method (FFTBM), the stochastic approximation ratio based method (SARBM) and a few methods used within the recently developed automated code assessment program (ACAP). Further, within the FFTBM framework, a procedure was also proposed to quantify the whole calculation by averaging the results. The problem is that, when only quantitative results are published, averaging may hide discrepancies highlighted in the qualitative analysis. The purpose of this study was therefore to propose additional accuracy measures. The newly proposed measures were tested with IAEA-SPE-4 pre- and post-test predictions. The obtained results showed that the proposed measures improve the overall picture of code accuracy. This is important when the reader is not provided with the accompanying qualitative analysis. The study shows that the proposed accuracy measures efficiently increase confidence in the quantitative results. (author)
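The FFTBM mentioned above quantifies accuracy in the frequency domain; its central figure of merit, the average amplitude (AA), is commonly defined as the summed spectral magnitude of the error signal normalised by that of the experimental signal. A sketch using a naive DFT (the real FFTBM uses an FFT, adds a weighted-frequency measure, and averages over many variables):

```python
import cmath

def dft_mags(x):
    """Magnitudes of the naive DFT of a real sequence (FFT in practice)."""
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n)))
            for j in range(n)]

def average_amplitude(experimental, calculated):
    """FFTBM average amplitude: AA = sum|DFT(error)| / sum|DFT(exp)|.
    AA = 0 means perfect agreement; larger AA means lower accuracy."""
    error = [c - e for c, e in zip(calculated, experimental)]
    return sum(dft_mags(error)) / sum(dft_mags(experimental))
```

By linearity of the DFT, a calculation that is a uniform 10% overprediction of the experiment yields AA = 0.1.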

  4. EpCAM-based flow cytometry in cerebrospinal fluid greatly improves diagnostic accuracy of leptomeningeal metastases from epithelial tumors

    NARCIS (Netherlands)

    Milojkovic Kerklaan, B.; Pluim, Dick; Bol, Mijke; Hofland, Ingrid; Westerga, Johan; van Tinteren, Harm; Beijnen, Jos H; Boogerd, Willem; Schellens, Jan H M; Brandsma, Dieta

    BACKGROUND: Moderate diagnostic accuracy of MRI and initial cerebrospinal fluid (CSF) cytology analysis results in at least 10%-15% false negative diagnoses of leptomeningeal metastases (LM) of solid tumors, thus postponing start of therapy. The aim of this prospective clinical study was to

  5. Intellijoint HIP®: a 3D mini-optical navigation tool for improving intraoperative accuracy during total hip arthroplasty

    Directory of Open Access Journals (Sweden)

    Paprosky WG

    2016-11-01

    Full Text Available Wayne G Paprosky,1,2 Jeffrey M Muir3 1Department of Orthopedics, Section of Adult Joint Reconstruction, Department of Orthopedics, Rush University Medical Center, Rush–Presbyterian–St Luke’s Medical Center, Chicago, 2Central DuPage Hospital, Winfield, IL, USA; 3Intellijoint Surgical, Inc, Waterloo, ON, Canada Abstract: Total hip arthroplasty is an increasingly common procedure used to address degenerative changes in the hip joint due to osteoarthritis. Although generally associated with good results, among the challenges associated with hip arthroplasty are accurate measurement of biomechanical parameters such as leg length, offset, and cup position, discrepancies of which can lead to significant long-term consequences such as pain, instability, neurological deficits, dislocation, and revision surgery, as well as patient dissatisfaction and, increasingly, litigation. Current methods of managing these parameters are limited, with manual methods such as outriggers or calipers being used to monitor leg length; however, these are susceptible to small intraoperative changes in patient position and are therefore inaccurate. Computer-assisted navigation, while offering improved accuracy, is expensive and cumbersome, in addition to adding significantly to procedural time. To address the technological gap in hip arthroplasty, a new intraoperative navigation tool (Intellijoint HIP® has been developed. This innovative, 3D mini-optical navigation tool provides real-time, intraoperative data on leg length, offset, and cup position and allows for improved accuracy and precision in component selection and alignment. Benchtop and simulated clinical use testing have demonstrated excellent accuracy, with the navigation tool able to measure leg length and offset to within <1 mm and cup position to within <1° in both anteversion and inclination. This study describes the indications, procedural technique, and early accuracy results of the Intellijoint HIP

  6. State of charge estimation of lithium-ion batteries based on an improved parameter identification method

    International Nuclear Information System (INIS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Wang, Mingwang; Sun, Wei; Xu, Zhihui

    2015-01-01

    The SOC (state of charge) is the most important index of a battery management system. However, it cannot be measured directly with sensors and must be estimated with mathematical techniques. An accurate battery model is crucial to estimating the SOC exactly. In order to improve model accuracy, this paper presents an improved parameter identification method. Firstly, the concept of polarization depth is proposed based on an analysis of the polarization characteristics of lithium-ion batteries. Then, the nonlinear least square technique is applied to determine the model parameters from data collected in pulsed discharge experiments. The results show that the proposed method can reduce the model error compared with the conventional approach. Furthermore, a nonlinear observer presented in previous work is utilized to verify the validity of the proposed parameter identification method in SOC estimation. Finally, experiments with different levels of discharge current are carried out to investigate the influence of polarization depth on SOC estimation. Experimental results show that the proposed method can improve SOC estimation accuracy compared with the conventional approach, especially under large discharge currents. - Highlights: • The polarization characteristics of lithium-ion batteries are analyzed. • The concept of polarization depth is proposed to improve model accuracy. • A nonlinear least square technique is applied to determine the model parameters. • A nonlinear observer is used as the SOC estimation algorithm. • The validity of the proposed method is verified by experimental results.
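The parameter-identification step can be illustrated on the voltage relaxation after a discharge pulse. The single-RC (Thevenin-type) voltage model and the brute-force least-squares search below are illustrative stand-ins for the paper's battery model and nonlinear least-squares solver; the variable names are ours.

```python
import math

def fit_rc_relaxation(t, v, ocv):
    """Fit v(t) = ocv - a*exp(-t/tau) to post-pulse relaxation data by
    brute-force least squares over (a, tau): a stand-in for a proper
    nonlinear least-squares identification, where a plays the role of
    the polarization depth and tau the RC time constant."""
    best_a, best_tau, best_sse = None, None, float("inf")
    for a in [i * 0.01 for i in range(1, 41)]:        # 0.01 .. 0.40 V
        for tau in [j * 0.5 for j in range(2, 61)]:   # 1.0 .. 30.0 s
            sse = sum((vi - (ocv - a * math.exp(-ti / tau))) ** 2
                      for ti, vi in zip(t, v))
            if sse < best_sse:
                best_a, best_tau, best_sse = a, tau, sse
    return best_a, best_tau
```

On clean synthetic data generated with a = 0.2 V and tau = 10 s, the search recovers both parameters.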

  7. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. To do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement achieved by the Language Models has also been explored. Finally, hierarchical Language Models were also successfully employed in a language understanding task, as shown in an additional series of experiments.

  8. PCA3 and PCA3-Based Nomograms Improve Diagnostic Accuracy in Patients Undergoing First Prostate Biopsy

    Directory of Open Access Journals (Sweden)

    Virginie Vlaeminck-Guillem

    2013-08-01

    Full Text Available While now recognized as an aid to predicting repeat prostate biopsy outcome, the urinary PCA3 (prostate cancer gene 3) test has also recently been advocated for predicting initial biopsy results. The objective is to evaluate the performance of the PCA3 test in predicting the results of initial prostate biopsies and to determine whether its incorporation into specific nomograms reinforces its diagnostic value. A prospective study included 601 consecutive patients referred for initial prostate biopsy. The PCA3 test was performed before a ≥12-core initial prostate biopsy, along with standard risk factor assessment. The diagnostic performance of the PCA3 test was evaluated. The three available nomograms (Hansen's and Chun's nomograms, as well as the updated Prostate Cancer Prevention Trial risk calculator, PCPT) were applied to the cohort, and their predictive accuracies were assessed in terms of biopsy outcome: the presence of any prostate cancer (PCa) and high-grade prostate cancer (HGPCa). The PCA3 score provided significant predictive accuracy. While the PCPT risk calculator appeared less accurate, both Chun's and Hansen's nomograms provided good calibration and high net benefit on decision curve analyses. When applying nomogram-derived PCa probability thresholds ≤30%, ≤6% of HGPCa would have been missed, while avoiding up to 48% of unnecessary biopsies. The urinary PCA3 test and PCA3-incorporating nomograms can be considered reliable tools to aid in the initial biopsy decision.

  9. Improved mass-measurement accuracy using a PNB Load Cell Scale

    International Nuclear Information System (INIS)

    Suda, S.; Pontius, P.; Schoonover, R.

    1981-08-01

    The PNB Load Cell Scale is a Preloaded, Narrow-Band calibration mass comparator. It consists of (1) a frame and servo-mechanism that maintains a preload tension on the load cell until the load, an unknown mass, is sensed, and (2) a null-balance digital instrument that suppresses the cell response associated with the preload, thereby improving the precision and accuracy of the measurements. Ideally, the objects used to set the preload should be replica mass standards that closely approximate the density and mass of the unknowns. The advantages of the PNB scale are an expanded output signal over the range of interest, which increases both sensitivity and resolution, and minimized transient effects associated with the loading of load cells. An area of immediate and practical application of this technique to nuclear material safeguards is the weighing of UF6 cylinders, where in-house mass standards are currently available and where the mass values are typically assigned on the basis of comparison weighings. Several prototypical versions of the PNB scale have been assembled at the US National Bureau of Standards. A description of the instrumentation, principles of measurement, and applications is presented in this paper.

  10. Machine learning improves the accuracy of myocardial perfusion scintigraphy results

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Objective: Machine learning (ML), an artificial intelligence method, has in the last decade proved to be a useful tool in many fields of decision making, including some fields of medicine. According to reports, its decision accuracy usually exceeds that of human experts. Aim: To assess the applicability of ML to interpreting stress myocardial perfusion scintigraphy results in the coronary artery disease diagnostic process. Patients and methods: Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were reevaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were then digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. Since ML can handle an arbitrary number of attributes simultaneously, all available disease-related data (history, habitus, risk factors, stress results) were added, and the sensitivity, specificity and accuracy of scintigraphy were computed again. The results of both decision procedures were compared. Conclusion: Using the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could thus be an important tool for myocardial perfusion scintigraphy decision making.
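A naive Bayesian classifier of the kind named in the abstract, together with the sensitivity/specificity/accuracy computation, can be sketched as follows. The binary features, Laplace smoothing and toy data below are illustrative assumptions, not the 327-patient dataset or the exact program used in the study.

```python
def train_nb(samples, labels):
    """Naive Bayesian classifier over binary features with Laplace
    smoothing: per-class priors plus per-feature conditional counts."""
    counts = {c: labels.count(c) for c in set(labels)}
    n_feat = len(samples[0])
    ones = {c: [1] * n_feat for c in counts}          # Laplace prior
    for x, y in zip(samples, labels):
        for i, v in enumerate(x):
            ones[y][i] += v
    return counts, ones, len(labels)

def predict_nb(model, x):
    counts, ones, n = model
    best_c, best_p = None, -1.0
    for c, nc in counts.items():
        p = nc / n                                    # class prior
        for i, v in enumerate(x):
            p1 = ones[c][i] / (nc + 2)                # P(feature i = 1 | c)
            p *= p1 if v else (1.0 - p1)
        if p > best_p:
            best_c, best_p = c, p
    return best_c

def sens_spec_acc(pred, truth):
    """Sensitivity, specificity and accuracy against a gold standard
    (here playing the role of coronary angiography)."""
    tp = sum(p == t == 1 for p, t in zip(pred, truth))
    tn = sum(p == t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(truth)
```

The independence assumption is what lets the classifier absorb an arbitrary number of extra attributes (history, risk factors, stress results) at negligible cost.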

  11. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Christian Ginzler

    2012-05-01

    Full Text Available Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. To assess these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images, which can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning data (ALS). With GCP-enhanced RPCs, the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forest, and artificial areas. The study suggested the ADS80 DSM to best model actual surface height in all three land cover classes, with median errors < 1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling.

  12. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
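The classical analysis step that the gPC variant accelerates is sketched below in its stochastic, perturbed-observation form. In the proposed method, the forecast ensemble would be drawn cheaply from the gPC surrogate of the stochastic solution rather than from expensive forward-model runs; the update formula itself is unchanged.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, H, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_members) forecast ensemble. In a gPC-based
    variant, these members would be sampled from a polynomial-chaos
    surrogate at virtually no extra cost.
    """
    n_state, m = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # obs-space anomalies
    R = obs_std**2 * np.eye(H.shape[0])
    Pyy = HXp @ HXp.T / (m - 1) + R
    Pxy = X @ HXp.T / (m - 1)
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    obs_pert = obs[:, None] + obs_std * rng.standard_normal((H.shape[0], m))
    return ensemble + K @ (obs_pert - HX)
```

With a large ensemble and Gaussian statistics, the update reproduces the exact Kalman result: a unit-variance prior at 0 observed as 5.0 with noise std 0.5 yields a posterior mean near 4.0.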

  13. Singing Video Games May Help Improve Pitch-Matching Accuracy

    Science.gov (United States)

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  14. Microbiological data, but not procalcitonin, improve the accuracy of the clinical pulmonary infection score.

    Science.gov (United States)

    Jung, Boris; Embriaco, Nathalie; Roux, François; Forel, Jean-Marie; Demory, Didier; Allardet-Servent, Jérôme; Jaber, Samir; La Scola, Bernard; Papazian, Laurent

    2010-05-01

    Early and adequate treatment of ventilator-associated pneumonia (VAP) is mandatory to improve the outcome. The aim of this study was to evaluate, in medical ICU patients, the respective and combined impact of the Clinical Pulmonary Infection Score (CPIS), broncho-alveolar lavage (BAL) gram staining, endotracheal aspirate and a biomarker (procalcitonin) for the early diagnosis of VAP. This was a prospective, observational study in a medical intensive care unit of a teaching hospital. Over an 8-month period, we prospectively included 57 patients suspected of having 86 episodes of VAP. On the day of suspicion, a BAL as well as alveolar and serum procalcitonin determinations and an evaluation of CPIS were performed. Of the 86 BALs performed, 48 were considered positive (cutoff of 10^4 cfu ml^-1). We found no differences in alveolar or serum procalcitonin between VAP and non-VAP patients. Including procalcitonin in the CPIS score did not increase its accuracy (55%) for the diagnosis of VAP. The best tests to predict VAP were the modified CPIS (threshold at 6) combined with microbiological data. Indeed, both routinely twice-weekly endotracheal aspiration at a threshold of 10^5 cfu ml^-1 and BAL gram staining improved the pre-test diagnostic accuracy of VAP (77 and 66%, respectively). This study showed that alveolar procalcitonin performed by BAL does not help the clinician to identify VAP. It confirmed that serum procalcitonin is not an accurate marker of VAP. In contrast, microbiological resources available at the time of VAP suspicion (BAL gram staining, the last available endotracheal aspirate), combined or not with CPIS, are helpful in distinguishing VAP diagnosed by BAL from patients with a negative BAL.

  15. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    International Nuclear Information System (INIS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-01-01

    Due to the lever-arm effect and flexural deformation in the practical application of transfer alignment (TA), the TA performance is decreased. The existing polar TA algorithm only compensates a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm of the lever-arm effect and flexural deformation is proposed to promote the accuracy and speed of the polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to promote the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate the lever-arm effect and flexural deformation, and then improve the accuracy and speed of TA in the polar region. (paper)

  16. Fingerprint Identification Using SIFT-Based Minutia Descriptors and Improved All Descriptor-Pair Matching

    Directory of Open Access Journals (Sweden)

    Jiuqiang Han

    2013-03-01

    Full Text Available The performance of conventional minutiae-based fingerprint authentication algorithms degrades significantly when dealing with low-quality fingerprints with many cuts or scratches. A similar degradation of the minutiae-based algorithms is observed when small overlapping areas appear because of the quite narrow width of the sensors. Based on the detection of minutiae, Scale Invariant Feature Transformation (SIFT) descriptors are employed to fulfill verification tasks in the above difficult scenarios. However, the original SIFT algorithm is not suitable for fingerprints because of: (1) the similar patterns of parallel ridges; and (2) high computational resource consumption. To enhance the efficiency and effectiveness of the algorithm for fingerprint verification, we propose a SIFT-based Minutia Descriptor (SMD) that improves the SIFT algorithm through image processing, descriptor extraction and matching. A two-step fast matcher, named improved All Descriptor-Pair Matching (iADM), is also proposed to implement 1:N verification in real time. Fingerprint Identification using SMD and iADM (FISiA) achieved a significant improvement in accuracy in representative databases compared with the conventional minutiae-based method. The speed of FISiA also meets real-time requirements.

  17. An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM

    Science.gov (United States)

    Wang, Juan

    2018-03-01

    The iris image is easily polluted by noise and uneven illumination. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with a hybrid feature. 2D-Gabor filters and the gray-level co-occurrence matrix (GLCM) are employed to generate a multi-granularity hybrid feature vector, with the 2D-Gabor filters and GLCM features capturing low-to-intermediate-frequency and high-frequency texture information, respectively. Finally, an extreme learning machine is used for iris recognition. Experimental results show that the proposed ELM-based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower equal error rate (EER) of 0.12% while maintaining real-time performance, outperforming other mainstream iris recognition algorithms.
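The ELM classifier itself is simple to sketch: random input weights and biases, a tanh hidden layer, and output weights fitted in closed form by least squares. The Gabor/GLCM feature extraction is not reproduced; the toy task and all parameters below are illustrative.

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Minimal extreme learning machine: random hidden layer,
    output weights from the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary task standing in for iris features: sign of x1 + x2.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = float(((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5)).mean())
print(acc)
```

Because the output weights are obtained in one linear solve rather than by iterative backpropagation, training is very fast, which is what makes ELM attractive for real-time recognition.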

  18. Improvement in reliability and accuracy of heater tube eddy current testing by integration with an appropriate destructive test

    International Nuclear Information System (INIS)

    Giovanelli, F.; Gabiccini, S.; Tarli, R.; Motta, P.

    1988-01-01

    A specially developed destructive test is described showing how the reliability and accuracy of a non-destructive technique can be improved if it is suitably accompanied by an appropriate destructive test. The experiment was carried out on samples of AISI 304L tubes from the low-pressure (LP) preheaters of a BWR 900 MW nuclear plant. (author)

  19. A New Image Processing Procedure Integrating PCI-RPC and ArcGIS-Spline Tools to Improve the Orthorectification Accuracy of High-Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Hongying Zhang

    2016-10-01

    Full Text Available Given the low accuracy of traditional remote sensing image processing software when orthorectifying satellite images that cover mountainous areas, and in order to make full use of the mutually compatible and complementary characteristics of PCI with the RPC (Rational Polynomial Coefficients) model and ArcGIS with the Spline tool, this study puts forward a new operational and effective image processing procedure to improve the accuracy of image orthorectification. The new procedure first processes raw image data into an orthorectified image using PCI with the RPC model (PCI-RPC), and the orthorectified image is then further processed using ArcGIS with the Spline tool (ArcGIS-Spline). We used high-resolution CBERS-02C satellite images (HR1 and HR2 scenes with a pixel size of 2 m) acquired over Yangyuan County in Hebei Province, China, to test the procedure. When PCI-RPC and ArcGIS-Spline were used separately to process the HR1/HR2 raw images directly, the orthorectification accuracies (root mean square errors, RMSEs) for the HR1/HR2 images were 2.94 m/2.81 m and 4.65 m/4.41 m, respectively. However, when using our newly proposed procedure, the corresponding RMSEs were reduced to 1.10 m/1.07 m. The experimental results demonstrate that the new image processing procedure integrating the PCI-RPC and ArcGIS-Spline tools can significantly improve image orthorectification accuracy. Therefore, in practice, the new procedure has the potential to use existing software products to easily improve image orthorectification accuracy.
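The RMSE figures quoted in this record are root-mean-square errors over check points; for planimetric coordinates the computation is as follows (the coordinates below are made up for illustration).

```python
import math

def planimetric_rmse(predicted, reference):
    """Root-mean-square error of check-point coordinates (metres):
    the square root of the mean squared planimetric distance
    between predicted and reference positions."""
    sq = [(px - rx) ** 2 + (py - ry) ** 2
          for (px, py), (rx, ry) in zip(predicted, reference)]
    return math.sqrt(sum(sq) / len(sq))

pred = [(103.0, 204.0), (50.0, 95.0)]   # orthorectified positions
ref  = [(100.0, 200.0), (50.0, 100.0)]  # surveyed check points
print(planimetric_rmse(pred, ref))
```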

  20. Segmentation of Handwritten Chinese Character Strings Based on improved Algorithm Liu

    Directory of Open Access Journals (Sweden)

    Zhihua Cai

    2014-09-01

    Full Text Available Algorithm Liu has attracted considerable attention because of its high accuracy in segmenting Japanese postal addresses, but its complexity and difficult implementation have hindered its popularization and application. In this paper, after an in-depth study of algorithm Liu, the author applies its principles to handwritten Chinese character segmentation according to the characteristics of handwritten Chinese characters. The author also puts forward judgment criteria for segmentation-block classification and for the adhering (touching) modes of handwritten Chinese characters. In the segmentation process, a text image is treated as a sequence of connected components (CCs), each made up of several horizontal runs of black pixels. The author determines whether these parts should be merged into a segment by analyzing the connected components, and then splits touching characters based on an analysis of their outline edges, finally cutting the text image into character segments. Experimental results show that the improved algorithm Liu obtains high segmentation accuracy and produces satisfactory segmentation results.
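The connected-component analysis underlying such segmentation can be sketched with a standard flood fill over a binary image; the merging and splitting rules of the improved algorithm Liu are not reproduced here.

```python
from collections import deque

def connected_components(grid):
    """4-connected components of black pixels (1s) in a binary image,
    found by breadth-first flood fill. Each component is returned as
    a list of (row, col) pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

# Tiny binary image with two separate strokes.
img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
comps = connected_components(img)
print(len(comps))
```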

  1. Machine-learning scoring functions to improve structure-based binding affinity prediction and virtual screening.

    Science.gov (United States)

    Ain, Qurrat Ul; Aleksandrova, Antoniya; Roessler, Florian D; Ballester, Pedro J

    2015-01-01

    Docking tools to predict whether and how a small molecule binds to a target can be applied if a structural model of that target is available. The reliability of docking depends, however, on the accuracy of the adopted scoring function (SF). Despite intense research over the years, improving the accuracy of SFs for structure-based binding affinity prediction or virtual screening has proven to be a challenging task for any class of method. New SFs based on modern machine-learning regression models, which do not impose a predetermined functional form and are thus able to effectively exploit much larger amounts of experimental data, have recently been introduced. These machine-learning SFs have been shown to outperform a wide range of classical SFs at both binding affinity prediction and virtual screening. The emerging picture from these studies is that the classical approach of using linear regression with a small number of expert-selected structural features can be strongly improved by a machine-learning approach based on nonlinear regression allied with comprehensive data-driven feature selection. Furthermore, the performance of classical SFs does not grow with larger training datasets, and hence this performance gap is expected to widen as more training data become available in the future. Other topics covered in this review include predicting the reliability of an SF on a particular target class, generating synthetic data to improve predictive performance, and modeling guidelines for SF development. WIREs Comput Mol Sci 2015, 5:405-424. doi: 10.1002/wcms.1225 For further resources related to this article, please visit the WIREs website.

  2. Accuracy of electrocardiogram reading by family practice residents.

    Science.gov (United States)

    Sur, D K; Kaye, L; Mikus, M; Goad, J; Morena, A

    2000-05-01

    This study evaluated the electrocardiogram (EKG) reading skills of family practice residents. A multicenter study was carried out to evaluate the accuracy of EKG reading in the family practice setting. Based on the frequency and potential for clinical significance, we chose 18 common findings on 10 EKGs for evaluation. The EKGs were then distributed to residents at six family practice residencies. Residents were given one point for the identification of each correct EKG finding and scored based on the number correct over a total of 18. Sixty-one residents (20 first year, 23 second year, and 18 third year) completed readings for 10 EKGs and were evaluated for their ability to identify 18 EKG findings. The median score out of 18 possible points for all first-, second-, and third-year residents was 12, 12, and 11.5, respectively. Twenty-one percent of residents did not correctly identify a tracing of an acute myocardial infarction. Data analysis showed no statistically significant difference among the three groups of residents. We evaluated the accuracy of EKG reading skills of family practice residents at each year of training. This study suggests that EKG reading skills do not improve during residency, and further study of curricular change to improve these skills should be considered.

  3. Improved artificial bee colony algorithm based gravity matching navigation method.

    Science.gov (United States)

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    Gravity matching navigation is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it a candidate for the gravity matching field. However, the search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism is presented that is based on the improved ABC algorithm and uses external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that its matching rate is high enough to obtain a precise matching position.
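The modified Hausdorff distance used for screening matching results (in the sense of Dubuisson and Jain: the larger of the two mean nearest-neighbour distances between point sets) can be sketched as:

```python
import math

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point sets: the mean
    nearest-neighbour distance is computed in each direction and the
    larger of the two is returned."""
    def directed(S, T):
        return sum(min(math.dist(p, q) for q in T) for p in S) / len(S)
    return max(directed(A, B), directed(B, A))

# Two tiny candidate tracks, purely for illustration.
A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 1.0), (1.0, 1.0)]
print(modified_hausdorff(A, B))
```

Unlike the classical Hausdorff distance, the mean (rather than the maximum) of the nearest-neighbour distances makes the score less sensitive to a single outlying point, which is why it is favoured for matching tasks.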

  4. The robustness and accuracy of in vivo linear wear measurements for knee prostheses based on model-based RSA.

    Science.gov (United States)

    van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L

    2011-10-13

    Accurate in vivo measurement methods for wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (the fixed-bearing Duracon knee and the fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Sensor-Based Inspection of the Formation Accuracy in Ultra-Precision Grinding (UPG) of Aspheric Surface Considering the Chatter Vibration

    Science.gov (United States)

    Lei, Yao; Bai, Yue; Xu, Zhijun

    2018-03-01

    This paper proposes an experimental approach for monitoring and inspection of the formation accuracy in ultra-precision grinding (UPG) with respect to the chatter vibration. Two factors related to the grinding progress, the grinding speed of grinding wheel and spindle, and the oil pressure of the hydrostatic bearing are taken into account to determining the accuracy. In the meantime, a mathematical model of the radius deviation caused by the micro vibration is also established and applied in the experiments. The results show that the accuracy is sensitive to the vibration and the forming accuracy is much improved with proper processing parameters. It is found that the accuracy of aspheric surface can be less than 4 μm when the grinding speed is 1400 r/min and the wheel speed is 100 r/min with the oil pressure being 1.1 MPa.

  7. Phonetic-Accuracy-Based Kalam Learning to Improve Arabic Speaking Ability

    Directory of Open Access Journals (Sweden)

    Kholisin Kholisin

    2016-11-01

    Full Text Available This study aims to develop a teaching and learning model for kalam (Arabic conversation) based on phonetic accuracy to improve students' ability to speak Arabic. The research was conducted through questionnaires, interviews, error analysis, and content analysis. The results are as follows. There are two forms of phonetic error in students' pronunciation, namely segmental and supra-segmental errors. Most teachers and students paid little attention to phonetic errors in students' speech; however, most of them consider it important to place special emphasis on the phonetic element in learning kalam. The kalam teaching materials used so far do not specifically emphasize phonetic accuracy, and teachers and students suggested that the kalam textbook be supplemented with dedicated phonetic-accuracy training introduced early in the course. Based on these results, a teaching and learning model for kalam that incorporates phonetic accuracy needs to be designed.

  8. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    Science.gov (United States)

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
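The linear statistical model of the range error can be sketched as an ordinary least-squares fit of error against measured distance, which is then subtracted from new measurements. The coefficients and data below are illustrative, not the paper's measured values.

```python
def fit_linear_error(measured, true):
    """Ordinary least-squares fit of err = a*d + b, where d is the
    measured UWB range and err = measured - true."""
    n = len(measured)
    errs = [m - t for m, t in zip(measured, true)]
    sx, sy = sum(measured), sum(errs)
    sxx = sum(m * m for m in measured)
    sxy = sum(m * e for m, e in zip(measured, errs))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct(d, a, b):
    """Subtract the modelled error from a raw range estimate."""
    return d - (a * d + b)

# Synthetic calibration data: 5% scale error plus a 10 cm offset.
true_d = [1.0, 2.0, 3.0, 4.0]
meas_d = [t + 0.05 * t + 0.1 for t in true_d]
a, b = fit_linear_error(meas_d, true_d)
print(correct(meas_d[0], a, b))
```

Because the synthetic error is exactly linear in the measured range, the fit recovers it and the corrected distances match the true ones, mirroring how the statistical model reduces localization error in the paper's experiments.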

  9. Research on Hotspot Discovery in Internet Public Opinions Based on Improved K-Means

    Directory of Open Access Journals (Sweden)

    Gensheng Wang

    2013-01-01

    Full Text Available How to discover hotspots in Internet public opinion effectively is an active research area, since hotspot discovery plays a key role in helping governments and corporations find useful information in the mass of Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, to select and represent the features of website texts, and to define the similarity between two website texts. Second, the clustering principle and the method for selecting the initial cluster centers are analyzed and improved in order to overcome the limitations of the original K-means algorithm. Finally, the experimental results verify that the improved algorithm can increase the clustering stability and classification accuracy of hotspot discovery in Internet public opinion when used in practice.
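A K-means variant with a deterministic initialisation (the farthest-point heuristic, one simple remedy for the sensitivity to random initial centers that such improvements target) can be sketched as follows; the 2-D points stand in for text feature vectors.

```python
import math

def kmeans(points, k, iters=20):
    """K-means with farthest-point initialisation: start from the
    first point, repeatedly add the point farthest from all chosen
    centres, then alternate assignment and mean-update steps."""
    centres = [points[0]]
    while len(centres) < k:
        centres.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centres)))
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, centres[i]))].append(p)
        # Update step: move each centre to its cluster mean.
        centres = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
centres, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))
```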

  10. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
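The selection-and-blending rule can be sketched for one element with two overlapping submodels: outside the overlap the appropriate submodel is used directly, and inside it the two predictions are linearly weighted by position. The stand-in models and ranges below are illustrative, not the actual PLS submodels.

```python
def blend(full_pred, low_model, high_model, overlap):
    """Choose between two submodels using the full-model estimate.
    Inside the overlap region [lo, hi] the two submodel predictions
    are combined with a simple linear weighted sum."""
    lo, hi = overlap
    if full_pred <= lo:
        return low_model(full_pred)
    if full_pred >= hi:
        return high_model(full_pred)
    w = (full_pred - lo) / (hi - lo)   # 0 at lo -> low model, 1 at hi -> high model
    return (1 - w) * low_model(full_pred) + w * high_model(full_pred)

# Stand-in submodels differing by a constant offset each.
low = lambda x: x - 1.0
high = lambda x: x + 1.0
print(blend(40.0, low, high, (30.0, 70.0)))
```

The linear weight makes the blended prediction vary continuously across the overlap, avoiding a jump at the hand-off between submodels.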

  11. Forecasting space weather: Can new econometric methods improve accuracy?

    Science.gov (United States)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.

  12. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive character of computed tomography (CT) is attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology; among the many contributing factors, the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained with a simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable. (paper)
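The cost function of such an entropy-minimisation scheme can be illustrated directly: a well-corrected, homogeneous volume occupies few grey levels and so has low histogram entropy, while a beam-hardening "cupping" profile spreads the grey values and raises it. The synthetic profiles below are illustrative only; the paper's exponential correction model and penalty term are not reproduced.

```python
import math
from collections import Counter

def gray_entropy(values, bins=32):
    """Shannon entropy of the grey-level histogram of a volume.
    Correction parameters in an entropy-minimisation BH scheme are
    chosen to make this small."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0    # guard against a flat image
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum(c / n * math.log(c / n) for c in counts.values())

uniform = [1.0] * 100                                        # ideal corrected slice
cupped = [0.5 + 0.5 * abs(i - 50) / 50 for i in range(100)]  # BH cupping profile
print(gray_entropy(uniform), gray_entropy(cupped))
```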

  13. Improved Prediction of Blood-Brain Barrier Permeability Through Machine Learning with Combined Use of Molecular Property-Based Descriptors and Fingerprints.

    Science.gov (United States)

    Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo

    2018-03-21

    Blood-brain barrier (BBB) permeability determines whether a compound can effectively enter the brain. It is an essential property that must be accounted for in drug discovery when the target is in the brain. Several computational methods have been used to predict BBB permeability. In particular, the support vector machine (SVM), a kernel-based machine learning method, has been widely used in this field. For SVM training and prediction, compounds are characterized by molecular descriptors. Some SVM models were based on molecular property-based descriptors (including 1D, 2D, and 3D descriptors) or fragment-based descriptors (known as the fingerprints of a molecule). The selection of descriptors is critical for the performance of an SVM model. In this study, we aimed to develop a generally applicable new SVM model by combining all of the features of the molecular property-based descriptors and fingerprints to improve the accuracy of BBB permeability prediction. The results indicate that our SVM model has improved accuracy compared to the currently available models of BBB permeability prediction.
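Combining the two descriptor families amounts to building one feature vector per compound: min-max-scaled property descriptors concatenated with binary fingerprint bits. The property names, dataset ranges, and bits below are illustrative assumptions, not the paper's descriptor set.

```python
def scale(value, lo, hi):
    """Min-max scale one property with its dataset-wide range."""
    return (value - lo) / (hi - lo)

def combined_vector(props, ranges, bits):
    """Concatenate scaled property descriptors with binary
    fingerprint bits into a single SVM input vector."""
    scaled = [scale(v, lo, hi) for v, (lo, hi) in zip(props, ranges)]
    return scaled + [float(b) for b in bits]

# Hypothetical MW, logP, TPSA values with assumed dataset ranges,
# plus four fingerprint bits.
v = combined_vector([300.0, 2.5, 60.0],
                    [(0, 600), (-5, 10), (0, 150)],
                    [1, 0, 1, 1])
print(v)
```

Scaling the continuous properties onto the same [0, 1] range as the fingerprint bits prevents large-magnitude descriptors from dominating the SVM kernel.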

  14. The use of imprecise processing to improve accuracy in weather and climate prediction

    Energy Technology Data Exchange (ETDEWEB)

    Düben, Peter D., E-mail: dueben@atm.ox.ac.uk [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom); McNamara, Hugh [University of Oxford, Mathematical Institute (United Kingdom); Palmer, T.N. [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom)

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce
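A bit-flip fault model of the kind used to emulate stochastic processors can be sketched by flipping one random low-order mantissa bit of a double; restricting flips to the low bits keeps each induced fault small, mimicking faults confined to the least significant part of a calculation. The bit range and fault site are illustrative assumptions.

```python
import random
import struct

def flip_random_bit(x, rng, max_bit=20):
    """Flip one random bit among the low max_bit mantissa bits of an
    IEEE-754 double, emulating a small hardware-induced fault."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << rng.randrange(max_bit)     # flip one low mantissa bit
    (faulty,) = struct.unpack("<d", struct.pack("<Q", bits))
    return faulty

rng = random.Random(0)
x = 1.0
y = flip_random_bit(x, rng)
print(x, y, abs(y - x))
```

Raising `max_bit` moves the faults toward more significant mantissa bits, which is one way to emulate different hardware fault rates and magnitudes.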

  16. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    Science.gov (United States)

    Kim, Mooeung; Chung, Hoeil

    2013-03-07

    The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples, collected under frozen conditions at low temperatures, improved the accuracy of determining the kinematic viscosity at 40 °C (KV@40). Collecting Raman spectra from samples cooled to around -160 °C provided the most accurate measurement of KV@40. The components of the LBO samples were mainly long-chain hydrocarbons whose molecular structures deform on freezing, and the different structural deformabilities of the components enhanced the spectral selectivity among the samples. To study the structural variation of the components as the sample temperature changes from cryogenic to ambient conditions, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin of the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 among LBO samples to be more accurately assessed.

  17. Accuracy of Ultrasound-Based (BAT) Prostate-Repositioning: A Three-Dimensional On-Line Fiducial-Based Assessment With Cone-Beam Computed Tomography

    International Nuclear Information System (INIS)

    Boda-Heggemann, Judit; Koehler, Frederick Marc; Kuepper, Beate; Wolff, Dirk; Wertz, Hansjoerg; Mai, Sabine; Hesser, Juergen; Lohr, Frank; Wenz, Frederik

    2008-01-01

    Purpose: To assess the accuracy of ultrasound-based repositioning (BAT) before prostate radiation with fiducial-based three-dimensional matching using cone-beam computed tomography (CBCT). Patients and Methods: Fifty-four positionings in 8 patients with (125)I seeds/intraprostatic calcifications as fiducials were evaluated. Patients were initially positioned according to skin marks and then according to bony structures based on CBCT. Prostate position correction was then performed with BAT. Residual error after repositioning based on skin marks, bony anatomy, and BAT was estimated by a second CBCT using user-independent automatic fiducial registration. Results: Overall mean (MV ± SD) residual error after BAT based on fiducial registration by CBCT was 0.7 ± 1.7 mm in x (group systematic error [M] = 0.5 mm; SD of systematic error [Σ] = 0.8 mm; SD of random error [σ] = 1.4 mm), 0.9 ± 3.3 mm in y (M = 0.5 mm, Σ = 2.2 mm, σ = 2.8 mm), and -1.7 ± 3.4 mm in z (M = -1.7 mm, Σ = 2.3 mm, σ = 3.0 mm). Residual error relative to positioning based on skin marks was 2.1 ± 4.6 mm in x (M = 2.6 mm, Σ = 3.3 mm, σ = 3.9 mm), -4.8 ± 8.5 mm in y (M = -4.4 mm, Σ = 3.7 mm, σ = 6.7 mm), and -5.2 ± 3.6 mm in z (M = -4.8 mm, Σ = 1.7 mm, σ = 3.5 mm); relative to positioning based on bony anatomy it was 0 ± 1.8 mm in x (M = 0.2 mm, Σ = 0.9 mm, σ = 1.1 mm), -3.5 ± 6.8 mm in y (M = -3.0 mm, Σ = 1.8 mm, σ = 3.7 mm), and -1.9 ± 5.2 mm in z (M = -2.0 mm, Σ = 1.3 mm, σ = 4.0 mm). Conclusions: BAT improved the daily repositioning accuracy over skin marks or even bony anatomy. The results obtained with BAT are within the precision of extracranial stereotactic procedures and represent values that can be achieved by several users with different education levels. If sonographic visibility is insufficient, CBCT or kV/MV portal imaging with implanted fiducials is recommended.
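The M/Σ/σ decomposition quoted in this record (group systematic error, SD of the systematic errors, SD of the random errors) can be computed from per-patient residual errors as follows; the error values below are made up for illustration.

```python
import math
import statistics

def error_components(per_patient_errors):
    """Population error decomposition for one axis:
    M     = mean of the per-patient mean errors (group systematic error),
    Sigma = sample SD of the per-patient means (systematic spread),
    sigma = root-mean-square of the within-patient sample SDs (random error)."""
    means = [statistics.mean(e) for e in per_patient_errors]
    M = statistics.mean(means)
    Sigma = statistics.stdev(means)
    sigma = math.sqrt(statistics.mean(
        [statistics.stdev(e) ** 2 for e in per_patient_errors]))
    return M, Sigma, sigma

# Illustrative residual errors (mm) for three patients, one axis.
data = [[0.4, 0.6, 0.5], [-0.2, 0.0, 0.1], [0.9, 1.1, 1.0]]
M, Sigma, sigma = error_components(data)
print(M, Sigma, sigma)
```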

  18. Improving Agricultural Water Resources Management Using Ground-based Infrared Thermometry

    Science.gov (United States)

    Taghvaeian, S.

    2014-12-01

    Irrigated agriculture is the largest user of freshwater resources in arid/semi-arid parts of the world. Meeting rapidly growing demands in food, feed, fiber, and fuel while minimizing environmental pollution under a changing climate requires significant improvements in agricultural water management and irrigation scheduling. Although recent advances in remote sensing techniques and hydrological modeling have provided valuable information on agricultural water resources and their management, real improvements will only occur if farmers, the decision makers on the ground, are provided with simple, affordable, and practical tools to schedule irrigation events. This presentation reviews efforts in developing methods based on ground-based infrared thermometry and thermography for day-to-day management of irrigation systems. The results of research studies conducted in Colorado and Oklahoma show that ground-based remote sensing methods can be used effectively in quantifying water stress and consequently triggering irrigation events. Crop water use estimates based on stress indices have also been shown to be in good agreement with estimates based on other methods (e.g. surface energy balance, root zone soil water balance, etc.). Major challenges toward the adoption of this approach by agricultural producers include the reduced accuracy under cloudy and humid conditions and its inability to forecast the irrigation date, which is critical knowledge since many irrigators need to decide about irrigations a few days in advance.
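
    The record does not name a specific index, but the empirical Crop Water Stress Index (CWSI) is the classic canopy-temperature stress index used to trigger irrigation. A minimal sketch; the baseline coefficients below are hypothetical, since real baselines are crop- and climate-specific:

```python
def cwsi(t_canopy, t_air, vpd, intercept, slope, max_offset):
    """Empirical CWSI: 0 = well-watered, 1 = fully stressed.
    Lower (non-stressed) baseline: Tc - Ta = intercept + slope * VPD.
    Upper (stressed) limit: Tc - Ta = max_offset."""
    dT = t_canopy - t_air
    dT_wet = intercept + slope * vpd   # expected Tc - Ta for a well-watered crop
    dT_dry = max_offset
    return (dT - dT_wet) / (dT_dry - dT_wet)

# Hypothetical baseline: intercept 2.0 degC, slope -1.8 degC/kPa, dry limit 5.0 degC
idx = cwsi(t_canopy=30.0, t_air=28.0, vpd=2.5, intercept=2.0, slope=-1.8, max_offset=5.0)
```

An irrigation event would then be triggered when the index exceeds a chosen threshold.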

  19. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    Science.gov (United States)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to mimic samples of three different ages, to be run as internal

  20. Improvement in Interobserver Accuracy in Delineation of the Lumpectomy Cavity Using Fiducial Markers

    International Nuclear Information System (INIS)

    Shaikh, Talha; Chen Ting; Khan, Atif; Yue, Ning J.; Kearney, Thomas; Cohler, Alan; Haffty, Bruce G.; Goyal, Sharad

    2010-01-01

    Purpose: To determine whether the presence of gold fiducial markers would improve the inter- and intraphysician accuracy in the delineation of the surgical cavity compared with a matched group of patients who did not receive gold fiducial markers in the setting of accelerated partial-breast irradiation (APBI). Methods and Materials: Planning CT images of 22 lumpectomy cavities were reviewed in a cohort of 22 patients; 11 patients received four to six gold fiducial markers placed at the time of surgery. Three physicians categorized the seroma cavity according to cavity visualization score criteria and delineated each of the 22 seroma cavities and the clinical target volume. Distance between centers of mass, percentage overlap, and average surface distance for all patients were assessed. Results: The mean seroma volume was 36.9 cm3 and 34.2 cm3 for fiducial patients and non-fiducial patients, respectively (p = ns). Fiducial markers improved the mean cavity visualization score, to 3.6 ± 1.0 from 2.5 ± 1.3 (p < 0.05). The mean distance between centers of mass, average surface distance, and percentage overlap for the seroma and clinical target volume were significantly improved in the fiducial marker patients as compared with the non-fiducial marker patients (p < 0.001). Conclusions: The placement of gold fiducial markers at the time of lumpectomy improves interphysician identification and delineation of the seroma cavity and clinical target volume. This has implications in radiotherapy treatment planning for accelerated partial-breast irradiation and for boost after whole-breast irradiation.
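
    Two of the agreement metrics used here, distance between centers of mass and percentage overlap, can be sketched on voxelized contours. A minimal illustration (overlap is computed Jaccard-style, intersection over union; published studies vary in the exact definition, and the toy voxel sets are not the study's data):

```python
def center_of_mass(voxels):
    """Centroid of a set of (x, y, z) voxel coordinates."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))

def com_distance(a, b):
    """Euclidean distance between the centers of mass of two contours."""
    ca, cb = center_of_mass(a), center_of_mass(b)
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

def percent_overlap(a, b):
    """Overlap of two voxelized contours as |A intersect B| / |A union B| * 100."""
    sa, sb = set(a), set(b)
    return 100.0 * len(sa & sb) / len(sa | sb)

# Two toy 'cavity' delineations on a unit voxel grid
A = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
B = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (2, 1, 0)]
d = com_distance(A, B)
ov = percent_overlap(A, B)
```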

  1. IMPROVEMENT OF ACCURACY OF RADIATIVE HEAT TRANSFER DIFFERENTIAL APPROXIMATION METHOD FOR MULTI DIMENSIONAL SYSTEMS BY MEANS OF AUTO-ADAPTABLE BOUNDARY CONDITIONS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    Full Text Available Differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on auto-adaptable boundary conditions, is introduced in the paper. The efficiency of the named method is proved for test 2D-systems. Self-consistent auto-adaptable boundary conditions taking into consideration the nonorthogonal component of the radiation flux incident to the boundary are formulated. It is demonstrated that taking the nonorthogonal incident flux into consideration in multi-dimensional systems, such as furnaces, boilers, and combustion chambers, improves the accuracy of the radiant flux simulations, most markedly in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, the new self-consistent boundary conditions, and the “precise” discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased 1.5-2 times by using auto-adaptable boundary conditions. Radiation flux gaps in the corner points of non-symmetric systems are revealed by using auto-adaptable boundary conditions, which cannot be obtained by using the conventional boundary conditions.

  2. Molecular Isotopic Distribution Analysis (MIDAs) with adjustable mass accuracy.

    Science.gov (United States)

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.

  3. Accuracy Assessment and Analysis for GPT2

    Directory of Open Access Journals (Sweden)

    YAO Yibin

    2015-07-01

    Full Text Available GPT (Global Pressure and Temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. GPT has some weaknesses, which have been addressed by a new empirical model named GPT2, which not only improves the accuracy of temperature and pressure, but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters; however, no accuracy analysis of GPT2 had been made until now. In this paper high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of temperature, pressure and water vapor pressure given by GPT2. Testing results show that the mean Bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average Bias of pressure and water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure no more than 3 mb. Accuracy differs with latitude, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
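
    The Bias and RMS statistics quoted in this record are the standard paired-difference measures between model output and reference data. A minimal sketch (the temperature values are illustrative, not drawn from GPT2 or the reanalyses):

```python
from math import sqrt

def bias_and_rms(model, reference):
    """Mean bias (model - reference) and RMS error of paired series."""
    diffs = [m - r for m, r in zip(model, reference)]
    bias = sum(diffs) / len(diffs)
    rms = sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rms

# Toy example: model temperatures vs. reference values (deg C)
model = [14.2, 9.8, 21.5, 3.1]
ref = [15.0, 10.5, 20.0, 4.0]
bias, rms = bias_and_rms(model, ref)
```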

  4. Technical accuracy of a neuronavigation system measured with a high-precision mechanical micromanipulator.

    Science.gov (United States)

    Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R

    1997-12-01

    This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.
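
    The localization error in this study is the spatial distance between known and measured LED positions, and the quoted total combines several independent components. A minimal sketch; the root-sum-square combination shown is a standard assumption for independent error sources, not necessarily the authors' exact procedure, and the coordinates are illustrative:

```python
from math import sqrt

def localization_error(known, measured):
    """Euclidean distance between the known position and the position
    reported by the navigation system (same units for both)."""
    return sqrt(sum((k - m) ** 2 for k, m in zip(known, measured)))

def combined_error(components):
    """Root-sum-square combination of independent error components."""
    return sqrt(sum(c * c for c in components))

# A point near the reported optimal 1400 mm camera distance (mm)
err = localization_error((0.0, 0.0, 1400.0), (0.2, -0.1, 1400.4))
# Pointer error, tripod vibration, frame vibration, tip extrapolation (mm)
total = combined_error([0.55, 0.15, 0.08, 0.12])
```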

  5. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    Directory of Open Access Journals (Sweden)

    Yuwei Zhao

    2018-05-01

    Full Text Available Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm due to redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is closely tied to its number of undetermined parameters, further leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach to combine the decision value obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems.
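
    The core idea, combining per-channel decision values with an l1 penalty so that uninformative channels receive zero weight, can be sketched with generic l1-regularized least squares solved by proximal gradient descent (ISTA). This is a hedged illustration of the regularization mechanism, not the authors' exact objective or solver, and the toy data are invented:

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def lasso_ista(X, y, lam, step, iters=2000):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by proximal gradient (ISTA).
    X: list of rows (trials x channel decision values), y: labels."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    for _ in range(iters):
        # gradient of the smooth part: X^T (Xw - y)
        resid = [sum(xi * wi for xi, wi in zip(row, w)) - yi
                 for row, yi in zip(X, y)]
        grad = [sum(X[i][j] * resid[i] for i in range(len(X)))
                for j in range(n_feat)]
        w = [soft_threshold(wj - step * gj, step * lam)
             for wj, gj in zip(w, grad)]
    return w

# Toy data: 4 trials, 3 'channels'; channels 1 and 2 are uninformative noise
X = [[1.0, 0.2, 0.0], [0.9, -0.1, 0.1], [-1.1, 0.0, -0.1], [-0.8, 0.1, 0.05]]
y = [1.0, 1.0, -1.0, -1.0]
w = lasso_ista(X, y, lam=0.3, step=0.1)
```

The l1 penalty drives the weights of the noise channels to (near) zero, which is the sparsity-induced generalization benefit the abstract describes.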

  6. Accuracy and Efficiency of Recording Pediatric Early Warning Scores Using an Electronic Physiological Surveillance System Compared With Traditional Paper-Based Documentation.

    Science.gov (United States)

    Sefton, Gerri; Lane, Steven; Killen, Roger; Black, Stuart; Lyon, Max; Ampah, Pearl; Sproule, Cathryn; Loren-Gosling, Dominic; Richards, Caitlin; Spinty, Jean; Holloway, Colette; Davies, Coral; Wilson, April; Chean, Chung Shen; Carter, Bernie; Carrol, E D

    2017-05-01

    Pediatric Early Warning Scores are advocated to assist health professionals to identify early signs of serious illness or deterioration in hospitalized children. Scores are derived from the weighting applied to recorded vital signs and clinical observations reflecting deviation from a predetermined "norm." Higher aggregate scores trigger an escalation in care aimed at preventing critical deterioration. Process errors made while recording these data, including plotting or calculation errors, have the potential to impede the reliability of the score. To test this hypothesis, we conducted a controlled study of documentation using five clinical vignettes. We measured the accuracy of vital sign recording, score calculation, and time taken to complete documentation using a handheld electronic physiological surveillance system, VitalPAC Pediatric, compared with traditional paper-based charts. We explored the user acceptability of both methods using a Web-based survey. Twenty-three staff participated in the controlled study. The electronic physiological surveillance system improved the accuracy of vital sign recording, 98.5% versus 85.6%, P < .02, Pediatric Early Warning Score calculation, 94.6% versus 55.7%, P < .02, and saved time, 68 versus 98 seconds, compared with paper-based documentation, P < .002. Twenty-nine staff completed the Web-based survey. They perceived that the electronic physiological surveillance system offered safety benefits by reducing human error while providing instant visibility of recorded data to the entire clinical team.
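
    The scoring step described above, weights applied to vital-sign bands and summed into an aggregate that triggers escalation, can be sketched as follows. The bands, weights, and vital signs below are entirely hypothetical: real Pediatric Early Warning charts are age-specific, clinically validated, and differ between institutions:

```python
def band_score(value, bands):
    """Score one vital sign: bands is a list of (low, high, points);
    the first band containing the value wins."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    raise ValueError("value outside all bands")

# Hypothetical scoring bands for illustration only
HEART_RATE = [(60, 90, 2), (91, 130, 0), (131, 160, 1), (161, 220, 3)]
RESP_RATE = [(10, 19, 1), (20, 40, 0), (41, 60, 2), (61, 90, 3)]

def pews(heart_rate, resp_rate):
    """Aggregate score; higher values would trigger escalation of care."""
    return band_score(heart_rate, HEART_RATE) + band_score(resp_rate, RESP_RATE)

score = pews(heart_rate=150, resp_rate=45)
```

An electronic system removes the plotting and arithmetic steps of this calculation from the clinician, which is where the paper found most paper-chart errors arose.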

  7. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the popularity of applications using high spatial resolution remote sensing imagery, more and more studies have paid attention to object-oriented classification, covering image segmentation as well as automatic classification after segmentation. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation is used to identify image objects, and the values of the attributes most suitable for classification are calculated for each object. Then a certain number of image objects are selected as training data for an improved AdaTree algorithm to derive classification rules. Finally, the image objects can be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain classification rules. In an experiment with WorldView2 imagery, the AdaTree-based method showed an obvious improvement in accuracy and efficiency compared with an SVM-based method, with the kappa coefficient reaching 0.9242
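
    The kappa coefficient cited above is Cohen's kappa, chance-corrected agreement computed from a classification confusion matrix. A minimal sketch (the matrix below is illustrative, not the paper's accuracy assessment):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    n = sum(sum(row) for row in confusion)
    # observed agreement: fraction on the diagonal
    po = sum(confusion[i][i] for i in range(len(confusion))) / n
    # chance agreement: product of marginal proportions, summed over classes
    pe = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (po - pe) / (1 - pe)

# Toy 3-class confusion matrix
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
kappa = cohens_kappa(cm)
```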

  8. Improved microarray-based decision support with graph encoded interactome data.

    Directory of Open Access Journals (Sweden)

    Anneleen Daemen

    Full Text Available In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome on different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources on metabolic pathway information (KEGG), protein-protein interactions (OPHID) and miRNA-gene targeting (microRNA.org) outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.

  9. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Directory of Open Access Journals (Sweden)

    Francisco J Valverde-Albacete

    Full Text Available The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficient the transmission of information from the input to the output set of classes is. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.
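
    The ingredient behind both EMA and the NIT factor is the information actually transmitted through the classifier, estimated from its contingency matrix. The sketch below computes that mutual information and shows the accuracy paradox on a toy example; it is a hedged illustration of the underlying quantity, not the authors' exact EMA/NIT formulas:

```python
from math import log2

def mutual_information(cont):
    """Mutual information (bits) between true and predicted classes,
    estimated from a contingency matrix of counts."""
    n = sum(sum(row) for row in cont)
    row_p = [sum(row) / n for row in cont]
    col_p = [sum(cont[i][j] for i in range(len(cont))) / n
             for j in range(len(cont[0]))]
    mi = 0.0
    for i, row in enumerate(cont):
        for j, c in enumerate(row):
            if c:
                p = c / n
                mi += p * log2(p / (row_p[i] * col_p[j]))
    return mi

# Both classifiers below have 90% accuracy, but only one transfers information:
cheat = [[90, 0], [10, 0]]   # always predicts class 0 on a skewed input
honest = [[45, 5], [5, 45]]  # balanced input, genuine discrimination
```

A majority-class "cheat" reaches high accuracy with zero information transfer, which is exactly the behavior the NIT factor is designed to penalize.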

  10. On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.

    Science.gov (United States)

    Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis

    2016-07-01

    To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2% with a general TG-43 overestimation except for the spine. These differences are consistent resulting in statistically significant differences of dose volume histogram and radiobiology indices. Absolute differences of these indices are however small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear however to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    Science.gov (United States)

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
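
    The comparison of a combined-biomarker criterion (cAUC) against a single biomarker can be sketched with the empirical AUC, the probability that a diseased subject scores above a healthy one. The weighted-sum combiner below is a stand-in for the paper's posterior predictive probability, and the scores are invented:

```python
def auc(disease_scores, healthy_scores):
    """Empirical AUC via the Mann-Whitney statistic (ties count half)."""
    wins = 0.0
    for d in disease_scores:
        for h in healthy_scores:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(disease_scores) * len(healthy_scores))

# Toy subjects, each with two biomarker values
diseased = [(2.1, 1.4), (1.8, 2.2), (2.5, 0.9), (1.2, 1.9)]
healthy = [(1.0, 1.1), (1.6, 0.7), (0.8, 1.3), (1.4, 1.0)]

def combine(b):
    """Hypothetical combined score standing in for the predictive probability."""
    return 0.6 * b[0] + 0.4 * b[1]

cauc = auc([combine(b) for b in diseased], [combine(b) for b in healthy])
single = auc([b[0] for b in diseased], [b[0] for b in healthy])
```

On this toy data the combined score separates the groups better than biomarker 1 alone, which is the kind of gain the cAUC metric is meant to quantify.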

  12. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
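
    The registration step the paper improves, applying a rotation matrix R and translation vector t to map virtual points onto the patient, and scoring the result by overlay error, can be sketched as follows. The rotation angle, translation, and landmark coordinates are illustrative, not the paper's calibration:

```python
from math import cos, sin, sqrt

def apply_rigid(R, t, p):
    """Apply a 3x3 rotation matrix R and translation vector t to point p."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def overlay_error(points_virtual, points_real):
    """Mean distance between registered virtual points and their real counterparts."""
    dists = [sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
             for u, v in zip(points_virtual, points_real)]
    return sum(dists) / len(dists)

# Illustrative registration: ~10 degree rotation about z plus a small shift (mm)
th = 0.1745
R = [[cos(th), -sin(th), 0.0],
     [sin(th), cos(th), 0.0],
     [0.0, 0.0, 1.0]]
t = [0.5, -0.2, 0.1]
landmarks = [(10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0)]
registered = [apply_rigid(R, t, p) for p in landmarks]
measured = [(10.3, 1.5, 0.1), (-1.2, 9.6, 0.1), (0.5, -0.2, 10.1)]
err = overlay_error(registered, measured)
```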

  13. Open magnetic resonance imaging using titanium-zirconium needles: improved accuracy for interstitial brachytherapy implants?

    International Nuclear Information System (INIS)

    Popowski, Youri; Hiltbrand, Emile; Joliat, Dominique; Rouzaud, Michel

    2000-01-01

    Purpose: To evaluate the benefit of using an open magnetic resonance (MR) machine and new MR-compatible needles to improve the accuracy of brachytherapy implants in pelvic tumors. Methods and Materials: The open MR machine, foreseen for interventional procedures, allows direct visualization of the pelvic structures that are to be implanted. For that purpose, we have developed MR- and CT-compatible titanium-zirconium (Ti-Zr) brachytherapy needles that allow implantations to be carried out under the magnetic field. In order to test the technical feasibility of this new approach, stainless steel (SS) and Ti-Zr needles were first compared in a tissue-equivalent phantom. In a second step, two patients implanted with Ti-Zr needles in the brachytherapy operating room were scanned in the open MR machine. In a third phase, four patients were implanted directly under open MR control. Results: The artifacts induced by both materials were significantly different, strongly favoring the Ti-Zr needles. The implantation in both first patients confirmed the excellent quality of the pictures obtained with the needles in vivo and showed suboptimal implant geometry in both patients. In the next 4 patients, the tumor could be punctured with excellent accuracy, and the adjacent structures could be easily avoided. Conclusion: We conclude that open MR using MR-compatible needles is a very promising tool in brachytherapy, especially for pelvic tumors

  14. Accuracy in Optical Information Processing

    Science.gov (United States)

    Timucin, Dogan Aslan

    Low computational accuracy is an important obstacle for optical processors which blocks their way to becoming a practical reality and a serious challenger for classical computing paradigms. This research presents a comprehensive solution approach to the problem of accuracy enhancement in discrete analog optical information processing systems. Statistical analysis of a generic three-plane optical processor is carried out first, taking into account the effects of diffraction, interchannel crosstalk, and background radiation. Noise sources included in the analysis are photon, excitation, and emission fluctuations in the source array, transmission and polarization fluctuations in the modulator, and photoelectron, gain, dark, shot, and thermal noise in the detector array. Means and mutual coherence and probability density functions are derived for both optical and electrical output signals. Next, statistical models for a number of popular optoelectronic devices are studied. Specific devices considered here are light-emitting and laser diode sources, an ideal noiseless modulator and a Gaussian random-amplitude-transmittance modulator, p-i-n and avalanche photodiode detectors followed by electronic postprocessing, and ideal free-space geometrical-optics propagation and single-lens imaging systems. Output signal statistics are determined for various interesting device combinations by inserting these models into the general formalism. Finally, based on these special-case output statistics, results on accuracy limitations and enhancement in optical processors are presented. Here, starting with the formulation of the accuracy enhancement problem as (1) an optimal detection problem and (2) as a parameter estimation problem, the potential accuracy improvements achievable via the classical multiple-hypothesis-testing and maximum likelihood and Bayesian parameter estimation methods are demonstrated. Merits of using proper normalizing transforms which can potentially stabilize

  15. Coupling Uncertainties with Accuracy Assessment in Object-Based Slum Detections, Case Study: Jakarta, Indonesia

    NARCIS (Netherlands)

    Pratomo, J.; Kuffer, M.; Martinez, Javier; Kohli, D.

    2017-01-01

    Object-Based Image Analysis (OBIA) has been successfully used to map slums. In general, the occurrence of uncertainties in producing geographic data is inevitable. However, most studies concentrated solely on assessing the classification accuracy and neglecting the inherent uncertainties. Our

  16. A review on the processing accuracy of two-photon polymerization

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2015-03-01

    Full Text Available Two-photon polymerization (TPP) is a powerful and potential technology to fabricate true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. And it has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photon crystals, put forward rigorous requirements on the processing accuracy of TPP, including the dimensional accuracy, shape accuracy and surface roughness and the processing accuracy influences their performance, even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including mechanism of TPP, experimental set-up for TPP and scaling laws of resolution of TPP. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize the methods reported lately to improve the processing accuracy from improving the resolution and changing spatial arrangement of voxels.

  17. A review on the processing accuracy of two-photon polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xiaoqin; Hou, Yihong [School of Mechanical Science and Engineering, Jilin University, Changchun, 130022 (China); Lin, Jieqiong, E-mail: linjieqiong@mail.ccut.edu.cn [School of Electromechanical Engineering, Changchun University of Technology, Changchun, 130012 (China)

    2015-03-15

    Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, the experimental set-up, and the scaling laws of its resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods for improving the processing accuracy, both by improving the resolution and by changing the spatial arrangement of voxels.

  18. Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.

    Science.gov (United States)

    Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo

    2014-01-01

    The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.

  19. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows quantifying the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid.

  20. Artificial Neural Network-Based Constitutive Relationship of Inconel 718 Superalloy Construction and Its Application in Accuracy Improvement of Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Junya Lv

    2017-01-01

    Full Text Available The application of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play critical roles in process design and optimization. In this investigation, the true stress-strain data of an Inconel 718 superalloy were obtained from a series of isothermal compression tests conducted over a wide temperature range of 1153–1353 K and strain rate range of 0.01–10 s−1 on a Gleeble 3500 testing machine (DSI, St. Paul, DE, USA). The constitutive relationship was then modeled by an optimally constructed and well-trained back-propagation artificial neural network (ANN). The evaluation of the ANN model revealed that it has admirable performance in characterizing and predicting the flow behaviors of the Inconel 718 superalloy. Consequently, the developed ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and to construct the continuous mapping relationship among temperature, strain rate, strain and stress. Finally, the constructed ANN was implemented in a finite element solver through the interface of the “URPFLO” subroutine to simulate the isothermal compression tests. The results show that integrating the finite element method with the ANN model can significantly improve the accuracy of numerical simulations of hot forming processes.
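
    The modeling idea in this record can be sketched with scikit-learn's MLPRegressor fitted to synthetic Arrhenius-style flow-stress data; the constants, the stress formula, and the network size below are illustrative assumptions, not the paper's fitted Inconel 718 values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 600

# Hypothetical hot-compression conditions spanning the ranges in the abstract
T = rng.uniform(1153, 1353, n)          # temperature [K]
rate = 10 ** rng.uniform(-2, 1, n)      # strain rate [1/s], 0.01-10
strain = rng.uniform(0.05, 0.8, n)

# Synthetic Arrhenius-style flow stress standing in for the measured data
stress = (60 + 120 * strain) * (1 + 0.08 * np.log(rate)) * np.exp(1500 / T - 1.2)

# Inputs: temperature, log strain rate, strain -> output: flow stress
X = np.column_stack([T, np.log(rate), strain])
Xs = StandardScaler().fit_transform(X)

train, test = np.arange(n) < 450, np.arange(n) >= 450
ann = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(Xs[train], stress[train])

r2 = r2_score(stress[test], ann.predict(Xs[test]))
print(round(r2, 3))  # held-out fit quality of the surrogate constitutive model
```

The trained network can then be sampled on a dense (T, rate, strain) grid to feed a finite element solver, which is the role the "URPFLO" interface plays in the paper.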

  1. Diagnostic accuracy of a clinical diagnosis of idiopathic pulmonary fibrosis

    DEFF Research Database (Denmark)

    Walsh, Simon L. F.; Maher, Toby M.; Kolb, Martin

    2017-01-01

    -index. A total of 404 physicians completed the study. Agreement for IPF diagnosis was higher among expert physicians (κw=0.65, IQR 0.53-0.72, p…) … meetings (κw=0.54, IQR 0.45-0.64, p<0.0001). The prognostic accuracy of academic physicians with >20 years of experience (C-index=0.72, IQR 0.0-0.73, p=0.229) and of non-university hospital physicians with more than 20 years of experience attending weekly MDT meetings (C-index=0.72, IQR 0.70-0.72, p=0.052) did not differ significantly (p=0.229 and p=0.052, respectively) from that of the expert panel (C-index=0.74, IQR 0.72-0.75). Experienced respiratory physicians at university-based institutions diagnose IPF with prognostic accuracy similar to that of IPF experts. Regular MDT meeting attendance improves the prognostic accuracy of experienced non-university practitioners…

  2. A comparative study of breast cancer diagnosis based on neural network ensemble via improved training algorithms.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2015-08-01

    Breast cancer is one of the most common types of cancer in women all over the world. Early diagnosis of this kind of cancer can significantly increase the chances of long-term survival. Since diagnosis of breast cancer is a complex problem, neural network (NN) approaches have been used as a promising solution. Considering the low speed of the back-propagation (BP) algorithm in training a feed-forward NN, we consider a number of improved NN training algorithms for the Wisconsin breast cancer dataset: BP with momentum, BP with adaptive learning rate, BP with adaptive learning rate and momentum, the Polak-Ribière conjugate gradient algorithm (CGA), Fletcher-Reeves CGA, Powell-Beale CGA, scaled CGA, resilient BP (RBP), one-step secant and quasi-Newton methods. An NN ensemble, which is a learning paradigm for combining a number of NN outputs, is used to improve the accuracy of the classification task. Results demonstrate that NN ensemble-based classification methods perform better than single-NN-based algorithms. The highest overall average accuracy is 97.68%, obtained by an NN ensemble trained by RBP with the 50%-50% training-test evaluation method.
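
    The ensemble strategy described above can be sketched with scikit-learn on the same Wisconsin dataset. scikit-learn offers no resilient-backprop trainer, so its default Adam optimizer stands in, and the 50%-50% split follows the evaluation method mentioned in the abstract; the network sizes are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# 50%-50% train/test split, as in the evaluation method described above
X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                      random_state=0, stratify=y)
scaler = StandardScaler().fit(Xtr)
Xtr, Xte = scaler.transform(Xtr), scaler.transform(Xte)

# Ensemble of small feed-forward NNs differing only in random initialization
members = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                         random_state=seed).fit(Xtr, ytr)
           for seed in range(5)]

# Combine member outputs by majority vote
votes = np.stack([m.predict(Xte) for m in members])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)

single_acc = members[0].score(Xte, yte)
ensemble_acc = np.mean(ensemble_pred == yte)
print(round(single_acc, 4), round(ensemble_acc, 4))
```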

  3. Research on hotspot discovery in internet public opinions based on improved K-means.

    Science.gov (United States)

    Wang, Gensheng

    2013-01-01

    How to discover hotspots in Internet public opinion effectively is an active research field, playing a key role in helping governments and corporations find useful information in massive Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, to select and express their characteristics, and to define the similarity between two website texts. Second, the clustering principle and the method of selecting initial cluster centers are analyzed and improved in order to overcome the limitations of the original K-means algorithm. Finally, experimental results verify that the improved algorithm improves the clustering stability and classification accuracy of hotspot discovery in Internet public opinion when used in practice.
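
    One plausible reading of the improved initial-center selection can be sketched as max-min (farthest-point) seeding over TF-IDF vectors; the toy texts and the exact seeding rule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for preprocessed website texts on two topics
texts = [
    "flood relief donation city river flood",
    "river flood rescue volunteers city",
    "flood warning river level city rain",
    "football cup final goal match",
    "match goal striker football season",
    "football season transfer match club",
]

X = TfidfVectorizer().fit_transform(texts).toarray()

# Max-min seeding: pick centers that are far apart instead of at random,
# to stabilize clustering against bad initializations.
def maxmin_centers(X, k):
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

km = KMeans(n_clusters=2, init=maxmin_centers(X, 2), n_init=1).fit(X)
print(km.labels_)  # the two topics fall into separate clusters
```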

  4. Research on Hotspot Discovery in Internet Public Opinions Based on Improved K-Means

    Science.gov (United States)

    2013-01-01

    How to discover hotspots in Internet public opinion effectively is an active research field, playing a key role in helping governments and corporations find useful information in massive Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, to select and express their characteristics, and to define the similarity between two website texts. Second, the clustering principle and the method of selecting initial cluster centers are analyzed and improved in order to overcome the limitations of the original K-means algorithm. Finally, experimental results verify that the improved algorithm improves the clustering stability and classification accuracy of hotspot discovery in Internet public opinion when used in practice. PMID:24106496

  5. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Directory of Open Access Journals (Sweden)

    Alexander Domnich

    Full Text Available Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-squares estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > 0.6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (0.978 vs. 0.929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, the peak timing predicted by the query-based models coincided with the observed timing. The accuracy of web query-based models in predicting ILI morbidity rates can differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.
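
    The hold-out validation of a query-based linear model can be sketched with synthetic data; plain OLS via numpy stands in for the generalized least-squares estimation used in the study, and the weekly series below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly series standing in for ILI incidence and a correlated query volume
weeks = np.arange(52)
ili = 50 + 40 * np.exp(-0.5 * ((weeks - 26) / 5.0) ** 2) + rng.normal(0, 2, 52)
query = 0.8 * ili + rng.normal(0, 3, 52)   # query volume tracks morbidity with noise

# Hold-out validation: fit on even weeks, predict the held-out odd weeks
train, hold = weeks % 2 == 0, weeks % 2 == 1
A = np.column_stack([np.ones(train.sum()), query[train]])
beta, *_ = np.linalg.lstsq(A, ili[train], rcond=None)

pred = beta[0] + beta[1] * query[hold]
r = np.corrcoef(pred, ili[hold])[0, 1]
print(round(r, 3))  # hold-out correlation between predicted and observed ILI
```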

  6. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    Full Text Available A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of a classifier ensemble and to assist in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should not only be accurate, but each member should also differ from the others. In fact, accuracy and diversity are mutually constraining factors; an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment of an ensemble is performed such that the final score is the harmonic mean of accuracy and diversity, with two weight parameters used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
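
    The scoring idea, a weighted harmonic mean of ensemble accuracy and diversity, can be sketched as follows. Pairwise disagreement is used as the diversity measure here, which is an assumption: the paper's exact diversity definition may differ.

```python
import numpy as np
from itertools import combinations

# Toy ensemble: per-classifier predictions on 8 samples with known labels
y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
preds = np.array([
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1, 1, 0, 1],
])

# Ensemble accuracy via majority vote
vote = (preds.mean(axis=0) > 0.5).astype(int)
accuracy = np.mean(vote == y)

# Diversity as mean pairwise disagreement between members
pairs = list(combinations(range(len(preds)), 2))
diversity = np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])

# WAD: weighted harmonic mean of accuracy and diversity
def wad(acc, div, w_acc=0.6, w_div=0.4):
    return (w_acc + w_div) / (w_acc / acc + w_div / div)

print(accuracy, round(diversity, 3), round(wad(accuracy, diversity), 3))
```

Raising `w_acc` rewards accurate-but-similar ensembles; raising `w_div` rewards disagreement, which is the trade-off the measure is designed to balance.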

  7. Aircraft Segmentation in SAR Images Based on Improved Active Shape Model

    Science.gov (United States)

    Zhang, X.; Xiong, B.; Kuang, G.

    2018-04-01

    In SAR image interpretation, aircraft are important targets that attract much attention. However, it is far from easy to segment an aircraft from the background completely and precisely in SAR images. Because of their complex structure, different kinds of electromagnetic scattering take place on aircraft surfaces. As a result, aircraft targets usually appear inhomogeneous and disconnected. It is a good idea to extract an aircraft target with the active shape model (ASM), since the incorporated geometric information constrains variations of the shape during contour evolution. However, the linear dimensionality reduction used in the classic ASM makes the model rigid, which causes much trouble when segmenting different types of aircraft. Aiming at this problem, an improved ASM based on ISOMAP is proposed in this paper. The ISOMAP algorithm is used to extract the shape information of the training set and make the model flexible enough to deal with different aircraft. Experiments based on real SAR data show that the proposed method achieves an obvious improvement in accuracy.

  8. Improving a Deep Learning based RGB-D Object Recognition Model by Ensemble Learning

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Nasrollahi, Kamal; Heder, Thomas

    2018-01-01

    Augmenting RGB images with depth information is a well-known method to significantly improve the recognition accuracy of object recognition models. Another method to improve the performance of visual recognition models is ensemble learning. However, this method has not been widely explored in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models, and show that this can be used to increase the recognition performance beyond existing limits. Experiments…

  9. How does language model size affect speech recognition accuracy for the Turkish language?

    Directory of Open Access Journals (Sweden)

    Behnam ASEFİSARAY

    2016-05-01

    Full Text Available In this paper we aimed at investigating the effect of Language Model (LM) size on Speech Recognition (SR) accuracy. We also provide details of our approach for obtaining the LM for Turkish. Since the LM is obtained by statistical processing of raw text, we expect that increasing the size of the data available for training the LM will improve SR accuracy. Since this study is based on recognition of Turkish, which is a highly agglutinative language, it is important to find out the appropriate size for the training data. The minimum required data size is expected to be much higher than the data needed to train a language model for a language with a low level of agglutination, such as English. In the experiments we also tried to adjust the Language Model Weight (LMW) and Active Token Count (ATC) parameters of the LM, as these are expected to be different for a highly agglutinative language. We showed that by increasing the training data size to an appropriate level, the recognition accuracy improved; on the other hand, changes to LMW and ATC did not have a positive effect on Turkish speech recognition accuracy.
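
    The core expectation, that more LM training text lowers perplexity on held-out word sequences, can be sketched with a Laplace-smoothed bigram model. The tiny Turkish-like corpus below is invented for illustration and is unrelated to the study's actual data.

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Count bigrams and unigrams from a token list."""
    return Counter(zip(tokens, tokens[1:])), Counter(tokens)

def perplexity(tokens, bigrams, unigrams, vocab_size):
    """Perplexity of a token sequence under Laplace-smoothed bigram probabilities."""
    logp, n = 0.0, 0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        logp += math.log(p)
        n += 1
    return math.exp(-logp / n)

sentence = "bugün hava çok güzel".split()          # held-out test phrase
corpus = ("bugün hava çok güzel . yarın hava güzel . "
          "bugün yağmur var . hava bugün çok soğuk . ").split()

vocab = set(corpus) | set(sentence)
small = corpus[:8]            # small training corpus
large = corpus * 10           # more training data, same distribution

results = {}
for name, data in [("small", small), ("large", large)]:
    bg, ug = train_bigram(data)
    results[name] = perplexity(sentence, bg, ug, len(vocab))
print(results)  # perplexity drops as the training corpus grows
```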

  10. A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2011-12-01

    Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generate orthophoto fragments. An offline interpolation algorithm is used to process the location information: it processes the dead-reckoning and EKF location information and uses the result to transform the fragments into the global coordinate system. Finally, it uses the wavelet transform to divide the image into two frequency bands and a weighted median algorithm to process them separately. Experimental results show that a map produced with this method has high accuracy.
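
    The inverse perspective transformation step can be sketched as applying a plane-to-plane homography to pixel coordinates; the matrix below is a hypothetical calibration, not one from the paper (in a real system it comes from camera calibration).

```python
import numpy as np

# Hypothetical 3x3 homography mapping image pixels to ground-plane coordinates
H = np.array([[0.02, 0.0,   -6.4],
              [0.0,  0.05, -12.0],
              [0.0,  0.004,  1.0]])

def inverse_perspective(pixels, H):
    """Map Nx2 pixel coordinates to ground-plane coordinates via homography H."""
    pts = np.column_stack([pixels, np.ones(len(pixels))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                  # dehomogenize

pixels = np.array([[320, 240], [320, 400]])
ground = inverse_perspective(pixels, H)
print(ground)  # pixels lower in the image land farther along the road plane
```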

  11. Positional Accuracy in Optical Trap-Assisted Nanolithography

    Science.gov (United States)

    Arnold, Craig B.; McLeod, Euan

    2009-03-01

    The ability to directly print patterns on size scales below 100 nm is important for many applications where the production or repair of high-resolution, high-density features is required. Laser-based direct-write methods have the benefit of quickly and easily being able to modify and create structures on existing devices, but feature sizes are conventionally limited by diffraction. In this presentation, we show how to overcome this limit with a new method of probe-based near-field nanopatterning in which we employ a CW laser to optically trap and manipulate dispersed microspheres against a substrate using a 2-d Bessel beam optical trap. A secondary, pulsed nanosecond laser at 355 nm is directed through the bead and used to modify the surface below the microsphere, taking advantage of the near-field enhancement to produce materials modification with feature sizes under 100 nm. Here, we analyze the 3-d positioning accuracy of the microsphere through analytic modeling as a function of experimental parameters. The model is verified in all directions for our experimental conditions and is used to predict the conditions required for improved positional accuracy.

  12. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.; Grigor'ev, E.P.

    2002-01-01

    The level density and radiative strength functions which accurately reproduce the experimental intensity of two-step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total γ-ray spectra from the (n,γ) reaction. In some cases, the analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of the calculation accuracy for the cascade γ-decay of heavy nuclei. (author)

  13. Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test

    International Nuclear Information System (INIS)

    Auria, F.D.; Oriolo, F.; Leonardi, M.; Paci, S.

    1995-01-01

    Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) n. 35, a blind exercise simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of applying this methodology to nineteen selected variables calculated by ten participants are summarized here, and a comparison (where possible) of the accuracy evaluated for the pre-test and post-test calculations of the same user is also presented. (author)

  14. Improved accuracy of component alignment with the implementation of image-free navigation in total knee arthroplasty.

    Science.gov (United States)

    Rosenberger, Ralf E; Hoser, Christian; Quirbach, Sebastian; Attal, Rene; Hennerbichler, Alfred; Fink, Christian

    2008-03-01

    Accuracy of implant positioning and reconstruction of the mechanical leg axis are major requirements for achieving good long-term results in total knee arthroplasty (TKA). The purpose of the present study was to determine whether image-free computer navigation technology has the potential to improve the accuracy of component alignment in the TKA cohorts of experienced surgeons immediately and consistently. One hundred patients with primary arthritis of the knee underwent unilateral total knee arthroplasty. A cohort of 50 TKAs implanted with conventional instrumentation was directly followed by a cohort of the very first 50 computer-assisted TKAs. All surgeries were performed by two senior surgeons. All patients received the Zimmer NexGen total knee prosthesis (Zimmer Inc., Warsaw, IN, USA). There was no variability regarding surgeons or surgical technique, except for the use of the navigation system (StealthStation Treon plus, Medtronic Inc., Minneapolis, MN, USA). Accuracy of implant positioning was measured on postoperative long-leg standing radiographs and standard lateral X-rays with regard to the valgus angle and the coronal and sagittal component angles. In addition, preoperative deformities of the mechanical leg axis, tourniquet time, age, and gender were correlated. Statistical analyses were performed using the SPSS 15.0 (SPSS Inc., Chicago, IL, USA) software package. Independent t-tests were used, with significance set at P < 0.05, to compare alignment between the two cohorts. To compare the rate of optimally implanted prostheses between the two groups we used the chi-squared test. The average postoperative radiological frontal mechanical alignment was 1.88 degrees of varus (range 6.1 degrees of valgus to 10.1 degrees of varus; SD 3.68 degrees) in the conventional cohort and 0.28 degrees of varus (range 3.7-6.0 degrees of varus; SD 1.97 degrees) in the navigated cohort.
Including all criteria for optimal implant alignment, 16 cases (32%) in the conventional cohort and 31

  15. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    Science.gov (United States)

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion, and it is flawed with respect to occlusion of moving targets; that is, it is unable to calculate their poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, official HTC base stations and independently developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm can achieve precise positioning when only a few sensors detect the signal.

  16. Data accuracy assessment using enterprise architecture

    Science.gov (United States)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  17. Controlled Substance Reconciliation Accuracy Improvement Using Near Real-Time Drug Transaction Capture from Automated Dispensing Cabinets.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry

    2016-06-01

    Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate subsequent improvement in reconciliation accuracy. ADC-controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient having care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is available to anesthesia providers over the hospital Intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case is still ongoing or had concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network. 
For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014

  18. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    Science.gov (United States)

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skill development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  19. Application of fast fourier transform method to evaluate the accuracy of sbloca data base

    International Nuclear Information System (INIS)

    D'Auria, F.; Galassi, G.M.; Leonardi, M.; Galetti, M.R.

    1997-01-01

    The purpose of this paper is to perform a quantitative accuracy evaluation of a small break LOCA database and thereby evaluate the accuracy of the RELAP5/MOD2 code, i.e. of the ensemble constituted by the code itself, the user, the nodalization and the selected code options, in predicting this kind of transient. To achieve this objective, qualitative accuracy evaluation results from several tests performed in four facilities (LOBI, SPES, BETHSY and LSTF) are used. The quantitative evaluation is achieved by adopting a method developed at the University of Pisa which can quantify the errors in code predictions with respect to the measured experimental signal using the Fast Fourier Transform; this allows an integral representation of code discrepancies in the frequency domain. The RELAP5/MOD2 code has been extensively used at the University of Pisa, and the nodalizations of the four facilities have been qualified through application to several experiments performed in the same facilities. (author)
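
    The central quantity of such FFT-based accuracy methods, often called the average amplitude AA = Σ|FFT(error)| / Σ|FFT(experimental)|, can be sketched in a few lines; the signals below are invented stand-ins for a measured transient and two code predictions of differing quality.

```python
import numpy as np

def fft_average_amplitude(measured, calculated):
    """Average amplitude AA: summed spectral magnitudes of the error,
    normalized by those of the measured signal (0 = perfect prediction)."""
    err = calculated - measured
    return np.abs(np.fft.rfft(err)).sum() / np.abs(np.fft.rfft(measured)).sum()

t = np.linspace(0, 100, 512)
measured = 7.0 * np.exp(-t / 40) + 0.5 * np.sin(0.3 * t)   # e.g. a pressure trace
good = measured + 0.05 * np.sin(0.5 * t)                   # close prediction
poor = 0.8 * measured + 0.6 * np.sin(0.9 * t)              # larger discrepancy

print(round(fft_average_amplitude(measured, good), 3),
      round(fft_average_amplitude(measured, poor), 3))
```

A lower AA indicates a smaller integral discrepancy across all frequencies, which is what lets a single number rank different code predictions of the same transient.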

  20. Test expectancy affects metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to point students more directly to appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). Then, after reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  1. Improvement of Accuracy in Flow Immunosensor System by Introduction of Poly-2-[3-(methacryloylaminopropylammonio]ethyl 3-aminopropyl Phosphate

    Directory of Open Access Journals (Sweden)

    Yusuke Fuchiwaki

    2011-01-01

    Full Text Available In order to improve the accuracy of immunosensor systems, poly-2-[3-(methacryloylamino)propylammonio]ethyl 3-aminopropyl phosphate (poly-3MAm3AP), which includes both phosphorylcholine and amino groups, was synthesized and applied to the preparation of antibody-immobilized beads. Acting as an antibody-immobilizing material, poly-3MAm3AP is expected to significantly lower nonspecific adsorption due to the presence of the phosphorylcholine group and to recognize large numbers of analytes due to the increase in antibody-immobilizing sites. The elimination of nonspecific adsorption was compared between the formation of a blocking layer on antibody-immobilized beads and the introduction of a material to combine antibody with beads. Determination with specific and nonspecific antibodies was then investigated to estimate the signal-to-noise ratio. Signal intensities with superior signal-to-noise ratios were obtained when poly-3MAm3AP was introduced. This may be due to the increase in antibody-immobilizing sites and the extended space for antigen-antibody interaction resulting from the electrostatic repulsion of poly-3MAm3AP. Thus, the application of poly-3MAm3AP coatings to immunoassay beads was able to improve the accuracy of flow immunosensor systems.

  2. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    International Nuclear Information System (INIS)

    Deeley, M A; Chen, A; Cmelak, A; Malcolm, A; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Datteri, R D; Noble, J; Dawant, B M; Donnelly, E; Moretti, L

    2013-01-01

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. (paper)
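The Dice similarity coefficient used in the study above measures the volumetric overlap of two segmentations. A minimal sketch, with toy binary masks standing in for real expert contours:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means identical non-empty masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two raters' contours of the same structure on a toy 2-D slice.
rater_1 = np.zeros((10, 10), dtype=bool); rater_1[2:7, 2:7] = True
rater_2 = np.zeros((10, 10), dtype=bool); rater_2[3:8, 3:8] = True
print(round(dice_coefficient(rater_1, rater_2), 2))  # → 0.64
```

Inter-rater variance can then be summarised as the spread of pairwise Dice values across raters, before and after editing.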

  3. Evaluation of solitary pulmonary nodules by integrated PET/CT: improved accuracy by FDG uptake pattern and CT findings

    International Nuclear Information System (INIS)

    Joon Young Choi; Kyung Soo Lee; O Jung Kwon; Young Mog Shim; Kyung-Han Lee; Yong Choi; Yearn Seong Choe; Byung-Tae Kim

    2004-01-01

    Objective: FDG PET is useful to differentiate malignancy from benign lesions in the evaluation of solitary pulmonary nodules (SPNs). However, FDG PET has shown false positive results in benign inflammatory lesions such as tuberculosis and organizing pneumonia. Furthermore, malignant tumors such as adenocarcinoma (AC) with bronchioloalveolar carcinoma (BAC) type have lower FDG uptake than other cell types of non-small cell lung cancer. We investigated whether the FDG uptake pattern and the CT images acquired for attenuation correction could improve accuracy for evaluating SPNs over SUV alone in integrated PET/CT imaging using FDG. Methods: Forty patients (M:F = 23:17, mean age 58.2±9.4 yrs) with non-calcified SPNs (diameter on CT ≤ 30 mm, no significant mediastinal node enlargement, no atelectasis) were included. All subjects underwent integrated PET/CT imaging using FDG. One nuclear medicine physician and one chest radiologist interpreted the PET and non-contrast CT images for attenuation correction, respectively. On PET images, the maximum SUV of each SPN was acquired, and the FDG uptake pattern was categorized as diffusely increased or heterogeneously increased, with the upper threshold of the window setting adjusted to the maximum SUV of each nodule. A radiologist interpreted SPNs as benign or malignant based on CT images with lung and mediastinal window settings, blinded to PET findings. Results: On pathological exam, 30 SPNs were confirmed to be malignant (11 AC with non-BAC type, 8 AC with BAC type, 8 squamous cell carcinoma, 1 adenosquamous cell carcinoma, 1 neuroendocrine carcinoma, 1 large cell carcinoma), and 10 were benign (4 tuberculosis, 3 organizing pneumonia, 2 sclerosing pneumocytoma, 1 non-specific inflammation). All 5 nodules with max SUV > 7.0 except one with tuberculoma had malignancy. When only nodules with diffusely increased uptake were considered malignant in the indeterminate group with max SUV of 4.0 to 7.0, PET could diagnose 5 of 9 malignant nodules with one false positive nodule.
In 6 of

  4. The Analysis of Burrows Recognition Accuracy in XINJIANG'S Pasture Area Based on Uav Visible Images with Different Spatial Resolution

    Science.gov (United States)

    Sun, D.; Zheng, J. H.; Ma, T.; Chen, J. J.; Li, X.

    2018-04-01

    The rodent disaster is one of the main biological disasters affecting grassland in northern Xinjiang. Rodents' eating and digging behaviors destroy ground vegetation, which seriously affects the development of animal husbandry and grassland ecological security. UAV low-altitude remote sensing, as an emerging technique with high spatial resolution, can effectively recognize burrows. However, how to select the appropriate spatial resolution to monitor the damage caused by the rodent disaster is the first problem we need to pay attention to. The purpose of this study is to explore the optimal spatial scale for identification of the burrows by evaluating the impact of different spatial resolutions on burrow identification accuracy. In this study, we photographed burrows from different flight heights to obtain visible images of different spatial resolution. An object-oriented method was then used to identify the burrows, and we evaluated the accuracy of the classification. We found that the classification accuracy of burrows was high, reaching an average of more than 80 %. At an altitude of 24 m and a spatial resolution of 1 cm, the classification accuracy was the highest. We have created a unique and effective way to identify burrows using UAV visible images. We draw the following conclusion: the best spatial resolution for burrow recognition is 1 cm using the DJI PHANTOM-3 UAV, and an improvement of spatial resolution does not necessarily lead to an improvement of classification accuracy. This study lays the foundation for future research and can be extended to similar studies elsewhere.

  5. Merits of using color and shape differentiation to improve the speed and accuracy of drug strength identification on over-the-counter medicines by laypeople.

    Science.gov (United States)

    Hellier, Elizabeth; Tucker, Mike; Kenny, Natalie; Rowntree, Anna; Edworthy, Judy

    2010-09-01

    This study aimed to examine the utility of using color and shape to differentiate drug strength information on over-the-counter medicine packages. Medication errors are an important threat to patient safety, and confusions between drug strengths are a significant source of medication error. A visual search paradigm required laypeople to search for medicine packages of a particular strength from among distracter packages of different strengths, and measures of reaction time and error were recorded. Using color to differentiate drug strength information conferred an advantage on search times and accuracy. Shape differentiation did not improve search times and had only a weak effect on search accuracy. Using color to differentiate drug strength information improves drug strength identification performance. Color differentiation of drug strength information may be a useful way of reducing medication errors and improving patient safety.

  6. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Energy Technology Data Exchange (ETDEWEB)

    Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  7. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    Directory of Open Access Journals (Sweden)

    Deborah A. Kerr

    2015-05-01

    Full Text Available The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years who lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. Four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05, quick list; p < 0.01, forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy to recall food items declined with a second 24-h recall when repeated over two non-consecutive days.

  8. Improved binary dragonfly optimization algorithm and wavelet packet based non-linear features for infant cry classification.

    Science.gov (United States)

    Hariharan, M; Sindhu, R; Vijean, Vikneswaran; Yazid, Haniza; Nadarajaw, Thiyagar; Yaacob, Sazali; Polat, Kemal

    2018-03-01

    Infant cry signals carry several levels of information about the reason for crying (hunger, pain, sleepiness and discomfort) or the pathological status (asphyxia, deafness, jaundice, prematurity and autism, etc.) of an infant and are therefore suited for early diagnosis. In this work, a combination of wavelet packet based features and an Improved Binary Dragonfly Optimization based feature selection method was proposed to classify the different types of infant cry signals. Cry signals from 2 different databases were utilized. The first database contains 507 cry samples of normal (N), 340 cry samples of asphyxia (A), 879 cry samples of deaf (D), 350 cry samples of hungry (H) and 192 cry samples of pain (P). The second database contains 513 cry samples of jaundice (J), 531 samples of premature (Prem) and 45 samples of normal (N). Wavelet packet transform based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC) based cepstral features (56 features), and Mel-frequency Cepstral Coefficients (MFCCs, 16 features) were extracted. The combined feature set consists of 568 features. To overcome the curse of dimensionality, the improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient attributes or features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using all the features as well as the highly informative features. Several experiments of two-class and multi-class classification of cry signals were conducted. In binary or two-class experiments, maximum accuracies of 90.18% for H Vs P, 100% for A Vs N, 100% for D Vs N and 97.61% for J Vs Prem were achieved using the features selected (only 204 features out of 568) by IBDFO. For the classification of multiple cry signals (multi-class problem), the selected features could differentiate between three classes (N, A & D) with an accuracy of 100% and seven classes with an accuracy of 97.62%. The experimental
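The wavelet packet energy and entropy features described above can be sketched as follows. This is an illustrative hand-rolled Haar filter bank, not the authors' actual toolchain, and the two-tone signal is a stand-in for a real cry recording:

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: approximation and detail half-bands."""
    x = x[: len(x) // 2 * 2]
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet_packet_energies(signal, levels=3):
    """Relative energy of each terminal node of a Haar wavelet-packet tree
    (2**levels subbands; the energies sum to 1)."""
    nodes = [np.asarray(signal, float)]
    for _ in range(levels):
        nodes = [band for node in nodes for band in haar_split(node)]
    energies = np.array([np.sum(n ** 2) for n in nodes])
    return energies / energies.sum()

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy of a relative-energy distribution: a non-linear
    feature of the kind used for cry classification."""
    p = np.asarray(p, float) + eps
    return float(-np.sum(p * np.log(p)))

# Toy "cry" signal: two tones standing in for a real recording.
t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
energies = wavelet_packet_energies(signal, levels=3)
print(len(energies), round(shannon_entropy(energies), 2))
```

Stacking such subband energies and entropies across decomposition levels is one way to arrive at a large feature vector of the kind the paper then prunes with feature selection.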

  9. Improving Accuracy Estimation of Forest Aboveground Biomass Based on Incorporation of ALOS-2 PALSAR-2 and Sentinel-2A Imagery and Machine Learning: A Case Study of the Hyrcanian Forest Area (Iran

    Directory of Open Access Journals (Sweden)

    Sasan Vafaei

    2018-01-01

    Full Text Available The main objective of this research is to investigate the potential combination of Sentinel-2A and ALOS-2 PALSAR-2 (Advanced Land Observing Satellite-2 Phased Array type L-band Synthetic Aperture Radar-2) imagery for improving the accuracy of Aboveground Biomass (AGB) measurement. According to the current literature, this kind of investigation has rarely been conducted. The Hyrcanian forest area (Iran) is selected as the case study. For this purpose, a total of 149 sample plots for the study area were documented through fieldwork. Using the imagery, three datasets were generated: the Sentinel-2A dataset, the ALOS-2 PALSAR-2 dataset, and the combination of the two (Sentinel-ALOS). Because the accuracy of the AGB estimation is dependent on the method used, four machine learning techniques were selected and compared, namely Random Forests (RF), Support Vector Regression (SVR), Multi-Layer Perceptron Neural Networks (MLP Neural Nets), and Gaussian Processes (GP). The performance of these AGB models was assessed using the coefficient of determination (R2), the root-mean-square error (RMSE), and the mean absolute error (MAE). The results showed that the AGB models derived from the combination of the Sentinel-2A and the ALOS-2 PALSAR-2 data had the highest accuracy, followed by models using the Sentinel-2A dataset and the ALOS-2 PALSAR-2 dataset. Among the four machine learning models, the SVR model (R2 = 0.73, RMSE = 38.68, and MAE = 32.28) had the highest prediction accuracy, followed by the GP model (R2 = 0.69, RMSE = 40.11, and MAE = 33.69), the RF model (R2 = 0.62, RMSE = 43.13, and MAE = 35.83), and the MLP Neural Nets model (R2 = 0.44, RMSE = 64.33, and MAE = 53.74). Overall, the Sentinel-2A imagery provides a reasonable result while the ALOS-2 PALSAR-2 imagery provides a poor result of the forest AGB estimation. The combination of the Sentinel-2A imagery and the ALOS-2 PALSAR-2
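The three evaluation metrics above (R2, RMSE, MAE) are straightforward to compute directly. The sketch below compares a least-squares fit against a mean-only baseline on synthetic stand-in data, not the study's field plots:

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y, yhat):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - yhat)))

# Synthetic stand-in for plot-level AGB (t/ha) vs a single predictor.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 149)                  # e.g. a vegetation index
y = 50 + 200 * x + rng.normal(0, 20, 149)   # "observed" biomass

# Model A: least-squares fit; model B: mean-only baseline.
coef = np.polyfit(x, y, 1)
pred_a = np.polyval(coef, x)
pred_b = np.full_like(y, y.mean())

for name, pred in [("linear", pred_a), ("baseline", pred_b)]:
    print(name, round(r2(y, pred), 2), round(rmse(y, pred), 1), round(mae(y, pred), 1))
```

Note that the mean-only baseline has R2 = 0 by construction, which is why R2 alone can flatter weak models; reporting RMSE and MAE in physical units alongside it, as the paper does, is the safer practice.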

  10. Comparing the Accuracy of Copula-Based Multivariate Density Forecasts in Selected Regions of Support

    NARCIS (Netherlands)

    C.G.H. Diks (Cees); V. Panchenko (Valentyn); O. Sokolinskiy (Oleg); D.J.C. van Dijk (Dick)

    2013-01-01

    This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample)

  11. Comparing the accuracy of copula-based multivariate density forecasts in selected regions of support

    NARCIS (Netherlands)

    Diks, C.; Panchenko, V.; Sokolinskiy, O.; van Dijk, D.

    2013-01-01

    This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample) conditional

  12. Introducing radiology report checklists among residents: adherence rates when suggesting versus requiring their use and early experience in improving accuracy.

    Science.gov (United States)

    Powell, Daniel K; Lin, Eaton; Silberzweig, James E; Kagetsu, Nolan J

    2014-03-01

    To retrospectively compare resident adherence to checklist-style structured reporting for maxillofacial computed tomography (CT) from the emergency department (when required vs. suggested between two programs), to compare radiology resident reporting accuracy before and after introduction of the structured report, and to assess its ability to decrease the rate of undetected pathology. We introduced a reporting checklist for maxillofacial CT into our dictation software without specific training, requiring it at one program and suggesting it at another. We quantified usage among residents and compared reporting accuracy before and after its introduction by counting and categorizing faculty addenda. There was no significant change in resident accuracy in the first few months, with residents acting as their own controls (directly comparing performance with and without the checklist). Adherence to the checklist at program A (where it originated and was required) was 85% of reports, compared to 9% of reports at program B (where it was suggested). When using program B as a secondary control, there was no significant difference in resident accuracy with or without the checklist (comparing different residents using the checklist to those not using it). Our results suggest that there is no automatic value of checklists for improving radiology resident reporting accuracy. They also suggest the importance of focused training, checklist flexibility, and a period of adjustment to a new reporting style. Mandatory checklists were readily adopted by residents, but not when simply suggested. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  13. Influence of part orientation on the geometric accuracy in robot-based incremental sheet metal forming

    Science.gov (United States)

    Störkle, Denis Daniel; Seim, Patrick; Thyssen, Lars; Kuhlenkötter, Bernd

    2016-10-01

    This article describes new developments in an incremental, robot-based sheet metal forming process ('Roboforming') for the production of sheet metal components for small lot sizes and prototypes. The dieless kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to a cooperating robot system. Compared to other incremental sheet metal forming (ISF) machines, this system offers high geometrical form flexibility without the need of any part-dependent tools. The industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors present the influence of the part orientation and the forming sequence on the geometric accuracy. Their influence is illustrated with the help of various experimental results shown and interpreted within this article.

  14. Systematic review of proposed definitions of nocturnal polyuria and population-based evidence of their diagnostic accuracy.

    Science.gov (United States)

    Olesen, Tine Kold; Denys, Marie-Astrid; Vande Walle, Johan; Everaert, Karel

    2018-02-06

    Background: Evidence of diagnostic accuracy for proposed definitions of nocturnal polyuria is currently unclear. Purpose: Systematic review to determine population-based evidence of the diagnostic accuracy of proposed definitions of nocturnal polyuria based on data from frequency-volume charts. Methods: Seventeen pre-specified search terms identified 351 unique investigations published from 1990 to 2016 in BIOSIS, Embase, Embase Alerts, International Pharmaceutical Abstracts, Medline, and Cochrane. Thirteen original communications were included in this review based on pre-specified exclusion criteria. Data were extracted from each paper regarding subject age, sex, ethnicity, health status, sample size, data collection methods, and the diagnostic discrimination of proposed definitions, including sensitivity, specificity, and positive and negative predictive value. Results: The sample size of study cohorts and participant age, sex, ethnicity, and health status varied considerably across the 13 studies reporting on the diagnostic performance of seven different definitions of nocturnal polyuria using frequency-volume chart data from 4968 participants. Most study cohorts were small and mono-ethnic, including only Caucasian males aged 50 years or older with primary or secondary polyuria, compared to a control group of healthy men without nocturia in prospective or retrospective settings. Proposed definitions had poor discriminatory accuracy in evaluations based on data from subjects independent of the original study cohorts, with similar findings for the most widely evaluated definition, the one endorsed by the ICS. Conclusions: Diagnostic performance characteristics for proposed definitions of nocturnal polyuria show poor to modest discrimination and are not based on a sufficient level of evidence from representative, multi-ethnic population-based data from both females and males of all adult ages.
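The four discrimination measures extracted in the review above all derive from a 2x2 contingency table. A minimal sketch, with hypothetical counts (not data from any of the reviewed studies):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    true/false positives and negatives for one candidate definition."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of cases detected
        "specificity": tn / (tn + fp),  # proportion of non-cases excluded
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for one definition applied to a cohort of 200.
metrics = diagnostic_accuracy(tp=80, fp=30, fn=20, tn=70)
print({k: round(v, 2) for k, v in metrics.items()})
# → {'sensitivity': 0.8, 'specificity': 0.7, 'ppv': 0.73, 'npv': 0.78}
```

Note that PPV and NPV depend on the prevalence of nocturnal polyuria in the cohort, which is one reason discrimination estimated in a small, selected study cohort degrades when a definition is applied to an independent population.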

  15. Improved Reliability-Based Optimization with Support Vector Machines and Its Application in Aircraft Wing Design

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2015-01-01

    Full Text Available A new reliability-based design optimization (RBDO) method based on support vector machines (SVM) and the Most Probable Point (MPP) is proposed in this work. SVM is used to create a surrogate model of the limit-state function at the MPP with the gradient information in the reliability analysis. This guarantees that the surrogate model not only passes through the MPP but also is tangent to the limit-state function at the MPP. Then, importance sampling (IS) is used to calculate the probability of failure based on the surrogate model. This treatment significantly improves the accuracy of reliability analysis. For RBDO, the Sequential Optimization and Reliability Assessment (SORA) method is employed as well, which decouples deterministic optimization from the reliability analysis. The improved SVM-based reliability analysis is used to amend the error from the linear approximation of the limit-state function in SORA. A mathematical example and a simplified aircraft wing design demonstrate that the improved SVM-based reliability analysis is more accurate than FORM and needs fewer training points than Monte Carlo simulation, and that the proposed optimization strategy is efficient.
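The importance-sampling step of such a reliability analysis can be illustrated on a one-dimensional toy limit state where the MPP is known analytically; this sketch omits the SVM surrogate and the SORA loop, and the limit-state function is an assumption for illustration only:

```python
import numpy as np

def g(x):
    """Toy limit-state function: failure when g(x) < 0 (here x > 3)."""
    return 3.0 - x

rng = np.random.default_rng(1)
n = 20_000

# Crude Monte Carlo: failures are rare events, so the estimate is noisy.
x_mc = rng.standard_normal(n)
pf_mc = float(np.mean(g(x_mc) < 0))

# Importance sampling centred at the Most Probable Point (x* = 3):
# sample from N(3, 1) and reweight each sample by the ratio of the
# nominal density to the sampling density.
x_is = rng.standard_normal(n) + 3.0
weights = np.exp(-0.5 * x_is**2) / np.exp(-0.5 * (x_is - 3.0) ** 2)
pf_is = float(np.mean((g(x_is) < 0) * weights))

exact = 0.00135  # Phi(-3), for reference
print(round(pf_mc, 5), round(pf_is, 5), exact)
```

Shifting the sampling density to the MPP puts roughly half of the samples in the failure region, which is why the reweighted estimator converges with far fewer samples than crude Monte Carlo for small failure probabilities.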

  16. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming